Sammy Khalife: “Neural networks with linear threshold activations: structure and algorithms”

Posted August 19, 2022

When:
September 20, 2022 @ 12:00 pm – 1:15 pm

Tuesdays, 12pm-1:15pm

Held in person at Clark 110 & over Zoom

Check for event details: https://www.minds.jhu.edu/events/calendar/

 

“Neural networks with linear threshold activations: structure and algorithms”

Sammy Khalife, PhD

Post-Doctoral Fellow

Johns Hopkins University

 

Abstract: In this talk I will present new results on neural networks with linear threshold activation functions. The class of functions representable by such neural networks can be precisely described, and two hidden layers are necessary and sufficient to represent any function in this class. This is a surprising result in light of recent exact-representability investigations for neural networks using other popular activation functions, such as rectified linear units (ReLU). I will also discuss upper and lower bounds on the sizes of the neural networks required to represent any function in the class. We were also able to design an algorithm that solves the empirical risk minimization (ERM) problem to global optimality for these neural networks with a fixed architecture. The algorithm’s running time is polynomial in the size of the data sample when the input dimension and the size of the network architecture are treated as fixed constants. The algorithm is unique in that it works for any architecture with any number of layers, whereas previous polynomial-time globally optimal algorithms work only for restricted classes of architectures. Finally, I will present a new class of neural networks called shortcut linear threshold networks. To the best of our knowledge, this way of designing neural networks has not been explored before in the literature. We show that these neural networks have several desirable theoretical properties.
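For readers unfamiliar with the activation in question: a linear threshold (Heaviside step) unit outputs 1 when its affine input is positive and 0 otherwise. The sketch below is a hypothetical toy illustration, not code from the talk: a small network with two hidden threshold layers whose second layer acts as an AND gate, computing the indicator of the open interval (0, 1) on a scalar input.

```python
import numpy as np

def threshold(z):
    # Linear threshold (Heaviside step) activation: 1 where z > 0, else 0.
    return (np.asarray(z) > 0).astype(float)

def toy_threshold_net(x):
    # Hypothetical two-hidden-layer linear threshold network.
    # Hidden layer 1: two units detecting x > 0 and x < 1.
    h1 = threshold(np.array([x, 1.0 - x]))
    # Hidden layer 2: a single unit firing only when both h1 units are
    # active (an AND gate realized by thresholding the sum against 1.5).
    h2 = threshold(h1.sum() - 1.5)
    return float(h2)
```

For example, an input inside (0, 1) activates both first-layer units, so the second layer fires; any input outside the interval leaves at most one unit active and the output is 0.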

 

Biography: Dr. Khalife joined the Department of Applied Mathematics and Statistics at Johns Hopkins University in Fall 2021 as a postdoctoral fellow. His work concerns discrete optimization and theoretical deep learning, and he is interested in the formal expressivity and complexity of neural networks. Dr. Khalife holds an MSc from ENSTA Paristech with a specialization in Mathematical Optimization, completed jointly with Paris 1 Sorbonne, and an MSc from École Normale Supérieure Paris-Saclay in the Mathematics of Vision and Learning. He received his PhD in Computer Science from École Polytechnique in 2020.

 

Join Zoom Meeting

https://wse.zoom.us/j/98624413365

 

Meeting ID: 986 2441 3365

One tap mobile

+13017158592,,98624413365# US (Washington DC)

+16469313860,,98624413365# US

 

Dial by your location

+1 301 715 8592 US (Washington DC)

+1 646 931 3860 US

+1 309 205 3325 US

+1 312 626 6799 US (Chicago)

+1 646 558 8656 US (New York)

+1 669 900 6833 US (San Jose)

+1 719 359 4580 US

+1 253 215 8782 US (Tacoma)

+1 346 248 7799 US (Houston)

+1 386 347 5053 US

+1 564 217 2000 US

+1 669 444 9171 US

Meeting ID: 986 2441 3365

Find your local number: https://wse.zoom.us/u/asoOElnUp

 

Join by SIP

[email protected]

 

Join by H.323

162.255.37.11 (US West)

162.255.36.11 (US East)

115.114.131.7 (India Mumbai)

115.114.115.7 (India Hyderabad)

213.19.144.110 (Amsterdam Netherlands)

213.244.140.110 (Germany)

103.122.166.55 (Australia Sydney)

103.122.167.55 (Australia Melbourne)

149.137.40.110 (Singapore)

64.211.144.160 (Brazil)

149.137.68.253 (Mexico)

69.174.57.160 (Canada Toronto)

65.39.152.160 (Canada Vancouver)

207.226.132.110 (Japan Tokyo)

149.137.24.110 (Japan Osaka)

Meeting ID: 986 2441 3365

 
