Rahul Parhi, Regularizing Neural Networks via Radon-Domain Total Variation

October 31, 2022

When:
November 11, 2022 @ 11:00 am – 12:00 pm

Rahul Parhi, Postdoctoral Researcher
Biomedical Imaging Group at École Polytechnique Fédérale de Lausanne (EPFL)

Abstract: What kinds of functions do neural networks learn? Why can neural networks perform well in high-dimensional settings? What is the right way to regularize a neural network?

This talk will answer these questions and provide mathematical explanations of existing design and training strategies that have evolved largely through experiments. This includes new insights into the importance of weight decay, linear layers, and skip connections, as well as a deeper understanding of sparsity and the curse of dimensionality. Our main result is a representer theorem which states that neural networks are exact solutions to nonparametric learning problems posed over “mixed variation” function spaces. These results are inspired by classical results in spline theory: in the univariate case, the neural network solutions are exactly the locally adaptive splines of nonparametric statistics, and the function spaces are related to classical bounded variation spaces. In the multivariate case, these spaces are characterized by total variation in the Radon domain and include functions that are very regular in all but a small number of directions. Spatial inhomogeneity of this sort leads to a fundamental gap between the performance of neural networks and that of linear methods (which include kernel methods), explaining why neural networks can outperform classical methods for high-dimensional tasks. This theory suggests new neural network architectures that include linear layers, as well as new regularization schemes.
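As a rough, illustrative sketch of the regularization viewpoint described above (not code from the talk), the snippet below trains a two-layer ReLU network with a linear skip connection using ordinary weight decay on the ReLU part, then reports the path norm sum_k |v_k| * ||w_k||_2, which for this architecture is the quantity tied to total variation in the Radon domain. The data, width, and hyperparameters are placeholders chosen for the example, and the stated equivalence between weight decay and the path-norm penalty relies on the standard neuron-rescaling argument for two-layer ReLU networks.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder 1-D regression data (illustrative only).
x = torch.linspace(-1, 1, 200).unsqueeze(1)
y = torch.sign(x) * x.abs().sqrt() + 0.05 * torch.randn_like(x)

class ShallowReLU(nn.Module):
    """Two-layer ReLU network with a linear skip connection."""
    def __init__(self, in_dim=1, width=100):
        super().__init__()
        self.hidden = nn.Linear(in_dim, width)  # inner weights w_k, biases b_k
        self.output = nn.Linear(width, 1)       # outer weights v_k
        self.skip = nn.Linear(in_dim, 1)        # low-order linear term

    def forward(self, x):
        return self.output(torch.relu(self.hidden(x))) + self.skip(x)

model = ShallowReLU()

# Weight decay (squared ell-2 penalty) on the inner and outer weights only;
# biases and the linear skip term are left unregularized.  For two-layer
# ReLU networks this penalty is equivalent, after an optimal rescaling of
# each neuron, to penalizing the path norm sum_k |v_k| * ||w_k||_2.
optimizer = torch.optim.Adam(
    [
        {"params": [model.hidden.weight, model.output.weight], "weight_decay": 1e-3},
        {"params": [model.hidden.bias, model.output.bias] + list(model.skip.parameters()),
         "weight_decay": 0.0},
    ],
    lr=1e-2,
)
loss_fn = nn.MSELoss()

for step in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    v = model.output.weight.squeeze(0)   # outer weights, shape (width,)
    w = model.hidden.weight              # inner weights, shape (width, in_dim)
    path_norm = (v.abs() * w.norm(dim=1)).sum()
    print(f"final loss {loss.item():.4f}, path norm {path_norm.item():.3f}")

In this toy setting, increasing the weight-decay coefficient tends to drive many of the outer weights v_k to zero, producing the kind of sparse, spline-like fits mentioned in the abstract.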

Bio: Rahul Parhi is currently a postdoctoral researcher with the Biomedical Imaging Group at École Polytechnique Fédérale de Lausanne (EPFL). He completed his PhD in electrical engineering at the University of Wisconsin-Madison in 2022, where he was supported by an NSF Graduate Research Fellowship. His research interests include applications of functional and harmonic analysis to problems in signal processing and data science, in particular, the mathematical aspects of neural networks.

Clark Hall 110 & over Zoom

https://wse.zoom.us/j/97286109836

Meeting ID: 972 8610 9836
One tap mobile
+13017158592,,97286109836# US (Washington DC)
+13092053325,,97286109836# US

Dial by your location
+1 301 715 8592 US (Washington DC)
+1 309 205 3325 US
+1 312 626 6799 US (Chicago)
+1 646 558 8656 US (New York)
+1 646 931 3860 US
+1 507 473 4847 US
+1 564 217 2000 US
+1 669 444 9171 US
+1 669 900 6833 US (San Jose)
+1 689 278 1000 US
+1 719 359 4580 US
+1 253 215 8782 US (Tacoma)
+1 346 248 7799 US (Houston)
+1 360 209 5623 US
+1 386 347 5053 US
Meeting ID: 972 8610 9836
Find your local number: https://wse.zoom.us/u/abSzmvNVa4

Join by SIP
97286109836@zoomcrc.com

Join by H.323
162.255.37.11 (US West)
162.255.36.11 (US East)
115.114.131.7 (India Mumbai)
115.114.115.7 (India Hyderabad)
213.19.144.110 (Amsterdam Netherlands)
213.244.140.110 (Germany)
103.122.166.55 (Australia Sydney)
103.122.167.55 (Australia Melbourne)
149.137.40.110 (Singapore)
64.211.144.160 (Brazil)
149.137.68.253 (Mexico)
69.174.57.160 (Canada Toronto)
65.39.152.160 (Canada Vancouver)
207.226.132.110 (Japan Tokyo)
149.137.24.110 (Japan Osaka)
Meeting ID: 972 8610 9836