Rong Ge

January 22, 2021

When:
March 23, 2021 @ 12:00 pm – 1:00 pm

Title: A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network

Abstract: While over-parameterization is widely believed to be crucial for the success of optimization for neural networks, most existing theories on over-parameterization do not fully explain the reason: they either work in the Neural Tangent Kernel regime where neurons don’t move much, or require an enormous number of neurons. In practice, when the data is generated using a teacher neural network, even mildly over-parameterized neural networks can achieve 0 loss and recover the directions of teacher neurons. In this paper we develop a local convergence theory for mildly over-parameterized two-layer neural networks. We show that as long as the loss is already lower than a threshold (polynomial in relevant parameters), all student neurons in an over-parameterized two-layer neural network will converge to one of the teacher neurons, and the loss will go to 0. Our result holds for any number of student neurons as long as it is at least as large as the number of teacher neurons, and our convergence rate is independent of the number of student neurons. A key component of our analysis is a new characterization of the local optimization landscape: we show the gradient satisfies a special case of the Lojasiewicz property, which is different from the local strong convexity or PL conditions used in previous work.
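For context, the PL and Lojasiewicz conditions mentioned above are gradient lower bounds on the loss; the sketch below shows only their generic textbook forms for comparison (the constants and the exponent beta are illustrative, and the condition used in the paper is a local, special-case variant rather than either of these).

\[
  \text{PL:}\quad \|\nabla L(\theta)\|^{2} \;\ge\; 2\mu\,\bigl(L(\theta)-L^{*}\bigr),
  \qquad
  \text{Lojasiewicz:}\quad \|\nabla L(\theta)\| \;\ge\; c\,\bigl(L(\theta)-L^{*}\bigr)^{\beta},\quad \beta\in[\tfrac12,1),
\]

where $L^{*}$ denotes the minimal loss value; the PL condition is the $\beta=\tfrac12$ case.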
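To make the teacher-student setting concrete, here is a minimal, hypothetical sketch (not the paper's construction or experiments): data are generated by a small two-layer ReLU teacher, and a student with a few extra neurons is trained by gradient descent on the squared loss. The dimensions, learning rate, and the cosine-alignment check for direction recovery are all illustrative assumptions.

import torch

torch.manual_seed(0)
d, k, m, n = 10, 4, 6, 2000      # input dim, teacher neurons, student neurons, samples

# Teacher: f*(x) = sum_j a*_j ReLU(<w*_j, x>) with unit-norm directions w*_j.
W_star = torch.nn.functional.normalize(torch.randn(k, d), dim=1)
a_star = torch.ones(k)

X = torch.randn(n, d)
y = torch.relu(X @ W_star.t()) @ a_star          # noiseless teacher labels

# Mildly over-parameterized student: same architecture with m >= k neurons.
W = (0.1 * torch.randn(m, d)).requires_grad_()
a = torch.ones(m, requires_grad=True)

opt = torch.optim.SGD([W, a], lr=0.05)
for step in range(5000):
    opt.zero_grad()
    loss = ((torch.relu(X @ W.t()) @ a - y) ** 2).mean()
    loss.backward()
    opt.step()

# Direction recovery: each student neuron should align with some teacher neuron.
cos = torch.nn.functional.normalize(W.detach(), dim=1) @ W_star.t()
print(f"final loss: {loss.item():.2e}")
print("best cosine alignment per student neuron:", cos.max(dim=1).values)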

Bio: Rong Ge is an assistant professor at Duke University. He received his Ph.D. from Princeton University, advised by Sanjeev Arora. Before joining Duke, Rong Ge was a post-doc at Microsoft Research New England. Rong Ge's research focuses on proving theoretical guarantees for modern machine learning algorithms, and on understanding non-convex optimization, in particular for neural networks.
