MINDS 2021 Winter Symposium - Wei Hu

February 2, 2021

When: February 9, 2021 @ 10:45 am – 11:45 am EST

Title – Opening the Black Box: Towards Theoretical Understanding of Deep Learning

Abstract – Despite the phenomenal empirical successes of deep learning in many application
domains, its underlying mathematical mechanisms remain poorly understood. Mysteriously,
deep neural networks in practice can often fit training data perfectly and generalize
remarkably well to unseen test data, despite highly non-convex optimization landscapes and
significant over-parameterization. Moreover, deep neural networks show an extraordinary ability
to perform representation learning: feature representations extracted from a neural network can
be useful for other related tasks.
In this talk, I will present our recent progress on building the theoretical foundations of deep
learning. First, I will show that gradient descent on deep linear neural networks induces an
implicit regularization effect towards low rank, which explains the surprising generalization
behavior of deep linear networks for the low-rank matrix completion problem. Next, turning to
nonlinear deep neural networks, I will talk about a line of studies on wide neural networks,
where, by drawing a connection to neural tangent kernels, we can answer various questions
such as how the training loss is minimized, why the trained network can generalize, and why
certain components of the network architecture are useful; we also use these theoretical
insights to design a simple and effective new method for training on noisily labeled datasets.
Finally, I will analyze the statistical aspects of representation learning and identify
conditions that enable efficient use of training data, bypassing a known hurdle in the i.i.d. tasks setting.
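
To make the first result more concrete, here is a minimal illustrative sketch of the setting it concerns (hypothetical code, not taken from the talk; all names and hyperparameters are made up for illustration): gradient descent trains a depth-3 linear network W = W3 W2 W1 to fit only the observed entries of a low-rank matrix. With a small initialization, the learned end-to-end matrix tends to be approximately low rank even though nothing in the parameterization enforces a rank constraint.

# Illustrative sketch (hypothetical setup, not code from the talk): gradient
# descent on a depth-3 linear network W = W3 @ W2 @ W1, fit only to the
# observed entries of a low-rank matrix (matrix completion).
import numpy as np

rng = np.random.default_rng(0)
n, true_rank, depth = 20, 2, 3

# Rank-2 ground-truth matrix (scaled so its spectral norm is of order one)
# and a random mask revealing roughly half of its entries.
M_true = rng.standard_normal((n, true_rank)) @ rng.standard_normal((true_rank, n)) / n
mask = rng.random((n, n)) < 0.5

# Small random initialization for each linear layer.
Ws = [0.02 * rng.standard_normal((n, n)) for _ in range(depth)]
lr, steps = 0.1, 3000

def product(Ws):
    # End-to-end matrix implemented by the linear network: W = W3 @ W2 @ W1.
    W = Ws[0]
    for Wi in Ws[1:]:
        W = Wi @ W
    return W

for _ in range(steps):
    # Loss = 0.5 * || mask * (W - M_true) ||_F^2, so dLoss/dW is the masked residual.
    residual = mask * (product(Ws) - M_true)
    grads = []
    for i in range(depth):
        left = np.eye(n)
        for Wi in Ws[i + 1:]:
            left = Wi @ left        # layers applied after layer i
        right = np.eye(n)
        for Wi in Ws[:i]:
            right = Wi @ right      # layers applied before layer i
        grads.append(left.T @ residual @ right.T)   # chain rule for W = left @ W_i @ right
    for Wi, g in zip(Ws, grads):
        Wi -= lr * g

# Nothing constrains the rank explicitly, yet the learned product is typically
# close to rank 2: its singular values drop sharply after the first two.
print(np.round(np.linalg.svd(product(Ws), compute_uv=False)[:5], 4))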
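
For context, the neural tangent kernel mentioned above has a standard definition (introduced by Jacot et al. in 2018; the notation here is generic, not specific to this talk): for a network f(x; θ) with parameters θ, the kernel value at two inputs is the inner product of the parameter gradients of the network output,

\[
  \Theta(x, x') \;=\; \big\langle \nabla_{\theta} f(x; \theta), \; \nabla_{\theta} f(x'; \theta) \big\rangle .
\]

In the infinite-width limit, with appropriate scaling, this kernel stays essentially constant during training, so gradient descent on a very wide network behaves like kernel regression with \(\Theta\); this is the connection that makes the questions listed above amenable to analysis.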

 

The recording is available here. 
