Michael Mahoney: Continuous Network Models for Sequential Predictions

January 10, 2022

When: January 20, 2022 @ 1:00 pm – 1:45 pm EST

Abstract: Data-driven machine learning methods such as those based on deep learning are playing a growing role in many areas of science and engineering for modeling time series, including fluid flows and climate data. However, deep neural networks are known to be sensitive to adversarial environments, and thus out-of-the-box models and methods are often not suitable for mission-critical applications. Hence, robustness and trustworthiness are increasingly important considerations when engineering new neural network architectures and models. In this talk, I will view neural networks for time series prediction through the lens of dynamical systems. First, I will discuss deep dynamic autoencoders and argue that integrating physics-informed energy terms into the learning process can help improve generalization performance as well as robustness with respect to input perturbations. Second, I will discuss novel continuous-time recurrent neural networks that are more robust and accurate than traditional recurrent units. I will show that leveraging classical numerical methods, such as the higher-order explicit midpoint time integrator, improves the predictive accuracy of continuous-time recurrent units compared to the simpler one-step forward Euler scheme. Finally, I will discuss extensions such as multiscale ordinary differential equations for learning long-term sequential dependencies and a connection between recurrent neural networks and stochastic differential equations.
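
For context, the contrast between the one-step forward Euler scheme and the higher-order explicit midpoint integrator mentioned in the abstract can be illustrated with a minimal sketch of a toy continuous-time recurrent unit. The hidden-state vector field dh/dt = tanh(W h + U x + b), the weight names W, U, b, and the step size dt below are illustrative assumptions, not the models presented in the talk.

```python
import numpy as np

def f(h, x, W, U, b):
    """Illustrative hidden-state vector field: dh/dt = tanh(W h + U x + b)."""
    return np.tanh(W @ h + U @ x + b)

def euler_step(h, x, dt, W, U, b):
    """One-step forward Euler update: h_next = h + dt * f(h, x)."""
    return h + dt * f(h, x, W, U, b)

def midpoint_step(h, x, dt, W, U, b):
    """Explicit midpoint update: evaluate f at a half-step for higher-order accuracy."""
    h_half = h + 0.5 * dt * f(h, x, W, U, b)
    return h + dt * f(h_half, x, W, U, b)

# Toy usage: unroll both integrators over the same random input sequence.
rng = np.random.default_rng(0)
d_h, d_x, T, dt = 8, 4, 20, 0.1
W = rng.normal(scale=0.5, size=(d_h, d_h))
U = rng.normal(scale=0.5, size=(d_h, d_x))
b = np.zeros(d_h)
xs = rng.normal(size=(T, d_x))

h_euler = np.zeros(d_h)
h_mid = np.zeros(d_h)
for x in xs:
    h_euler = euler_step(h_euler, x, dt, W, U, b)
    h_mid = midpoint_step(h_mid, x, dt, W, U, b)

print("Euler hidden state:   ", np.round(h_euler, 3))
print("Midpoint hidden state:", np.round(h_mid, 3))
```

The midpoint rule costs one extra evaluation of the vector field per step but reduces the local truncation error from O(dt^2) to O(dt^3), which is the kind of trade-off the talk examines for continuous-time recurrent units.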

Bio: Michael W. Mahoney is at the University of California at Berkeley in the Department of Statistics and at the International Computer Science Institute (ICSI).  He is also an Amazon Scholar as well as head of the Machine Learning and Analytics Group at the Lawrence Berkeley National Laboratory.  He works on algorithmic and statistical aspects of modern large-scale data analysis.  Much of his recent research has focused on large-scale machine learning, including randomized matrix algorithms and randomized numerical linear algebra, scalable stochastic optimization, geometric network analysis tools for structure extraction in large informatics graphs, scalable implicit regularization methods, computational methods for neural network analysis, physics-informed machine learning, and applications in genetics, astronomy, medical imaging, social network analysis, and internet data analysis.  He received his PhD from Yale University with a dissertation in computational statistical mechanics, and he has worked and taught at Yale University in the mathematics department, at Yahoo Research, and at Stanford University in the mathematics department.  Among other things, he was on the national advisory committee of the Statistical and Applied Mathematical Sciences Institute (SAMSI), he was on the National Research Council's Committee on the Analysis of Massive Data, he co-organized the Simons Institute's fall 2013 and 2018 programs on the foundations of data science, he ran the Park City Mathematics Institute's 2016 PCMI Summer Session on The Mathematics of Data, he ran the biennial MMDS Workshops on Algorithms for Modern Massive Data Sets, and he was the Director of the NSF/TRIPODS-funded FODA (Foundations of Data Analysis) Institute at UC Berkeley.  More information is available at https://www.stat.berkeley.edu/~mmahoney/.

