AMS Department Seminar with Speaker: Francesco Sanna Passino

April 11, 2024
Baltimore, MD, USA

Room: Olin 305

Title: Low-rank models for dynamic multiplex graphs and vector autoregressive processes

Abstract: This talk discusses low-rank models for two different types of data structures: dynamic multiplex graphs and panels of multivariate time series. The first part of the talk will present a doubly unfolded adjacency spectral embedding (DUASE) method for networks evolving over time with different edge types, commonly known as dynamic multiplex networks. Statistical properties of DUASE will be discussed, and links with commonly used statistical models for clustering graphs will be presented. The second part of the talk will cover the case of a panel of multivariate time series with co-movement between the panel components, modelled via a vector autoregressive (VAR) process. A Network Informed Restricted Vector Auto-Regressive (NIRVAR) process is proposed, together with an estimation algorithm that produces a low-dimensional latent embedding of each component of the panel. Clustering in this latent space is then used to recover the non-zero entries of the VAR coefficient matrix. The proposed model outperforms alternative approaches in terms of prediction and inference, both in simulation studies and in real-data applications in finance and healthcare.
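The pipeline sketched in the abstract (embed each panel component into a low-dimensional latent space, cluster the embeddings, then restrict the VAR coefficient support to within-cluster entries) can be illustrated on toy data roughly as follows. This is not the speaker's implementation: every number, dimension, and modelling choice below (block sizes, embedding dimension, the simple 2-means step, the SVD of the lag-1 covariance) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Toy panel: 6 series whose true VAR(1) coefficient matrix is
# block-diagonal (two groups of 3), so co-movement is within-group only.
N, T = 6, 2000
A_true = np.zeros((N, N))
A_true[:3, :3] = 0.25
A_true[3:, 3:] = 0.15
X = np.zeros((T, N))
for t in range(1, T):
    X[t] = X[t - 1] @ A_true.T + rng.normal(scale=0.5, size=N)

# --- Step 1: low-rank embedding of the sample lag-1 cross-covariance.
C = X[1:].T @ X[:-1] / (T - 1)          # N x N lag-1 covariance estimate
U, s, Vt = np.linalg.svd(C)
d = 2                                    # assumed embedding dimension
emb = U[:, :d] * s[:d]                   # d-dim latent position per series

# --- Step 2: cluster the latent positions (tiny 2-means, Lloyd's steps).
centers = emb[[0, np.argmax(np.linalg.norm(emb - emb[0], axis=1))]]
for _ in range(20):
    labels = np.argmin(((emb[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([emb[labels == k].mean(0) for k in range(2)])

# --- Step 3: restrict the VAR support to within-cluster entries and
# re-fit each row by least squares on its own cluster's lagged series.
A_hat = np.zeros((N, N))
for i in range(N):
    idx = np.where(labels == labels[i])[0]
    coef, *_ = np.linalg.lstsq(X[:-1][:, idx], X[1:][:, i], rcond=None)
    A_hat[i, idx] = coef

print(np.round(A_hat, 2))               # cross-block entries are exactly zero
```

On this toy example the clustering recovers the two blocks, and the restricted least-squares fit zeroes out the cross-block coefficients by construction; the talk's NIRVAR methodology is considerably more general than this sketch.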

Join via Zoom link: https://wse.zoom.us/j/94601022340

Deep Math 2023 Conference on the Mathematical Theory of Deep Neural Networks

Nov 16 – 17, 2023
Baltimore, MD, USA

Recent advances in deep neural networks (DNNs), combined with open, easily accessible implementations, have made DNNs a powerful, versatile method used widely in both machine learning and neuroscience. These practical advances, however, have far outpaced a formal understanding of these networks and their training. The dearth of rigorous analysis for these techniques limits their usefulness in addressing scientific questions and, more broadly, hinders systematic design of the next generation of networks. Recently, long-overdue theoretical results have begun to emerge from researchers in a number of fields. The purpose of this conference is to give visibility to these results, and to those that will follow in their wake, to shed light on the properties of large, adaptive, distributed learning architectures, and to revolutionize our understanding of these systems.

More information at https://deepmath-conference.com

Eventbrite tickets: https://www.eventbrite.com/e/deepmath-2023-tickets-722039879717?aff=oddtdtcreator