Winter 2024

February 27th: Dmitriy Drusvyatskiy, “Optimization for Large-Scale Learning: Beyond Smoothness and Convexity”

February 20th: Pablo Barcelo, “Complex Query Answering on Graph Databases and Knowledge Graphs”

February 13th: Ran Chen, “Doubly High-Dimensional Contextual Bandits: An Interpretable Model for Joint Assortment and Pricing”

February 6th: Samuel Stanton, “Protein Design with Guided Discrete Diffusion”

January 30th: Sanjukta Krishnagopal, “AI for Science and Simplicial Complexes: Topology and Dynamics in Data-Driven Systems”

Fall 2023

September 12th: Stephanie Hicks, “Scalable Computational Methods and Software for Single-Cell and Spatial Data Science”

September 19th: Lev Reyzin, “Towards Building the Mathematical Foundations of Data Science”

October 3rd: Eliza O’Reilly, “The Stochastic Geometry of Randomized Decision Trees and Forests”

October 10th: Florian Schaefer, “Solvers, Models, Learners: Statistical Inspiration for Scientific Computing”

October 24th: Brian Ziebart, “Superhuman Imitation Learning”

October 31st: Sijia Geng, “Scalable Optimization and Analysis Methods in Energy and Electrified Transportation Systems”

November 7th: Greg Canal, “Enhancing Human-computer Interfacing in Artificial Intelligence Systems”

November 14th: Morgane Austern, “Novel results in uncertainty quantification: From cross-validation to tail bounds”

November 28th: Konstantin Mishchenko, “Parameter-Free Adaptive Optimization”

December 5th: Pratik Chaudhari

Spring 2023

January 24th: Holden Lee, “Score-based generative modeling: Convergence theory”

January 31st: Prajakta Bedekar, “Optimal Time-Dependent Classification for Diagnostic Testing”

February 7th: David Hogg, “Bringing a classical-physics perspective to machine learning (and everything)”

February 14th: Avanti Athreya, “Discovering underlying dynamics in time series of networks”

February 21st: Anastasios N. Angelopoulos, “Prediction-Powered Inference”

February 28th: Beatrice Bevilacqua, “Subgraphs to the rescue: how to view graphs as bags of subgraphs to enhance the capabilities of GNNs”

March 7th: Elad Hazan, “Meta Optimization and Online Control”

March 14th: Peter Olver, “Reassembly and analysis of broken objects”

Fall 2022

August 30th: Yoav Wald, “Towards Invariant Learning”

September 6th: Josué Tonelli-Cueto, “Condition numbers and probability for explaining algorithms in computational geometry”

September 13th: Harsh Parikh, “Interpretable Causal Inference for High-Stakes Decision Making”

September 20th: Sammy Khalife, “Neural networks with linear threshold activations: structure and algorithms”

September 27th: Pratik Chaudhari, “Does the Data Induce Capacity Control in Deep Learning?”

October 4th: Jia (Kevin) Liu, “Mitigating Data and System Heterogeneity and Taming Fat-Tailed Noise in Federated Learning”

October 11th: Siqi Zhang, “The Complexity of Nonconvex Minimax Optimization: Fundamental Limits and Generalization”

October 18th: Leonardo Cotta, “Causal Lifting and Link Prediction”

October 25th: Yongsoo Kim, “Anatomical cell type mapping in the whole mouse brain and common coordinate framework”

November 1st: Dorsa Sadigh, “Learning Robot Policies from Non-Traditional Sources of Human Data”

November 8th: Demián Wassermann, “Specificity in Cognitive Neuroimaging: Pushing the Envelope in Meta-Analyses by Harnessing Rich Probabilistic Logical Models”

November 11th: Rahul Parhi, “Regularizing Neural Networks via Radon-Domain Total Variation”

November 15th: Facundo Mémoli, “Gromov-like distances between spheres”

November 29th: Amit Singer, “Heterogeneity analysis in cryo-EM by covariance estimation and manifold learning”

December 6th: Irene Waldspurger, “Sketching semidefinite programs for super-resolution problems”

December 13th: Lachlan MacDonald, “Towards a formal theory of deep optimisation”

Spring 2022

February 1st: Eliza O’Reilly, “Stochastic and Convex Geometry for the Analysis of Complex Data”

February 8th: Luana Ruiz, “Machine Learning on Large-Scale Graphs”

February 15th: Mahdi Soltanolkotabi, “Towards Stronger Foundations for AI and its Applications to the Sciences”

February 22nd: Ben Blum-Smith, “Orbit recovery problems for compact groups”

March 1st: Qi Lei, “Theoretical Foundations of Pre-trained Models”

March 8th: Nicolas Loizou, “Stochastic Iterative Methods for Smooth Games: Practical Variants and Convergence Guarantees”

March 15th: Rebekka Burkholz, “Pruning deep neural networks for lottery tickets”

March 22nd: Break

March 29th: Samory Kpotufe, “Adaptivity in Domain Adaptation and Friends”

April 5th: Alex Cloninger, “Data Representation Learning from a Single Pass of the Data”

April 12th: Greg Ongie, “A function space view of infinite-width neural networks”

April 19th: Alexandria Volkening, “Modeling and topological data analysis of zebrafish-skin patterns”

April 26th: Nicolas Charon, “Diffeomorphic registration and metamorphosis in the space of varifolds”

May 3rd: Mauricio Delbracio, “Improving realism in natural image restoration”

May 10th: Nicolas Fraiman, “Clustering and classification based on disjoint features”

May 17th: Gal Mishne, “Looking deep into the spectrum of the Graph Laplacian”

Fall 2021

August 31st: Ben Grimmer, “Radial Duality: Scalable, Projection-Free Optimization Methods”

September 7th: Praneeth Netrapalli, “Pitfalls of Deep Learning”

September 14th: Sara A. Solla, “Population Dynamics in Neural Systems”

September 21st: Mário Figueiredo, “Three Recent Short Stories About Image Denoising”

September 28th: Betsy Ogburn, “Disentangling confounding and nonsense associations due to dependence”

October 5th: Alex Dimakis, “Generative models and Unsupervised methods for Inverse problems”

October 12th: Bruno Olshausen, “Perception as Inference”

October 19th: Claire Boyer, “Sampling rates for l1-synthesis”

October 26th: Laura Balzano, “Finding low-dimensional structure in messy data”

November 2nd: Pragya Sur, “Precise high-dimensional asymptotics for AdaBoost via max-margins & min-norm interpolants”

November 9th: Jean-Philippe Vert, “Deep learning for biological sequences”

November 16th: Anqi (Angie) Liu, “Towards Trustworthy AI: Distributionally Robust Learning under Data Shift”

November 23rd: Fall Break

November 30th: Soledad Villar, “Equivariant Machine Learning, Structured Like Classical Physics”

Spring 2021

January 26th: Surya Ganguli – Weaving together machine learning, theoretical physics, and neuroscience

February 2nd: Wiro Niessen – Biomedical Imaging and Genetic Data Analysis With AI: Towards Precision Medicine

February 16th: Andrej Risteski – Representational aspects of depth and conditioning in normalizing flows

February 23rd: Mario Sznaier – Easy, hard or convex?: the role of sparsity and structure in learning dynamical models

March 2nd: Lalitha Sankar – Alpha-loss: A Tunable Class of Loss Functions for Robust Learning

March 9th: Daniela Witten – Selective inference for trees

March 16th: Smita Krishnaswamy – Geometric and Topological Approaches to Representation Learning in Biomedical Data

March 23rd: Rong Ge – A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network

March 30th: Juan Carlos Niebles – Event Understanding: a Cornerstone of Visual Intelligence

April 6th: Maria De-Arteaga – Mind the gap: From predictions to ML-informed decisions

April 13th: Kristen Grauman – Sights, sounds, and space: Audio-visual learning in 3D environments

April 20th: Su-In Lee – Explainable Artificial Intelligence for Biology and Health

April 27th: Sharon Yixuan Li – Towards Reliable Open-world Machine Learning

Fall 2020

September 1st: Enzo Ferrante – “Towards anatomically plausible medical image segmentation, registration and reconstruction”

September 8th: Anima Anandkumar – “Bridging the Gap Between Artificial and Human Intelligence: Role of Feedback”

September 15th: Giles Hooker – “Ensembles of Trees and CLT’s: Inference and Machine Learning”

September 22nd: Jelena Diakonikolas – “On Min-Max Optimization and Halpern Iteration”

September 29th: Tom Goldstein – “Evasion and poisoning attacks on neural networks: theoretical and practical perspectives”

October 6th: Daniel Hsu – “Contrastive learning, multi-view redundancy, and linear models”

October 13th: Kate Saenko – “Learning from Small and Biased Datasets”

October 27th: Rama Chellappa – “Generations of Generative Models for Images and Videos with Applications”

November 3rd: Adam Charles – “Data Science in Neuroscience: From Sensors to Theory”

November 10th: SueYeon Chung – “Emergence of Separable Geometry in Deep Networks and the Brain”

November 17th: Kimia Ghobadi – “Inverse Optimization”

November 24th: Poorya Mianjy – “Understanding the Algorithmic Regularization due to Dropout”

December 1st: Eva Dyer – “Representation learning and alignment in biological and artificial neural networks”

December 15th: Ida Momennejad – “Multi-scale Predictive Representations”

Spring 2020

April 21st: Stefanie Jegelka – “Representation and Learning in Graph Neural Networks”

April 28th: Mads Nielsen & Akshay Pai – “Risk assessment of severe Covid-19 infection”

May 26th: Ravi Shankar & Ambar Pal – “Non-Parallel Emotion Conversion in Speech via Variational Cycle-GAN” & “A Regularization view of Dropout in Neural Networks”

July 16th: Eli Sherman – “Identification Theory in Segregated Graph Causal Models”