Past MINDS Seminars

Spring 2020

April 21st: Stefanie Jegelka – “Representation and Learning in Graph Neural Networks”

April 28th: Mads Nielsen & Akshay Pai – “Risk assessment of severe Covid-19 infection”

May 26th: Ravi Shankar & Ambar Pal – “Non-Parallel Emotion Conversion in Speech via Variational Cycle-GAN” & “A Regularization view of Dropout in Neural Networks”

July 16th: Eli Sherman – “Identification Theory in Segregated Graph Causal Models”

Fall 2020

September 1st: Enzo Ferrante – “Towards anatomically plausible medical image segmentation, registration and reconstruction”

September 8th: Anima Anandkumar – “Bridging the Gap Between Artificial and Human Intelligence: Role of Feedback”

September 15th: Giles Hooker – “Ensembles of Trees and CLTs: Inference and Machine Learning”

September 22nd: Jelena Diakonikolas – “On Min-Max Optimization and Halpern Iteration”

September 29th: Tom Goldstein – Evasion and poisoning attacks on neural networks: theoretical and practical perspectives

October 6th: Daniel Hsu – Contrastive learning, multi-view redundancy, and linear models

October 13th: Kate Saenko – Learning from Small and Biased Datasets

October 27th: Rama Chellappa – Generations of Generative Models for Images and Videos with Applications

November 3rd: Adam Charles – Data Science in Neuroscience: From Sensors to Theory

November 10th: SueYeon Chung – Emergence of Separable Geometry in Deep Networks and the Brain

November 17th: Kimia Ghobadi – Inverse Optimization

November 24th: Poorya Mianjy – Understanding the Algorithmic Regularization due to Dropout

December 1st: Eva Dyer – Representation learning and alignment in biological and artificial neural networks

December 15th: Ida Momennejad – Multi-scale Predictive Representations

Spring 2021

January 26: Surya Ganguli – Weaving together machine learning, theoretical physics, and neuroscience

February 2: Wiro Niessen – Biomedical Imaging and Genetic Data Analysis With AI: Towards Precision Medicine

February 16: Andrej Risteski – Representational aspects of depth and conditioning in normalizing flows

February 23: Mario Sznaier – Easy, hard or convex? The role of sparsity and structure in learning dynamical models

March 2: Lalitha Sankar – Alpha-loss: A Tunable Class of Loss Functions for Robust Learning

March 9: Daniela Witten – Selective inference for trees

March 16: Smita Krishnaswamy – Geometric and Topological Approaches to Representation Learning in Biomedical Data

March 23: Rong Ge – A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network

March 30: Juan Carlos Niebles – Event Understanding: a Cornerstone of Visual Intelligence

April 6: Maria De-Arteaga – Mind the gap: From predictions to ML-informed decisions

April 13: Kristen Grauman – Sights, sounds, and space: Audio-visual learning in 3D environments

April 20: Su-In Lee – Explainable Artificial Intelligence for Biology and Health

April 27: Sharon Yixuan Li – Towards Reliable Open-world Machine Learning