MINDS researchers present findings at NeurIPS 2023

December 14, 2023

The annual conference is a leading global venue for interdisciplinary research in machine learning, fostering the exchange of ideas and innovation.

Faculty and students from the Johns Hopkins Mathematical Institute for Data Science (MINDS) are presenting their recent findings at the 37th Conference on Neural Information Processing Systems (NeurIPS), held December 10 through 16 in New Orleans. NeurIPS is an annual conference on machine learning and neuroscience that brings researchers and practitioners together to share cutting-edge work in areas such as deep learning and computer vision.

JHU NeurIPS 2023 presenters include:

“Single-Call Stochastic Extragradient Methods for Structured Non-monotone Variational Inequalities” by Sayantan Choudhury, Eduard Gorbunov, and Nicolas Loizou

Stochastic past extragradient (SPEG) has attracted considerable interest in recent years and is one of the most efficient algorithms for solving large-scale min-max optimization and variational inequality problems (VIPs) arising in various machine learning tasks. However, the current analysis of SPEG relies on strong assumptions such as bounded variance. The team’s work relaxes the bounded variance assumption and analyzes SPEG for solving structured non-monotone variational inequality problems under an arbitrary sampling paradigm.
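
As a rough illustration of the single-call (past) extragradient template the paper builds on, here is a minimal NumPy sketch on a toy strongly monotone saddle problem. The problem, the finite-sum structure of the operator, and the step size are assumptions for illustration only, not the paper’s setting or its analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 20

# Toy saddle problem: min_x max_y (mu/2)||x||^2 + x^T A y - (mu/2)||y||^2,
# with an expectation/finite-sum structure over sampled components (illustrative only).
mu = 0.5
A = rng.normal(size=(d, d))
A_components = A + 0.5 * rng.normal(size=(n, d, d))

def stochastic_operator(x, y, i):
    """Stochastic operator of the saddle problem, evaluated with one sampled component."""
    Ai = A_components[i]
    return mu * x + Ai @ y, mu * y - Ai.T @ x

gamma = 0.05                                  # step size (illustrative choice)
x, y = rng.normal(size=d), rng.normal(size=d)
gx_prev, gy_prev = np.zeros(d), np.zeros(d)   # "past" operator value, reused for extrapolation

for t in range(3000):
    i = rng.integers(n)
    # extrapolation reuses the previous operator value -> no extra operator call
    x_half, y_half = x - gamma * gx_prev, y - gamma * gy_prev
    # the single stochastic-operator evaluation of this iteration
    gx, gy = stochastic_operator(x_half, y_half, i)
    x, y = x - gamma * gx, y - gamma * gy
    gx_prev, gy_prev = gx, gy

print("distance to the saddle point (0, 0):", np.linalg.norm(np.concatenate([x, y])))
```

With a constant step size the iterates settle into a neighborhood of the solution; the paper’s contribution concerns the analysis of this kind of scheme under weaker assumptions and arbitrary sampling, not the update rule itself.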

“Approximately Equivariant Graph Networks” by Teresa Huang, Ron Levie, and Soledad Villar

Graph Neural Networks (GNNs) typically exploit permutation symmetry. Yet for learning tasks on a fixed graph, the team shows that enforcing active or approximate symmetries improves generalization. They theoretically quantify the bias-variance tradeoff involved in choosing different symmetry groups, and empirically demonstrate the generalization gain in numerous experiments spanning image inpainting, traffic flow prediction, and human pose estimation.
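
One way to read the choice of symmetry group is as a projection of a learnable map onto the maps that commute with a chosen group of node permutations. The sketch below is a hypothetical illustration of that idea (not the paper’s construction): it symmetrizes a linear layer over a small cyclic group on a fixed node set and checks equivariance.

```python
import numpy as np

n = 4
rng = np.random.default_rng(1)

def perm_matrix(p):
    P = np.zeros((len(p), len(p)))
    P[np.arange(len(p)), p] = 1.0
    return P

# Choose a symmetry group on the node set: here the cyclic shifts of a 4-node ring.
# Picking a smaller group than the full permutation group is one way to encode a
# weaker (partial) symmetry assumption.
group = [perm_matrix(np.roll(np.arange(n), k)) for k in range(n)]

W = rng.normal(size=(n, n))   # an unconstrained linear layer acting on node signals

# Reynolds-type averaging: the projected layer commutes with every element of the group.
W_equiv = sum(P.T @ W @ P for P in group) / len(group)

for P in group:
    assert np.allclose(P @ W_equiv, W_equiv @ P)
print("symmetrized layer is equivariant to the chosen group")
```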

“Fine-grained Expressivity of GNNs” by Jan Böker, Ron Levie, Teresa Huang, Soledad Villar, and Christopher Morris

The expressivity of GNNs has been studied extensively through their ability to solve the graph isomorphism problem and through comparisons with the Weisfeiler-Leman (WL) test. However, the graph isomorphism objective does not give insight into the degree of similarity between two graphs. The team resolves this limitation by considering continuous extensions of GNNs and WL based on graphons. They quantify the graph distance induced by GNNs, leading to a more fine-grained understanding of their expressivity, and validate the theoretical findings by showing that randomly initialized GNNs, without training, exhibit competitive performance compared to their trained counterparts with considerably faster runtime.
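
To make the idea of comparing graphs via GNN outputs concrete, here is a toy sketch (an assumption-laden illustration, not the authors’ construction): two graphs are embedded with the same randomly initialized message-passing network, and the distance between their graph-level vectors is reported.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_gnn_embedding(adj, weights):
    """Mean-aggregation message passing with fixed random weights,
    followed by mean pooling into a permutation-invariant graph-level vector."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    h = np.ones((adj.shape[0], weights[0].shape[0]))   # constant initial node features
    for W in weights:
        h = np.tanh(((adj @ h) / deg) @ W)             # aggregate neighbors, then transform
    return h.mean(axis=0)

d = 8
weights = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3)]   # random, untrained

# Two small graphs to compare: a 6-cycle and a 6-node path.
cycle = np.roll(np.eye(6), 1, axis=1)
cycle = cycle + cycle.T
path = np.diag(np.ones(5), 1)
path = path + path.T

emb_cycle = random_gnn_embedding(cycle, weights)
emb_path = random_gnn_embedding(path, weights)
print("embedding distance between the two graphs:", np.linalg.norm(emb_cycle - emb_path))
```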

“Estimating and Controlling for Equalized Odds via Sensitive Attribute Predictors” by Beepul Bharti, Paul Yi, and Jeremias Sulam

Fairness is crucial in high-stakes decision settings. Addressing fairness violations when sensitive attributes are unavailable, the team’s work establishes tight upper bounds on the equalized odds violation and introduces a post-processing correction method that provides provable control over worst-case equalized odds violations, with results applicable to various datasets.
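
For readers unfamiliar with the quantity being bounded, the sketch below estimates the equalized odds violation of a binary classifier as the larger of the true-positive-rate and false-positive-rate gaps between two groups. The data, classifier, and variable names are hypothetical; the paper’s setting additionally assumes the sensitive attribute itself is unobserved and only a predictor of it is available.

```python
import numpy as np

def equalized_odds_violation(y_true, y_pred, group):
    """Max over {TPR gap, FPR gap} between group == 0 and group == 1."""
    gaps = []
    for label in (1, 0):                       # label 1 -> TPR gap, label 0 -> FPR gap
        mask0 = (group == 0) & (y_true == label)
        mask1 = (group == 1) & (y_true == label)
        gaps.append(abs(y_pred[mask0].mean() - y_pred[mask1].mean()))
    return max(gaps)

rng = np.random.default_rng(3)
n = 10_000
group = rng.integers(0, 2, size=n)             # sensitive attribute (observed in this toy example)
y_true = rng.integers(0, 2, size=n)
# a synthetic classifier that is slightly less accurate on group 1
flip = rng.random(n) < np.where(group == 1, 0.25, 0.10)
y_pred = np.where(flip, 1 - y_true, y_true)

print("estimated equalized odds violation:", equalized_odds_violation(y_true, y_pred, group))
```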

“Adversarial Examples Might be Avoidable: The Role of Data Concentration in Adversarial Robustness” by Ambar Pal, Jeremias Sulam, and René Vidal

Challenging the notion that adversarial examples are unavoidable, the team’s work explores the role of structure in the data for adversarial robustness. They theoretically demonstrate that properties of the data distribution influence the existence of robust classifiers. By exploiting “concentration” structure in the data, their approach provides improved robustness guarantees, advancing the frontier of classification with mathematical guarantees of robustness.
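
As a toy illustration of why concentrated, well-separated data admits provably robust classification (a drastic simplification, not the paper’s construction or guarantee), the sketch below classifies a point by its nearest cluster center and certifies an l2 radius equal to half the gap between the nearest and second-nearest centers.

```python
import numpy as np

rng = np.random.default_rng(4)

# Data concentrated near two well-separated centers.
centers = np.array([[0.0, 0.0], [6.0, 0.0]])

def certified_nearest_center(x, centers):
    """Predict the nearest center; any perturbation with l2 norm below the
    returned radius cannot change the prediction."""
    dists = np.linalg.norm(centers - x, axis=1)
    order = np.argsort(dists)
    radius = (dists[order[1]] - dists[order[0]]) / 2.0
    return order[0], radius

x = centers[0] + 0.3 * rng.normal(size=2)      # a sample concentrated near center 0
label, radius = certified_nearest_center(x, centers)
print(f"prediction: {label}, certified l2 radius: {radius:.2f}")
```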

“Certified Robustness via Dynamic Margin Maximization and Improved Lipschitz Regularization” by Mahyar Fazlyab, Taha Entesari, Aniket Roy, and Rama Chellappa

This paper introduces a novel method to enhance the robustness of deep classifiers against adversarial perturbations. Unlike existing approaches, it uses a differentiable regularizer that is a lower bound on the distance between data points and the classification boundary. The method requires knowledge of the model’s Lipschitz constant, which is computed with a scalable and efficient technique; this enables more direct manipulation of the decision boundary while preventing excessive regularization. Experimental results on the MNIST, CIFAR-10, and Tiny-ImageNet datasets demonstrate competitive improvements over state-of-the-art methods.
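
To connect margins and Lipschitz constants to certificates, here is a generic sketch of the standard margin/Lipschitz bound (not the paper’s algorithm): for a classifier whose logit map is L-Lipschitz in the l2 norm, the prediction cannot change under perturbations smaller than (top logit − runner-up logit) / (√2 · L). The logits and the Lipschitz bound below are illustrative numbers.

```python
import numpy as np

def certified_radius(logits, lipschitz_const):
    """Standard margin/Lipschitz certificate: if the perturbation's l2 norm is below
    (top logit - runner-up logit) / (sqrt(2) * L), the predicted class cannot change."""
    top2 = np.sort(logits)[-2:]               # [runner-up, top]
    margin = top2[1] - top2[0]
    return margin / (np.sqrt(2.0) * lipschitz_const)

# Illustrative numbers: logits from some classifier and an upper bound on its Lipschitz constant.
logits = np.array([1.3, -0.2, 4.1, 0.7])
L = 2.5
print("certified l2 radius:", certified_radius(logits, L))
```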
