Lalitha Sankar

January 22, 2021

When:
March 2, 2021 @ 12:00 pm – 1:00 pm

Title: Alpha-loss: A Tunable Class of Loss Functions for Robust Learning

Abstract: In this talk, we introduce alpha-loss, a parameterized class of loss functions derived by operationally motivating information-theoretic measures. Tuning the parameter alpha from 0 to infinity yields a class of loss functions that continuously interpolates between exponential loss (alpha=1/2), log-loss (alpha=1), and 0-1 loss (alpha=infinity). We discuss how different regimes of alpha enable the practitioner to tune the sensitivity of their algorithm to two emerging challenges in learning: robustness and fairness. We discuss classification properties of the class, its information-theoretic interpretations, and the optimization landscape of the average loss as viewed through the lens of strict local quasi-convexity under the logistic regression model. Finally, we comment on ongoing and future work on applications of alpha-loss, including deep neural networks, federated learning, and boosting.
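To make the interpolation concrete, here is a minimal sketch of the alpha-loss family in one form that appears in the alpha-loss literature, written as a function of the probability `p` a classifier assigns to the true label; the function name and the specific evaluation points are illustrative, not taken from the talk.

```python
import math

def alpha_loss(p, alpha):
    """Alpha-loss of the probability p assigned to the true label.

    One common form: l_alpha(p) = (alpha/(alpha-1)) * (1 - p**((alpha-1)/alpha))
    for alpha != 1, with the log-loss limit -log(p) recovered at alpha = 1.
    """
    if alpha == 1:
        return -math.log(p)  # log-loss (cross-entropy), the alpha -> 1 limit
    return (alpha / (alpha - 1)) * (1.0 - p ** ((alpha - 1) / alpha))

# The regimes named in the abstract:
p = 0.8
print(alpha_loss(p, 1.0))   # log-loss: -ln(p)
print(alpha_loss(p, 0.5))   # exponential loss: 1/p - 1
print(alpha_loss(p, 1e6))   # very large alpha approaches 1 - p, a smoothed 0-1 loss
```

Under a logistic model, where p is a sigmoid of the margin, the alpha = 1/2 case 1/p - 1 reduces to the familiar exponential loss of the margin, matching the interpolation described above.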

Bio: Lalitha Sankar is an Associate Professor in the School of Electrical, Computer, and Energy Engineering at Arizona State University. She received her doctorate from Rutgers University, her master's degree from the University of Maryland, and her bachelor's degree from the Indian Institute of Technology, Bombay. Her research lies at the intersection of information theory and learning theory, along with their applications to the electric grid. Her work has predominantly focused on identifying meaningful metrics for information privacy and algorithmic fairness; today's talk is a result of her broader privacy work. She received the NSF CAREER award in 2014 and currently leads an NSF- and Google-funded effort on using learning techniques to assess COVID-19 exposure risk in a secure and privacy-preserving manner.
