“Nonconvex Optimization for Sparse Deconvolution: Geometry, Algorithms, and Applications”
Abstract: Deconvolution of sparse point sources from their convolution with an unknown point spread function (PSF) finds many applications in neuroscience, microscopy imaging, physics, and computer vision. The problem is challenging to solve — it exhibits intrinsic shift symmetries, so that its natural formulation is nonconvex. There is very little theoretical analysis showing under what conditions nonconvex optimization methods are guaranteed to work, or may fail.
In this talk, we develop global optimization theory for sparse blind deconvolution by analyzing its nonconvex optimization landscape. First, we show how to use geometric intuitions to build efficient nonconvex algorithms that converge linearly to target solutions, even with random initializations. Moreover, we extend our geometric understanding to sparse deconvolution with multiple PSFs (a.k.a. convolutional dictionary learning), where each measurement is a superposition of convolutions with multiple unknown PSFs. Based on its similarity to overcomplete dictionary learning, we provide the first global algorithmic guarantees for convolutional dictionary learning. Finally, we show how to use these intuitions to design fast practical methods, demonstrated on several applications in neuroscience and microscopy imaging.
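The shift symmetry mentioned in the abstract can be seen in a toy instance of the measurement model. Below is a minimal NumPy sketch (all names and sizes are illustrative, not from the talk): an observation y is a circular convolution of a short unknown PSF with a sparse spike train, and shifting the PSF while counter-shifting the spike train yields the exact same observation — so the pair (a, x) is only identifiable up to a shift, which is one source of the nonconvex landscape.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64  # signal length (illustrative)

# Unknown PSF, zero-padded to length n, and normalized
a = np.zeros(n)
a[:8] = rng.standard_normal(8)
a /= np.linalg.norm(a)

# Sparse spike train: a few random nonzero entries
x = np.zeros(n)
x[rng.choice(n, size=5, replace=False)] = rng.standard_normal(5)

def cconv(u, v):
    """Circular convolution via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(v)))

y = cconv(a, x)

# Shift symmetry: shifting the PSF by s and the sparse map by -s
# leaves the observation y unchanged.
s = 3
y_shifted = cconv(np.roll(a, s), np.roll(x, -s))
print(np.allclose(y, y_shifted))  # True
```

This ambiguity means any sensible loss over (a, x) has many equivalent global minimizers (one per shift, up to scale), which is why the landscape analysis in the talk matters.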
Bio: Qing Qu is a Moore-Sloan data science fellow at the Center for Data Science, New York University. He received his Ph.D. in Electrical Engineering from Columbia University in Oct. 2018. He received his B.Eng. from Tsinghua University in Jul. 2011, and an M.Sc. from Johns Hopkins University in Dec. 2012, both in Electrical and Computer Engineering. He interned at the U.S. Army Research Laboratory in 2012 and at Microsoft Research in 2016. His research interests lie at the intersection of the foundations of data science, machine learning, numerical optimization, and signal/image processing, with a focus on developing efficient nonconvex methods and global optimality guarantees for solving representation learning and nonlinear inverse problems in engineering and imaging sciences. He is the recipient of the Best Student Paper Award at SPARS'15 (with Ju Sun and John Wright) and a Microsoft PhD Fellowship in machine learning.