SIAM MDS20 MS13 Advances in Subspace Learning and Clustering Mini-Symposium
Abstract: This talk concerns the fundamental problem of learning a complete (orthogonal) dictionary from samples of sparsely generated signals. Most existing methods solve for the dictionary (and the sparse representations) with heuristic algorithms, usually without theoretical guarantees on either optimality or complexity. The recent L1-minimization based methods do provide such guarantees, but the associated algorithms recover the dictionary one column at a time. We propose a new formulation that maximizes the L4-norm over the orthogonal group to learn the entire dictionary at once. We prove that, under a random Bernoulli-Gaussian data model and with nearly minimum sample complexity, the global optima of the L4-norm are very close to signed permutations of the ground truth. Inspired by this observation, we give a conceptually simple yet effective algorithm based on "matching, stretching, and projection" (MSP). The algorithm provably converges locally at a superlinear (cubic) rate, and the cost per iteration is merely an SVD. In addition to strong theoretical guarantees, experiments show that the new algorithm is significantly more efficient and effective than existing methods, including K-SVD and L1-based methods. Preliminary experimental results on real images clearly demonstrate the advantages of the learned dictionary over classic PCA bases.
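The iteration sketched in the abstract (maximize the L4-norm of A Y over orthogonal A, with one SVD per step) can be illustrated as follows. This is a hedged sketch under assumed conventions, not the authors' released code: the function name `msp`, the initialization, and the iteration count are illustrative choices, and the update A ← U Vᵀ from the SVD of the entrywise-cubed gradient term is one common way to realize the "matching, stretching, and projection" steps.

```python
import numpy as np

def msp(Y, n_iter=100, seed=0):
    """Sketch of L4-norm maximization over the orthogonal group.

    Y : (n, m) array of observed signals, assumed Y = D X with D orthogonal
        and X sparse (e.g. Bernoulli-Gaussian). Returns an orthogonal A whose
        product A @ D should approach a signed permutation.
    """
    n = Y.shape[0]
    rng = np.random.default_rng(seed)
    # Random orthogonal initialization via QR of a Gaussian matrix.
    A, _ = np.linalg.qr(rng.standard_normal((n, n)))
    for _ in range(n_iter):
        # "Matching and stretching": entrywise cube of A Y, correlated back
        # with the data (proportional to the gradient of ||A Y||_4^4).
        G = (A @ Y) ** 3 @ Y.T
        # "Projection": map G back onto the orthogonal group via its SVD
        # (polar factor U V^T), which costs one SVD per iteration.
        U, _, Vt = np.linalg.svd(G)
        A = U @ Vt
    return A
```

On synthetic Bernoulli-Gaussian data, A @ D typically concentrates near a signed permutation, consistent with the characterization of the global optima in the abstract.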
- Yi Ma, University of California, Berkeley, U.S., email@example.com
Join Zoom Meeting
Meeting ID: 982 2294 6324