Zhishen Huang, “Nonconvex optimization, the integer-constrained optimization problem, & applying sketching technique to compute autocorrelation with data snapshot”

May 18, 2022

When:
May 16, 2022 @ 11:00 am – 12:00 pm

Zhishen Huang

Michigan State University

Title: “Nonconvex optimization, the integer-constrained optimization problem, & applying sketching technique to compute autocorrelation with data snapshot”

Abstract: This talk covers three projects from Zhishen's research: one in nonconvex optimization, one on applying integer-constrained optimization in the setting of medical imaging, and one on applying the sketching technique to compute autocorrelation from data snapshots.

On the application of sketching: sketching is a data compression technique that reduces the dimension of data points while preserving geometric properties of a dataset such as pairwise distances, the spectral norm, and the singular value decomposition. This work applies sketching to the computation of autocorrelation, a second-order statistic used in computational physics. We show that we can sketch the streaming data points generated during simulation and, using only these snapshots of the stream, compute the autocorrelation of the whole dataset to arbitrary accuracy. We validate this claim on a molecular dynamics simulation dataset.
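To make the idea concrete, here is a minimal NumPy sketch (not the authors' implementation): a Gaussian Johnson–Lindenstrauss sketch compresses each streaming snapshot, and the lagged inner products that define autocorrelation are computed from the compressed snapshots alone. The dimensions, the synthetic `stream` generator, and the lag are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, T = 10_000, 256, 500   # ambient dimension, sketch dimension, number of snapshots

# Hypothetical streaming source: each snapshot is a length-d state vector.
stream = (rng.standard_normal(d) for _ in range(T))

# Gaussian sketch matrix (Johnson-Lindenstrauss); inner products are
# preserved up to O(1/sqrt(k)) error with high probability.
S = rng.standard_normal((k, d)) / np.sqrt(k)

# Only the k-dimensional sketches are stored, never the full stream.
sketched = np.stack([S @ x for x in stream])

def autocorrelation(Y, lag):
    """Average inner product between snapshots `lag` steps apart."""
    n = Y.shape[0]
    return np.mean(np.einsum('ij,ij->i', Y[:n - lag], Y[lag:]))

print(autocorrelation(sketched, lag=5))  # approximates the full-data statistic
```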

On the self-supervised approach to the sampling problem: image reconstruction problems (for example, limited-view MRI in medical imaging) are often formulated as penalised least-squares problems with regularization/prior terms that enforce sparsity of the ground-truth signal in some domain. In practice it is desirable to reduce the time spent sampling while retaining high image quality. In MRI, measurements are taken in the Fourier domain, so the limited-view reconstruction problem is naturally formulated as a sparse sampling problem. We want a mechanism that generates adaptive sampling patterns for different inputs/individuals. The classic greedy algorithm is infeasible because of its low efficiency. Instead, we use a neural network, as a function approximator, to predict sampling patterns. To train such a predictor, we propose a self-supervision pipeline that generates internal training labels at a targeted sampling ratio.
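As a rough illustration of the predictor component only (a sketch under assumed shapes, not the proposed architecture), the snippet below uses a small fully connected network to score Fourier rows and keeps the top-scoring rows at a fixed budget, so the binary mask meets the targeted sampling ratio exactly; the self-supervised label generation is omitted.

```python
import torch
import torch.nn as nn

# Hypothetical setup: n_rows Fourier rows; the predictor scores each row's
# importance from some cheap per-subject input feature.
n_rows, target_ratio = 128, 0.25
budget = int(target_ratio * n_rows)          # number of rows to sample

predictor = nn.Sequential(                   # stand-in sampling-pattern predictor
    nn.Linear(n_rows, 256), nn.ReLU(),
    nn.Linear(256, n_rows),                  # one importance score per Fourier row
)

def predict_mask(feature):
    scores = predictor(feature)
    # Keep the `budget` highest-scoring rows to hit the target sampling ratio.
    idx = scores.topk(budget, dim=-1).indices
    mask = torch.zeros_like(scores)
    mask.scatter_(-1, idx, 1.0)
    return mask

feature = torch.randn(1, n_rows)             # placeholder input feature
mask = predict_mask(feature)                 # binary mask with exactly 25% of rows on
```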

On the nonconvex optimization problem: the existence of saddle points has been a main challenge in optimizing nonconvex objectives. While saddle points rarely constitute useful solutions in applications, local minimizers often satisfy practical needs. First-order algorithms have demonstrated the potential to escape saddle points of smooth objective functions when properly tuned perturbations are injected during the iterations. This work shows that when random-walk terms are included in variance-reduced gradient descent algorithms, a scheme referred to as stochastic gradient Langevin dynamics, the resulting first-order algorithm has a probabilistic guarantee of escaping saddle points and of traversing potential global optimizers.
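The escape mechanism can be seen on a toy objective. The sketch below (illustrative step size and noise scale, without the variance-reduction component) runs a Langevin-type update on f(x, y) = (x^2 - 1)^2 + y^2, which has a strict saddle at the origin where plain gradient descent stalls; the injected Gaussian noise pushes the iterate off the saddle toward a minimizer at (±1, 0).

```python
import numpy as np

rng = np.random.default_rng(1)

def grad(z):
    # f(x, y) = (x**2 - 1)**2 + y**2: strict saddle at the origin,
    # global minimizers at (+1, 0) and (-1, 0).
    x, y = z
    return np.array([4 * x * (x**2 - 1), 2 * y])

def sgld(z0, eta=1e-2, beta=1e3, steps=5000):
    """Gradient step plus isotropic Langevin noise (illustrative constants)."""
    z = np.asarray(z0, dtype=float)
    for _ in range(steps):
        z = z - eta * grad(z) + np.sqrt(2 * eta / beta) * rng.standard_normal(2)
    return z

# Started exactly at the saddle, plain gradient descent never moves;
# the Langevin noise term lets the iterate drift to a minimizer.
print(sgld([0.0, 0.0]))
```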

The talk is expected to last 40-45 minutes. The speaker intends to state the formulation of each problem and present the main results at a high level. If time permits, Zhishen will comment on the main ideas behind the proofs and on experimental details.
