Data Science Seminar: Sparse Stochastic Optimization: From Dual Averaging to Variance Reduction
8 April 2021 at 11:00 (Online)
By: Lin Xiao, Research Scientist, Facebook AI Research (FAIR), Seattle, Washington, USA
In many optimization problems arising from signal processing and machine learning, sparsity is a desired property, as it provides simplicity and robustness in the solution. An effective way to obtain sparse solutions is to add a sparsity-inducing regularization to the objective and solve the regularized problem with a proximal-gradient method. However, direct extensions of this approach to the stochastic optimization setting become ineffective at producing sparse solutions, because of the diminishing step sizes required for stochastic-gradient-type methods to converge. In this talk, we explain how the regularized dual-averaging method overcomes this difficulty and enables large-scale sparse stochastic optimization. In addition, we present recent progress on stochastic gradient methods with variance reduction, which hold promise for sparse optimization in both convex and non-convex settings.
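The contrast the abstract draws can be illustrated on a toy l1-regularized least-squares problem. The sketch below is not the speaker's implementation: the problem sizes, step-size schedule, and the RDA scaling constant `gamma` are all assumptions made for illustration. Proximal stochastic gradient soft-thresholds at the level `eta * lam`, which vanishes with the diminishing step size `eta`, while regularized dual averaging (RDA) soft-thresholds the running average of gradients at the full level `lam`, so it can set coordinates exactly to zero regardless of the step size.

```python
import numpy as np

def soft_threshold(v, tau):
    """Componentwise shrinkage: the proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Synthetic sparse least-squares problem (all sizes and constants illustrative).
rng = np.random.default_rng(0)
d, n, lam = 20, 2000, 0.01
x_true = np.zeros(d)
x_true[:3] = 1.0                                 # 3-sparse ground truth
A = rng.standard_normal((n, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)    # unit-norm rows
b = A @ x_true + 0.01 * rng.standard_normal(n)

x_pg = np.zeros(d)    # proximal stochastic gradient iterate
x_rda = np.zeros(d)   # regularized dual averaging iterate
gbar = np.zeros(d)    # running average of stochastic gradients (RDA)
gamma = 10.0          # RDA scaling parameter (hypothetical tuning)

for t in range(1, n + 1):
    i = rng.integers(n)
    # Proximal SGD: the threshold eta*lam shrinks along with the
    # diminishing step size, so iterates are rarely exactly sparse.
    eta = 0.5 / np.sqrt(t)
    g = A[i] * (A[i] @ x_pg - b[i])
    x_pg = soft_threshold(x_pg - eta * g, eta * lam)
    # RDA: threshold the *averaged* gradient at the full level lam,
    # independent of any step size, producing exact zeros directly.
    g = A[i] * (A[i] @ x_rda - b[i])
    gbar += (g - gbar) / t
    x_rda = -(np.sqrt(t) / gamma) * soft_threshold(gbar, lam)

print("prox-SGD nonzeros:", np.count_nonzero(x_pg))
print("RDA nonzeros:     ", np.count_nonzero(x_rda))
```

On runs like this one, the RDA iterate typically retains far fewer nonzero coordinates than the proximal-SGD iterate, which is the behavior the talk's first part explains.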
In cooperation with the Jean Golding Institute, University of Bristol
More information: the Bristol Data Science Seminar Series