Easily Parallelizable and Distributable Class of Algorithms for Structured Sparsity, with Optimal Acceleration

Many statistical learning problems can be posed as the minimization of a sum of two convex functions, one of which is typically a composition of a non-smooth function with a linear map. Examples include regression under structured sparsity assumptions. Popular algorithms for solving such problems, e.g., ADMM, often involve non-trivial optimization subproblems or a smoothing approximation. We consider …
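For concreteness, the problem class described above is commonly written in the composite form below; the symbols $f$, $g$, and $D$ are generic placeholders used here for illustration, not notation taken from the paper.

\[
\min_{x \in \mathbb{R}^p} \; f(x) + g(Dx),
\]

where $f$ is a smooth convex loss (e.g., least squares), $g$ is a non-smooth convex function such as a structured sparsity-inducing norm, and $D$ is a linear map. The composition $g(Dx)$ is what makes proximal steps, and hence ADMM-style subproblems, non-trivial in general.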