The Approximate Duality Gap Technique: A Unified Theory of First-Order Methods

Type: Article

Publication Date: 2019-01-01

Citations: 45

DOI: https://doi.org/10.1137/18m1172314

Abstract

We present a general technique for the analysis of first-order methods. The technique relies on the construction of a duality gap for an appropriate approximation of the objective function, where the function approximation improves as the algorithm converges. We show that in continuous time the enforcement of an invariant, which corresponds to the approximate duality gap decreasing at a certain rate, exactly recovers a wide range of first-order continuous-time methods. We characterize the discretization errors incurred by different discretization methods, and show how iteration-complexity-optimal methods for various classes of problems cancel out the discretization error. The techniques are illustrated on various classes of problems---including convex minimization for Lipschitz-continuous objectives, smooth convex minimization, composite minimization, smooth and strongly convex minimization, solving variational inequalities with monotone operators, and convex-concave saddle-point optimization---and naturally extend to other settings.
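
To make the abstract's central construction concrete, the sketch below states the approximate duality gap and the rate-generating invariant in LaTeX. The notation (U_t, L_t, G_t, alpha_t) is illustrative shorthand for an upper bound, a lower bound, their gap, and a growth function, and is not copied verbatim from the paper.

% A minimal sketch of the approximate-duality-gap argument, in assumed generic
% notation: U_t upper-bounds the objective value at the current point x_t, and
% L_t lower-bounds the optimal value f(x^*) using accumulated gradient (dual)
% information, so their difference bounds the true optimality gap.
\[
  G_t \;=\; U_t - L_t \;\ge\; f(x_t) - f(x^\ast).
\]
% The continuous-time dynamics are required to maintain the invariant that
% alpha_t * G_t is non-increasing for a chosen growth function alpha_t:
\[
  \frac{\mathrm{d}}{\mathrm{d}t}\bigl(\alpha_t\, G_t\bigr) \;\le\; 0
  \qquad\Longrightarrow\qquad
  f(x_t) - f(x^\ast) \;\le\; G_t \;\le\; \frac{\alpha_{t_0}}{\alpha_t}\, G_{t_0}.
\]
% The growth of alpha_t dictates the convergence rate (for example, polynomial
% growth in t gives a polynomial rate), and a discretization is rate-optimal
% when its per-iteration discretization error does not violate this invariant.

In discrete time the same invariant is enforced iteration by iteration, so an algorithm's per-step change in alpha_k * G_k must absorb the discretization error; this is the sense in which the abstract's iteration-complexity-optimal methods cancel that error.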

Locations

  • arXiv (Cornell University)
  • DataCite API
  • SIAM Journal on Optimization

Similar Works

Action Title Year Authors
+ An optimal first-order primal-dual gap reduction framework for constrained convex optimization 2015 Quoc Tran Dinh
Volkan Cevher
+ Guaranteed upper bounds for iteration errors and modified Kacanov schemes via discrete duality 2025 Lars Diening
Johannes Storn
+ Inexact First-Order Primal-Dual Algorithms 2018 Julian Rasch
Antonin Chambolle
+ Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems 2019 Yuyuan Ouyang
Yangyang Xu
+ Splitting the Smoothed Primal-Dual Gap: Optimal Alternating Direction Methods 2015 Quoc Tran Dinh
Volkan Cevher
+ Differential estimates for fast first-order multilevel nonconvex optimisation 2024 Neil D. Dizon
Tuomo Valkonen
+ Sublinear Convergence Rates of Extragradient-Type Methods: A Survey on Classical and Recent Developments 2023 Quoc Tran-Dinh
+ Non-ergodic convergence rates of first-order primal-dual algorithms for saddle point problems 2023 Xin He
Nan‐jing Huang
Ya-Ping Fang
+ Unified linear convergence of first-order primal-dual algorithms for saddle point problems 2022 Fan Jiang
Zhongming Wu
Xingju Cai
Hongchao Zhang
+ Discrete Weak Duality of Hybrid High-Order Methods for Convex Minimization Problems 2024 Ngoc Tien Tran
+ Inexact Model: A Framework for Optimization and Variational Inequalities 2019 Fedor Stonyakin
Alexander Gasnikov
Alexander Tyurin
Dmitry Pasechnyuk
Artem Agafonov
Pavel Dvurechensky
Darina Dvinskikh
Alexey Kroshnin
Victorya Piskunova
+ High-Order Reduced-Gradient Methods for Composite Variational Inequalities 2023 Yurii Nesterov
+ A Unified Convergence Rate Analysis of The Accelerated Smoothed Gap Reduction Algorithm 2021 Quoc Tran-Dinh
+ Constrained convex minimization via model-based excessive gap 2014 Quoc Tran-Dinh
Volkan Cevher
+ A Primal-Dual Algorithmic Framework for Constrained Convex Minimization 2014 Quoc Tran-Dinh
Volkan Cevher
+ A first-order inexact primal-dual algorithm for a class of convex-concave saddle point problems 2021 Fan Jiang
Zhongming Wu
Xingju Cai
Hongchao Zhang

Cited by (44)

Action Title Year Authors
+ Understanding Nesterov's Acceleration via Proximal Point Method 2022 Kwangjun Ahn
Suvrit Sra
+ A Continuized View on Nesterov Acceleration for Stochastic Gradient Descent and Randomized Gossip 2021 Mathieu Even
Raphaël Berthier
Francis Bach
Nicolas Flammarion
Pierre Gaillard
Hadrien Hendrikx
Laurent Massoulié
Adrien Taylor
+ Fast and safe: accelerated gradient methods with optimality certificates and underestimate sequences 2021 Majid Jahani
Naga V. C. Gudapati
Chenxin Ma
Rachael Tappenden
Martin Takáč
+ Distributed Mirror Descent Algorithm With Bregman Damping for Nonsmooth Constrained Optimization 2023 Guanpu Chen
Gehui Xu
Weijian Li
Yiguang Hong
+ Breaking the $O(1/\epsilon)$ Optimal Rate for a Class of Minimax Problems 2020 Chaobing Song
Yong Jiang
Yi Ma
+ Generalized Momentum-Based Methods: A Hamiltonian Perspective 2019 Jelena Diakonikolas
Michael I. Jordan
+ A Dynamical Systems Perspective on Nesterov Acceleration 2019 Michael Muehlebach
Michael I. Jordan
+ Potential Function-based Framework for Making the Gradients Small in Convex and Min-Max Optimization 2021 Jelena Diakonikolas
Puqian Wang
+ A unified differential equation solver approach for separable convex optimization: splitting, acceleration and nonergodic rate 2021 Hao Luo
Zihang Zhang
+ Generalized Momentum-Based Methods: A Hamiltonian Perspective 2021 Jelena Diakonikolas
Michael I. Jordan
+ Perturbed Fenchel duality and first-order methods 2018 David H. Gutman
Javier Peña
+ Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization 2020 Chaobing Song
Yong Jiang
Yi Ma
+ Convergence Analysis of Accelerated Stochastic Gradient Descent under the Growth Condition 2020 You-Lin Chen
Sen Na
Mladen Kolar
+ Direct Runge-Kutta Discretization Achieves Acceleration 2018 Jingzhao Zhang
Aryan Mokhtari
Suvrit Sra
Ali Jadbabaie
+ From differential equation solvers to accelerated first-order methods for convex optimization 2021 Hao Luo
Long Chen
+ Monotone Inclusions, Acceleration and Closed-Loop Control 2021 Tianyi Lin
Michael I. Jordan
+ Towards Unified Acceleration of High-Order Algorithms under Hölder Continuity and Uniform Convexity. 2019 Chaobing Song
Yi Ma
+ A Control-Theoretic Perspective on Optimal High-Order Optimization 2019 Tianyi Lin
Michael I. Jordan
+ A Direct $\tilde{O}(1/\epsilon)$ Iteration Parallel Algorithm for Optimal Transport 2019 Arun Jambulapati
Aaron Sidford
Kevin Tian
+ Fast and Safe: Accelerated gradient methods with optimality certificates and underestimate sequences 2017 Majid Jahani
Naga V. C. Gudapati
Chenxin Ma
Rachael Tappenden
Martin Takáč
+ Acceleration Methods 2021 Alexandre d’Aspremont
Damien Scieur
Adrien Taylor
+ From differential equation solvers to accelerated first-order methods for convex optimization 2019 Hao Luo
Long Chen
+ Locally Accelerated Conditional Gradients 2019 Jelena Diakonikolas
Alejandro Carderera
Sebastian Pokutta
+ Understanding the Acceleration Phenomenon via High-Resolution Differential Equations 2018 Bin Shi
Simon S. Du
Michael I. Jordan
Weijie Su
+ From Nesterov's Estimate Sequence to Riemannian Acceleration 2020 Kwangjun Ahn
Suvrit Sra
+ From Proximal Point Method to Nesterov's Acceleration. 2020 Kwangjun Ahn
+ Global Riemannian Acceleration in Hyperbolic and Spherical Spaces 2020 David Martínez-Rubio
+ A control-theoretic perspective on optimal high-order optimization 2021 Tianyi Lin
Michael I. Jordan
+ Conjugate Gradients and Accelerated Methods Unified: The Approximate Duality Gap View 2019 Jelena Diakonikolas
Lorenzo Orecchia
+ Heavy Ball Momentum for Conditional Gradient 2021 Bingcong Li
Alireza Sadeghi
Georgios B. Giannakis
+ Unified Acceleration of High-Order Algorithms under General Hölder Continuity 2021 Chaobing Song
Yong Jiang
Yi Ma
+ Optimization with Momentum: Dynamical, Control-Theoretic, and Symplectic Perspectives 2020 Michael Muehlebach
Michael I. Jordan
+ Federated Composite Optimization 2020 Honglin Yuan
Manzil Zaheer
Sashank J. Reddi
+ No-Regret Dynamics in the Fenchel Game: A Unified Framework for Algorithmic Convex Optimization 2021 Jun-Kun Wang
Jacob Abernethy
Kfir Y. Levy
+ Acceleration via Symplectic Discretization of High-Resolution Differential Equations 2019 Bin Shi
Simon S. Du
Weijie Su
Michael I. Jordan
+ Robust Reinforcement Learning via Adversarial training with Langevin Dynamics 2020 Parameswaran Kamalaruban
Yu‐Ting Huang
Ya-Ping Hsieh
Paul Rolland
Cheng Shi
Volkan Cevher
+ Alternating Randomized Block Coordinate Descent 2018 Jelena Diakonikolas
Lorenzo Orecchia
+ Continuous-Time Convergence Rates in Potential and Monotone Games 2022 Bolin Gao
Lacra Pavel
+ Optimal anytime regret for two experts 2020 Nicholas J. A. Harvey
Christopher Liaw
Edwin Perkins
Sikander Randhawa
+ Discrete processes and their continuous limits 2020 Uri M. Ascher

Citing (39)

Action Title Year Authors
+ Composite objective mirror descent 2010 John C. Duchi
Shai Shalev‐Shwartz
Yoram Singer
Ambuj Tewari
+ The extragradient method for finding saddle points and other problems 1976 G. M. Korpelevich
+ Lectures on Modern Convex Optimization 2001 Aharon Ben‐Tal
Arkadi Nemirovski
+ Optimization for Machine Learning 2011 Suvrit Sra
Sebastian Nowozin
Stephen J. Wright
+ Convex Analysis and Optimization 2003 Dimitri P. Bertsekas
Angelia Nedić
Asuman Ozdaglar
+ Universal gradient methods for convex optimization problems 2014 Yu. Nesterov
+ A new approach to computing maximum flows using electrical flows 2013 Yin Tat Lee
Satish Rao
Nikhil Srivastava
+ Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems 2004 Daniel A. Spielman
Shang‐Hua Teng
+ Convex Analysis and Monotone Operator Theory in Hilbert Spaces 2017 Heinz H. Bauschke
Patrick L. Combettes
+ Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems 2004 Arkadi Nemirovski
+ Nearly Maximum Flows in Nearly Linear Time 2013 Jonah Sherman
+ Complexity bounds for primal-dual methods minimizing the model of objective function 2017 Yu. Nesterov
+ A geometric alternative to Nesterov's accelerated gradient descent 2015 Sébastien Bubeck
Yin Tat Lee
Mohit Singh
+ Smooth minimization of non-smooth functions 2004 Yu. Nesterov
+ Primal-dual subgradient methods for convex problems 2007 Yurii Nesterov
+ A differential equation for modeling Nesterov's accelerated gradient method: theory and insights 2016 Weijie Su
Stephen Boyd
Emmanuel J. Candès
+ Potential-Function Proofs for First-Order Methods 2017 Nikhil Bansal
Anupam Gupta
+ Lectures on Convex Optimization 2018 Yurii Nesterov
+ A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights 2015 Weijie Su
Stephen Boyd
Emmanuel J. Candès
+ A variational perspective on accelerated methods in optimization 2016 Andre Wibisono
Ashia C. Wilson
Michael I. Jordan
+ An Optimal First Order Method Based on Optimal Quadratic Averaging 2018 Dmitriy Drusvyatskiy
Maryam Fazel
Scott Roy
+ Smooth minimization of non-smooth functions 2005 Yu. Nesterov
+ Universal gradient methods for convex optimization problems 2015 Yurii Nesterov
+ Convex Analysis and Monotone Operator Theory in Hilbert Spaces 2011 Heinz H. Bauschke
Patrick L. Combettes
+ A simple, combinatorial algorithm for solving SDD systems in nearly-linear time 2013 Jonathan A. Kelner
Lorenzo Orecchia
Aaron Sidford
Zeyuan Allen Zhu
+ Theory of Convex Optimization for Machine Learning. 2014 Sébastien Bubeck
+ Convex Optimization: Algorithms and Complexity 2014 Sébastien Bubeck
+ An Almost-Linear-Time Algorithm for Approximate Max Flow in Undirected Graphs, and its Multicommodity Generalizations 2013 Jonathan A. Kelner
Yin Tat Lee
Lorenzo Orecchia
Aaron Sidford
+ A Lyapunov Analysis of Momentum Methods in Optimization 2016 Ashia C. Wilson
Benjamin Recht
Michael I. Jordan
+ Integration Methods and Accelerated Optimization Algorithms 2017 Damien Scieur
Vincent Roulet
Francis Bach
Alexandre d’Aspremont
+ Solving Packing and Covering LPs in $\tilde{O}(\frac{1}{\epsilon^2})$ Distributed Iterations with a Single Algorithm and Simpler Analysis 2017 Jelena Diakonikolas
Lorenzo Orecchia
+ Alternating Randomized Block Coordinate Descent 2018 Jelena Diakonikolas
Lorenzo Orecchia
+ On Acceleration with Noise-Corrupted Gradients 2018 Michael B. Cohen
Jelena Diakonikolas
Lorenzo Orecchia
+ Width-Independence Beyond Linear Objectives: Distributed Fair Packing and Covering Algorithms. 2018 Jelena Diakonikolas
Maryam Fazel
Lorenzo Orecchia
+ Constrained Submodular Maximization: Beyond 1/e 2016 Alina Ene
Huy L. Nguyễn
+ Accelerated Extra-Gradient Descent: A Novel Accelerated First-Order Method 2017 Jelena Diakonikolas
Lorenzo Orecchia
+ A Universal Catalyst for First-Order Optimization 2015 Hongzhou Lin
Julien Mairal
Zaïd Harchaoui