The Bit Complexity of Efficient Continuous Optimization

Type: Article

Publication Date: 2023-11-06

Citations: 3

DOI: https://doi.org/10.1109/focs57990.2023.00125

Abstract

We analyze the bit complexity of efficient algorithms for fundamental optimization problems, such as linear regression, $p$-norm regression, and linear programming (LP). State-of-the-art algorithms are iterative, and in terms of the number of arithmetic operations, they match the current time complexity of multiplying two n-by-n matrices (up to polylogarithmic factors). However, previous work has typically assumed infinite-precision arithmetic, and due to complicated inverse maintenance techniques, the actual running times of these algorithms are unknown. To settle the running time and bit complexity of these algorithms, we demonstrate that a core common subroutine, known as inverse maintenance, is backward stable. Additionally, we show that iterative approaches for solving constrained weighted regression problems can be carried out with bounded-error preconditioners. Specifically, we prove that linear programs can be solved approximately in matrix multiplication time multiplied by polylog factors that depend on the condition number $\kappa$ of the matrix and the inner and outer radii of the LP problem. $p$-norm regression can be solved approximately in matrix multiplication time multiplied by polylog factors in $\kappa$. Lastly, linear regression can be solved approximately in input-sparsity time multiplied by polylog factors in $\kappa$. Furthermore, we present results for going below matrix multiplication time for $p$-norm regression by utilizing faster solvers for sparse linear systems.
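
The inverse maintenance subroutine referenced in the abstract amortizes the cost of re-solving linear systems whose coefficient matrix changes slowly from one iteration to the next. As a minimal illustration of that general idea (not the paper's algorithm), the NumPy sketch below updates $(A^\top D A)^{-1}$ via the Sherman-Morrison-Woodbury identity when only a few diagonal weights of $D$ change; the function name `woodbury_update` and its interface are choices made here for exposition.

```python
import numpy as np


def woodbury_update(M_inv, A, idx, d_old, d_new):
    """Refresh (A^T diag(d) A)^{-1} after the weights change on the rows in `idx`.

    Sherman-Morrison-Woodbury update: roughly O(n^2 k + k^3) work for
    k = len(idx), instead of refactorizing the n x n matrix from scratch.
    Assumes the weight changes on `idx` are nonzero.
    """
    A_S = A[idx, :]                      # the k rows whose weights changed
    delta = d_new[idx] - d_old[idx]      # nonzero weight increments
    # k x k "capacitance" matrix of the Woodbury identity
    cap = np.diag(1.0 / delta) + A_S @ M_inv @ A_S.T
    return M_inv - M_inv @ A_S.T @ np.linalg.solve(cap, A_S @ M_inv)


# Tiny check against recomputing the inverse directly.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 20))
d = rng.uniform(1.0, 2.0, size=200)
M_inv = np.linalg.inv(A.T @ (d[:, None] * A))
d_new = d.copy()
idx = np.array([3, 7, 11])
d_new[idx] += 0.5
assert np.allclose(woodbury_update(M_inv, A, idx, d, d_new),
                   np.linalg.inv(A.T @ (d_new[:, None] * A)))
```

The abstract's backward-stability claim concerns this type of update under finite-precision arithmetic; the snippet above simply runs in floating point and carries no such guarantee.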

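For the input-sparsity-time linear regression result, a standard pattern in this line of work is sketch-and-precondition: compress $A$ with a sparse random embedding, factor the small sketch to obtain a preconditioner, and then iterate on the well-conditioned system. The code below is an illustrative, self-contained version of that pattern under simplifying assumptions (dense NumPy arrays, a CountSketch-style embedding, a sketch size and unit step size chosen for brevity); the function name and parameters are not taken from the paper.

```python
import numpy as np


def sketch_and_precondition_lstsq(A, b, sketch_rows=None, iters=50, seed=0):
    """Illustrative sketch-and-precondition least-squares solver.

    1. Form S @ A with a CountSketch-style sparse embedding; this touches each
       entry of A once, which is where the "input-sparsity time" phrasing comes from.
    2. Use the R factor of a QR of S @ A as a preconditioner, so A @ inv(R) is
       close to orthonormal with high probability.
    3. Run fixed-step gradient iterations on the preconditioned normal equations.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    s = sketch_rows or min(m, max(4 * n, 2 * n * n))  # theory asks for O(n^2) buckets
    # CountSketch: add each row of A, with a random sign, into one of s buckets.
    buckets = rng.integers(0, s, size=m)
    signs = rng.choice([-1.0, 1.0], size=m)
    SA = np.zeros((s, n))
    np.add.at(SA, buckets, signs[:, None] * A)
    R = np.linalg.qr(SA, mode="r")
    x = np.zeros(n)
    for _ in range(iters):
        residual = b - A @ x
        # Preconditioned gradient step: x += R^{-1} R^{-T} A^T residual.
        g = np.linalg.solve(R.T, A.T @ residual)
        x += np.linalg.solve(R, g)
    return x


# Usage: agrees with NumPy's dense least-squares solver on a random instance.
rng = np.random.default_rng(1)
A = rng.standard_normal((5000, 30))
b = rng.standard_normal(5000)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(sketch_and_precondition_lstsq(A, b) - x_exact))
```

The paper's concern is how much precision such preconditioned iterations actually need; the snippet above ignores that question and simply runs in double precision.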
Locations

  • arXiv (Cornell University)

Similar Works

  • The Bit Complexity of Efficient Continuous Optimization (2023) - Mehrdad Ghadiri, Richard Peng, Santosh Vempala
  • Improved Iteration Complexities for Overconstrained $p$-Norm Regression (2021) - Arun Jambulapati, Yang P. Liu, Aaron Sidford
  • Fast Algorithms for $\ell_p$-Regression (2024) - Deeksha Adil, Rasmus Kyng, Richard Peng, Sushant Sachdeva
  • Achieving optimal complexity guarantees for a class of bilevel convex optimization problems (2023) - Sepideh Samadi, Daniel Burbano, Farzad Yousefian
  • On the Convergence of Inexact Predictor-Corrector Methods for Linear Programming (2022) - Gregory Dexter, Agniva Chowdhury, Haim Avron, Petros Drineas
  • Fast Algorithms for $\ell_p$-Regression (2022) - Deeksha Adil, Rasmus Kyng, Richard Peng, Sushant Sachdeva
  • Acceleration Meets Inverse Maintenance: Faster $\ell_{\infty}$-Regression (2024) - Deeksha Adil, Shunhua Jiang, Rasmus Kyng
  • Faster $p$-Norm Regression Using Sparsity (2021) - Mehrdad Ghadiri, Richard Peng, Santosh Vempala
  • Exact Matrix Factorization Updates for Nonlinear Programming (2023) - Adolfo R. Escobedo
  • Computational Methods for Linear-Quadratic Optimization (1999) - Peter Benner
  • Bi-sparse optimization-based least squares regression (2019) - Zhiwang Zhang, Jing He, Guangxia Gao, Yingjie Tian
  • Computational Methods For Linear Programming (1994) - David F. Shanno
  • Sparse Optimization (2017) - Zhu Han, Mingyi Hong, Dan Wang
  • Large-Scale Optimization with Linear Equality Constraints Using Reduced Compact Representation (2022) - Johannes J. Brust, Roummel F. Marcia, Cosmin G. Petra, Michael A. Saunders
  • An overview of sparse convex optimization (2018) - Jacob Mantjitji Modiba
  • Sparse Regression at Scale: Branch-and-Bound rooted in First-Order Optimization (2020) - Hussein Hazimeh, Rahul Mazumder, Ali Saab
  • Sparse regression at scale: branch-and-bound rooted in first-order optimization (2021) - Hussein Hazimeh, Rahul Mazumder, Ali Saab
  • Reduced complexity methods for large-scale convex optimization (2012) - Anatoli Juditsky, Arkadi Nemirovski