Branch-and-Bound versus Lift-and-Project Relaxations in Combinatorial Optimization

Abstract

In this note, we consider a theoretical framework for comparing branch-and-bound with classical lift-and-project hierarchies. We simplify our analysis by streamlining the definition of branch-and-bound. We introduce “skewed k-trees”, which give a hierarchy of relaxations that is incomparable to that of Sherali-Adams, and we show that it is much better for some instances. We also give an example where lift-and-project does very well and branch-and-bound does not. Finally, we study the set of branch-and-bound trees of height at most k and “squeeze” their effectiveness between two well-known lift-and-project procedures.

Locations

  • Mathematical Programming
  • arXiv (Cornell University)

Summary

This paper presents a fundamental theoretical comparison between Branch-and-Bound (BB) and various Lift-and-Project (L&P) hierarchies in the context of combinatorial optimization, aiming to clarify their relative strengths and weaknesses. The significance lies in offering a deeper understanding of when each strategy is more effective for deriving strong polyhedral relaxations for integer programs, informing both algorithm design and theoretical analysis.

The central problem addressed is finding a relaxation Q such that the integer hull P_I (the convex hull of feasible integer solutions) is contained within Q, which is itself contained within the continuous relaxation P, with Q being as “close” as possible to P_I. This often involves constructing extended formulations, which represent Q in a higher-dimensional space.
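To make the containment P_I ⊆ Q ⊆ P concrete: for a maximization problem, the optimum over the continuous relaxation P upper-bounds the optimum over the integer hull P_I, and the gap between the two is what stronger relaxations Q try to close. A toy sketch on a tiny 0/1 knapsack (the instance data and function names are illustrative, not from the paper); it uses the standard fact that the fractional-knapsack LP is solved exactly by the value-per-weight greedy:

```python
from itertools import product

# Toy 0/1 knapsack: maximize sum(v[i]*x[i]) s.t. sum(w[i]*x[i]) <= cap.
# P is the LP relaxation (x in [0,1]^n); P_I is the convex hull of
# feasible 0/1 points.  Data chosen so the two optima differ.
v = [3.0, 2.0, 2.0]
w = [4.0, 3.0, 3.0]
cap = 6.0

def lp_bound(v, w, cap):
    """Optimum over P: the fractional knapsack LP is solved exactly by
    filling items in decreasing value/weight ratio."""
    items = sorted(range(len(v)), key=lambda i: v[i] / w[i], reverse=True)
    total, room = 0.0, cap
    for i in items:
        take = min(1.0, room / w[i])
        total += take * v[i]
        room -= take * w[i]
        if room <= 0:
            break
    return total

def ip_optimum(v, w, cap):
    """Optimum over P_I: brute-force the feasible 0/1 points (fine for tiny n)."""
    best = 0.0
    for x in product([0, 1], repeat=len(v)):
        if sum(wi * xi for wi, xi in zip(w, x)) <= cap:
            best = max(best, sum(vi * xi for vi, xi in zip(v, x)))
    return best

print(lp_bound(v, w, cap))   # optimum over P (fractional)
print(ip_optimum(v, w, cap)) # optimum over P_I (integral)
```

On this instance the LP relaxation is strictly weaker than the integer hull, which is exactly the gap a good relaxation Q should shrink.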

The paper’s key innovations and their implications are:

  1. Streamlined Branch-and-Bound (BB) Definition: A simplified mathematical framework for analyzing BB relaxations is introduced. The authors define T(P) as the convex hull of the union of “atoms” (feasible regions at the leaves of a BB tree T). This definition allows for rigorous comparison by abstracting away algorithmic details like LP solution values or pruning. This builds on prior work by Balas, who showed that BB trees can implicitly define extended formulations.

  2. Introduction of “Skewed k-trees”: This is a novel class of branch-and-bound enumeration trees specifically designed to highlight BB’s strengths. The paper demonstrates that for problems like the uniform knapsack with small capacity and maximum clique on sparse graphs (characterized by degeneracy), these specialized BB trees achieve the integer hull P_I in polynomial size. This is a significant finding because, for these same instances, the well-established Sherali-Adams (SA) hierarchy is proven to require exponentially sized extended formulations to achieve the same relaxation. This showcases a clear structural advantage of BB for certain problem types.

  3. BB’s Advantage for “No-Good Constraints”: The paper presents another class of problems defined by “no-good” constraints, where a polynomial-sized BB tree is sufficient to find the integer hull. Crucially, it shows that for these problems, both the Sherali-Adams (SA) hierarchy and even the canonical Lift-and-Project (L^k(P)) fail to capture P_I with polynomial-sized formulations. This further emphasizes that BB can outperform L&P for specific problem structures.

  4. Demonstrating BB’s Limitations and Incomparability: To provide a balanced view, the paper constructs a specific polytope P where the canonical Lift-and-Project procedure L^k(P) excels. Specifically, it shows an instance where L^2(P) (two rounds of L&P) suffices to obtain P_I, but any branch-and-bound tree below a certain exponential size fails to separate P_I from P. This crucial counter-example proves that neither BB nor L&P is universally superior, establishing a true “incomparability” between the two approaches.

  5. The “Squeezing” Theorem for Bounded Height BB: This is a major theoretical contribution. The paper defines T^k(P) as the strongest relaxation achievable by any BB tree of height at most k. It then rigorously proves that this operator is “squeezed” between two well-known lift-and-project procedures: L^k(P) ⊆ T^k(P) ⊆ B^k(P).

    • L^k(P) is the canonical Lift-and-Project hierarchy, which iterates k times.
    • B^k(P) is the sequential convexification procedure from Balas, Ceria, and Cornuéjols (BCC), which computes the convex hull of solutions where k variables are fixed to integer values.
      This theorem precisely positions the power of bounded-height branch-and-bound within the established landscape of polyhedral cutting plane and lift-and-project hierarchies. It implies that while BB’s behavior can be complex and instance-dependent, its strength for bounded depths is bounded by powerful L&P techniques.
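In the streamlined model above, the bound delivered by a tree T is the optimum over T(P) = conv(union of atoms), which for a maximization objective equals the maximum of the leaf LP values. A minimal self-contained sketch on a tiny knapsack, for the full height-k tree that branches on the first k variables (the BCC-style fixing of k variables); the helper names `leaf_lp` and `full_tree_bound` are illustrative, not from the paper:

```python
from itertools import product

# Tiny knapsack: maximize 3x0 + 2x1 + 2x2 s.t. 4x0 + 3x1 + 3x2 <= 6, x in {0,1}^3.
v = [3.0, 2.0, 2.0]
w = [4.0, 3.0, 3.0]
cap = 6.0

def leaf_lp(fixed):
    """LP bound of one atom: variables in `fixed` (index -> 0/1) are pinned;
    the remaining fractional knapsack is solved exactly by the ratio greedy."""
    room = cap - sum(w[i] for i, b in fixed.items() if b == 1)
    if room < 0:
        return float('-inf')  # infeasible atom (pruned leaf)
    total = sum(v[i] for i, b in fixed.items() if b == 1)
    free = sorted((i for i in range(len(v)) if i not in fixed),
                  key=lambda i: v[i] / w[i], reverse=True)
    for i in free:
        take = min(1.0, room / w[i])
        total += take * v[i]
        room -= take * w[i]
        if room <= 0:
            break
    return total

def full_tree_bound(k):
    """Bound from the 2^k-leaf tree branching on x_0..x_{k-1}: the optimum
    over conv(union of atoms) is the max of the leaf LP values."""
    return max(leaf_lp(dict(zip(range(k), bits)))
               for bits in product([0, 1], repeat=k))

print(full_tree_bound(0))  # root only: plain LP bound over P
print(full_tree_bound(3))  # all variables fixed: integer optimum over P_I
```

The bounds are monotonically nonincreasing in k and reach the integer optimum once every variable is branched on, matching the intuition that deeper trees give relaxations closer to P_I.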

The main prior ingredients upon which this work builds include:
* Balas’s work on disjunctive programming and extended formulations [Bal85]: This forms the foundational understanding that branch-and-bound can be viewed as implicitly generating extended formulations.
* The Sherali-Adams (SA) hierarchy [SA90]: A prominent lift-and-project hierarchy used as a key benchmark for comparison, particularly in the “skewed k-trees” and “no-good constraints” results where BB demonstrates superiority.
* The canonical Lift-and-Project (L&P) hierarchies [LS91, Lau03]: Specifically, the iterative application of the L operator, which serves as one of the bounding operators in the “squeezing” theorem and the basis for the counter-example demonstrating BB’s limitations.
* The Balas-Ceria-Cornuéjols (BCC) sequential convexification [BCC93]: This procedure defines the B^k(P) operator, which provides the upper bound in the “squeezing” theorem, thus completing the precise theoretical placement of bounded-height BB.
* General concepts from Integer Programming and Combinatorial Optimization: Including polyhedral relaxations, integer hulls, and extended formulations, as outlined in textbooks by Schrijver [Sch98, Sch03] and Wolsey & Nemhauser [WN99].

In summary, the paper offers a novel theoretical framework to compare Branch-and-Bound with Lift-and-Project relaxations. By introducing “skewed k-trees” and specific counter-examples, it fundamentally reshapes our understanding of their relative strengths and weaknesses, demonstrating that BB can surprisingly outperform L&P hierarchies for certain problem structures, while also clearly establishing its limitations. The “squeezing” theorem then provides a unifying theoretical result, precisely positioning the power of bounded-height BB within the existing landscape of advanced polyhedral relaxation techniques.

Recently, Cygan, Kowalik, and Wykurz [IPL 2009] gave sub-exponential-time approximation algorithms for the Set-Cover problem with approximation ratios better than ln(n). In light of this result, it is natural to ask whether such improvements can be achieved using lift-and-project methods. We present a simpler combinatorial algorithm which has nearly the same time-approximation tradeoff as the algorithm of Cygan et al., and which lends itself naturally to a lift-and-project based approach. At a high level, our approach is similar to the recent work of Karlin, Mathieu, and Nguyen [IPCO 2011], who examined a known PTAS for Knapsack (similar to our combinatorial Set-Cover algorithm) and its connection to hierarchies of LP and SDP relaxations for Knapsack. For Set-Cover, we show that, indeed, using the trick of "lifting the objective function", we can match the performance of our combinatorial algorithm using the LP hierarchy of Lovász and Schrijver. We also show that this trick is essential: even in the stronger LP hierarchy of Sherali and Adams, the integrality gap remains at least (1-eps) ln(n) at level Omega(n) (when the objective function is not lifted). As shown by Alekhnovich, Arora, and Tourlakis [STOC 2005], Set-Cover relaxations stemming from SDP hierarchies (specifically, LS+) have similarly large integrality gaps. This stands in contrast to Knapsack, where Karlin et al. showed that the (much stronger) Lasserre SDP hierarchy reduces the integrality gap to (1+eps) at level O(1). For completeness, we show that LS+ also reduces the integrality gap for Knapsack to (1+eps).
This result may be of independent interest, as our LS+ based rounding and analysis are rather different from those of Karlin et al., and to the best of our knowledge this is the first explicit demonstration of such a reduction in the integrality gap of LS+ relaxations after few rounds.
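The combinatorial Set-Cover algorithm alluded to above trades running time against the approximation ratio; its baseline is the classic greedy, which achieves an H_n ≤ ln(n)+1 guarantee. A standard greedy sketch (not the authors' sub-exponential variant; names are illustrative):

```python
def greedy_set_cover(universe, sets):
    """Classic greedy set cover: repeatedly pick the set covering the most
    still-uncovered elements.  Returns the chosen set indices in order."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Index of the set with the largest marginal coverage (ties -> lowest index).
        i = max(range(len(sets)), key=lambda j: len(sets[j] & uncovered))
        gain = sets[i] & uncovered
        if not gain:
            raise ValueError("infeasible: some element appears in no set")
        chosen.append(i)
        uncovered -= gain
    return chosen

# Small example: 6 elements, 4 candidate sets.
U = range(1, 7)
S = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 5}]
print(greedy_set_cover(U, S))  # picks indices covering all of U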
Branch and cut is the dominant paradigm for solving a wide range of mathematical programming problems -- linear or nonlinear -- combining efficient search (via branch and bound) and relaxation-tightening procedures (via cutting planes, or cuts). While there is a wealth of computational experience behind existing cutting strategies, there is simultaneously a relative lack of theoretical explanations for these choices, and for the tradeoffs involved therein. Recent papers have explored abstract models for branching and for comparing cuts with branch and bound. However, to model practice, it is crucial to understand the impact of jointly considering branching and cutting decisions. In this paper, we provide a framework for analyzing how cuts affect the size of branch-and-cut trees, as well as their impact on solution time. Our abstract model captures some of the key characteristics of real-world phenomena in branch-and-cut experiments, regarding whether to generate cuts only at the root or throughout the tree, how many rounds of cuts to add before starting to branch, and why cuts seem to exhibit nonmonotonic effects on the solution process.
We study integrality gap (IG) lower bounds on strong LP and SDP relaxations derived by the Sherali-Adams (SA), Lovász-Schrijver-SDP (LS+), and Sherali-Adams-SDP (SA+) lift-and-project (L&P) systems for the t-Partial-Vertex-Cover (t-PVC) problem, a variation of the classic Vertex-Cover problem in which only t edges need to be covered. t-PVC admits a 2-approximation using various algorithmic techniques, all relying on a natural LP relaxation. Starting from this LP relaxation, our main results assert that for every epsilon > 0, level-Theta(n) LPs or SDPs derived by all known L&P systems that have been used for positive algorithmic results (except the Lasserre hierarchy) have IGs at least (1-epsilon)n/t, where n is the number of vertices of the input graph. Our lower bounds are nearly tight. Our results show that restricted yet powerful models of computation derived by many L&P systems fail to witness c-approximate solutions to t-PVC for any constant c, and for t = O(n). This is one of the very few known examples of an intractable combinatorial optimization problem for which LP-based algorithms induce a constant approximation ratio, yet lift-and-project LP and SDP tightenings of the same LP have unbounded IGs. We also show that the SDP that has given the best algorithm known for t-PVC has integrality gap n/t on instances that can be solved by the level-1 LP relaxation derived by the LS system. This constitutes another rare phenomenon where (even on specific instances) a static LP outperforms an SDP that has been used for the best approximation guarantee for the problem at hand.
Finally, one of our main contributions is that we make explicit a new and simple methodology for constructing solutions to LP relaxations that almost trivially satisfy constraints derived by all SDP L&P systems known to be useful for algorithmic positive results (except the Lasserre system).
This work concerns the exact solution of the bi-objective bi-dimensional knapsack problem using a branch-and-cut algorithm. This algorithm combines the ideas of cutting-plane methods with those of the branch-and-bound algorithm. The branch-and-bound algorithm relies, as its name suggests, on the evaluation of subproblems. The convex relaxation has proven effective in practice for evaluating subproblems of several bi-objective problems, but it is costly for the bi-objective bi-dimensional knapsack problem. The continuous relaxation can be solved relatively easily by the parametric simplex algorithm, but the resulting upper bound set is considerably less tight. Consequently, when this relaxation is used, the search trees are generally large. The goal of this work is to improve the quality of the upper bound set obtained from the continuous relaxation by introducing cover inequalities at each node of the branch-and-bound algorithm. The algorithm thus becomes a branch-and-cut. We compare the performance obtained with several variants of the cover inequalities, such as extended covers and lifted cover inequalities. The method is validated experimentally.
Sparse cutting-planes are often the ones used in mixed-integer programming (MIP) solvers, since they help in solving the linear programs encountered during branch-and-bound more efficiently. However, how well can we approximate the integer hull by just using sparse cutting-planes? In order to understand this question better, given a polytope $P$ (e.g. the integer hull of a MIP), let $P^k$ be its best approximation using cuts with at most $k$ non-zero coefficients. We consider $d(P, P^k) = \max_{x \in P^k} \left(\min_{y \in P} \| x - y\|\right)$ as a measure of the quality of sparse cuts. In our first result, we present general upper bounds on $d(P, P^k)$ which depend on the number of vertices in the polytope and exhibit three phases as $k$ increases. Our bounds imply that if $P$ has polynomially many vertices, using half sparsity already approximates it very well. Second, we present a lower bound on $d(P, P^k)$ for random polytopes that shows that the upper bounds are quite tight. Third, we show that for a class of hard packing IPs, sparse cutting-planes do not approximate the integer hull well; that is, $d(P, P^k)$ is large for such instances unless $k$ is very close to $n$. Finally, we show that using sparse cutting-planes in extended formulations is at least as good as using them in the original polyhedron, and give an example where the former is actually much better.
Branch-and-cut is the most widely used algorithm for solving integer programs, employed by commercial solvers like CPLEX and Gurobi. Branch-and-cut has a wide variety of tunable parameters that have a huge impact on the size of the search tree that it builds, but are challenging to tune by hand. An increasingly popular approach is to use machine learning to tune these parameters: using a training set of integer programs from the application domain at hand, the goal is to find a configuration with strong predicted performance on future, unseen integer programs from the same domain. If the training set is too small, a configuration may have good performance over the training set but poor performance on future integer programs. In this paper, we prove sample complexity guarantees for this procedure, which bound how large the training set should be to ensure that for any configuration, its average performance over the training set is close to its expected future performance. Our guarantees apply to parameters that control the most important aspects of branch-and-cut: node selection, branching constraint selection, and cutting plane selection, and are sharper and more general than those found in prior research.
We introduce a natural knapsack intersection hierarchy for strengthening linear programming relaxations of packing integer programs, i.e., $\max\{w^Tx:x\in P\cap\{0,1\}^n\}$ where $P=\{x\in[0,1]^n:Ax \leq b\}$ and $A,b,w\ge0$. The $t^{th}$ level $P^{t}$ corresponds to adding cuts associated with the integer hull of the intersection of any $t$ knapsack constraints (rows of the constraint matrix). This model captures the maximum possible strength of "$t$-row cuts", an approach often used by solvers for small $t$. If $A$ is $m \times n$, then $P^m$ is the integer hull of $P$ and $P^1$ corresponds to adding cuts for each associated single-row knapsack problem. Thus, even separating over $P^1$ is NP-hard. However, for fixed $t$ and any $\epsilon>0$, results of Pritchard imply there is a polytime $(1+\epsilon)$-approximation for $P^{t}$. We then investigate the hierarchy's strength in the context of the well-studied all-or-nothing flow problem in trees (also called unsplittable flow on trees). For this problem, we show that the integrality gap of $P^t$ is $O(n/t)$ and give examples where the gap is $\Omega(n/t)$. We then examine the stronger formulation $P_{\text{rank}}$ where all rank constraints are added. For $P_{\text{rank}}^t$, our best lower bound drops to $\Omega(1/c)$ at level $t=n^c$ for any $c>0$. Moreover, on a well-known class of "bad instances" due to Friggstad and Gao, we show that we can achieve this gap; hence a constant integrality gap for these instances is obtained at level $n^c$.
Cutting-plane methods have enabled remarkable successes in integer programming over the last few decades. State-of-the-art solvers integrate a myriad of cutting-plane techniques to speed up the underlying tree-search algorithm used to find optimal solutions. In this paper we prove the first guarantees for learning high-performing cut-selection policies tailored to the instance distribution at hand using samples. We first bound the sample complexity of learning cutting planes from the canonical family of Chvátal-Gomory cuts. Our bounds handle any number of waves of any number of cuts and are fine tuned to the magnitudes of the constraint coefficients. Next, we prove sample complexity bounds for more sophisticated cut selection policies that use a combination of scoring rules to choose from a family of cuts. Finally, beyond the realm of cutting planes for integer programming, we develop a general abstraction of tree search that captures key components such as node selection and variable selection. For this abstraction, we bound the sample complexity of learning a good policy for building the search tree.
We consider lift-and-project methods for combinatorial optimization problems and focus mostly on those lift-and-project methods which generate polyhedral relaxations of the convex hull of integer solutions. We introduce many new variants of Sherali--Adams and Bienstock--Zuckerberg operators. These new operators fill the spectrum of polyhedral lift-and-project operators in a way which makes all of them more transparent, easier to relate to each other, and easier to analyze. We provide new techniques to analyze the worst-case performances as well as relative strengths of these operators in a unified way. In particular, using the new techniques and a result of Mathieu and Sinclair from 2009, we prove that the polyhedral Bienstock--Zuckerberg operator requires at least $\sqrt{2n}- \frac{3}{2}$ iterations to compute the matching polytope of the $(2n+1)$-clique. We further prove that the operator requires approximately $\frac{n}{2}$ iterations to reach the stable set polytope of the $n$-clique, if we start with the fractional stable set polytope. Lastly, we show that some of the worst-case instances for the positive semidefinite Lovász--Schrijver lift-and-project operator are also bad instances for the strongest variants of the Sherali--Adams operator with positive semidefinite strengthenings, and discuss some consequences for integrality gaps of convex relaxations.
Chvátal--Gomory cuts (CG-cuts) and the Bienstock--Zuckerberg hierarchy capture useful linear programs that the standard bounded degree sum-of-squares (SoS) hierarchy fails to capture. In this paper we present a novel polynomial time SoS hierarchy for 0/1 problems with a custom subspace of high degree polynomials (not the standard subspace of low degree polynomials). We show that the new SoS hierarchy recovers the Bienstock--Zuckerberg hierarchy. Our result implies a linear program that reproduces the Bienstock--Zuckerberg hierarchy as a polynomial-sized, efficiently constructible extended formulation that satisfies all constant pitch inequalities. The construction is also very simple, and it is fully defined by giving the supporting polynomials. Moreover, for a class of polytopes (e.g., set cover and packing problems), the resulting SoS hierarchy optimizes in polynomial time over the polytope resulting from any constant rounds of CG-cuts, up to an arbitrarily small error in the solution value. Arguably, this is the first example where different basis functions can be useful in asymmetric situations to obtain a hierarchy of relaxations.
We study the approximability of constraint satisfaction problems (CSPs) by linear programming (LP) relaxations. We show that for every CSP, the approximation obtained by a basic LP relaxation is no weaker than the approximation obtained using relaxations given by $\Omega\left(\frac{\log n}{\log \log n}\right)$ levels of the Sherali-Adams hierarchy on instances of size $n$. It was proved by Chan et al. [FOCS 2013] that any polynomial size LP extended formulation is no stronger than relaxations obtained by super-constant levels of the Sherali-Adams hierarchy. Combining this with our result also implies that any polynomial size LP extended formulation is no stronger than the basic LP. Using our techniques, we also simplify and strengthen the result by Khot et al. [STOC 2014] on (strong) approximation resistance for LPs. They provided a necessary and sufficient condition under which $\Omega(\log \log n)$ levels of the Sherali-Adams hierarchy cannot achieve an approximation better than a random assignment. We simplify their proof and strengthen the bound to $\Omega\left(\frac{\log n}{\log \log n}\right)$ levels.
We discuss a new conceptual framework for the convexification of discrete optimization problems, and a general technique for obtaining approximations to the convex hull of the feasible set. The concepts come from disjunctive programming and the key tool is a description of the convex hull of a union of polyhedra in terms of a higher dimensional polyhedron. Although this description was known for several years, only recently was it shown by Jeroslow and Lowe to yield improved representations of discrete optimization problems. We express the feasible set of a discrete optimization problem as the intersection (conjunction) of unions of polyhedra, and define an operation that takes one such expression into another, equivalent one, with fewer conjuncts. We then introduce a class of relaxations based on replacing each conjunct (union of polyhedra) by its convex hull. The strength of the relaxations increases as the number of conjuncts decreases, and the class of relaxations forms a hierarchy that spans the spectrum between the common linear programming relaxation, and the convex hull of the feasible set itself. Instances where this approach has advantages include critical path problems in disjunctive graphs, network synthesis problems, certain fixed charge network flow problems, etc. We illustrate the approach on the first of these problems, which is a model for machine sequencing.
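The key tool in this abstract is the convex hull of a union of polyhedra. For bounded polyhedra given by their vertex lists, that hull is simply the convex hull of the pooled vertices. The 2-D sketch below illustrates this with Andrew's monotone-chain algorithm; the two square "polytopes" are made-up examples, not instances from any of the papers above.

```python
# Sketch: conv(P1 ∪ P2) for two vertex-described 2-D polytopes, computed
# as the convex hull of the pooled vertex sets (Andrew's monotone chain).

def cross(o, a, b):
    """2-D cross product of vectors o->a and o->b (positive = left turn)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # drop duplicated endpoints

# Two disjoint unit squares; the disjunction "x in P1 or x in P2" is
# relaxed to the single (hexagonal) polytope conv(P1 ∪ P2).
P1 = [(0, 0), (1, 0), (1, 1), (0, 1)]
P2 = [(2, 2), (3, 2), (3, 3), (2, 3)]
print(convex_hull(P1 + P2))
```

In higher dimensions, or for inequality-described polyhedra, Balas's extended formulation plays the same role without enumerating vertices.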
In this paper a reformulation technique is presented that takes a given linear zero-one programming problem, converts it into a zero-one polynomial programming problem, and then relinearizes it into an extended linear program. It is shown that the strength of the resulting reformulation depends on the degree of the terms used to produce the polynomial program at the intermediate step of this method. In fact, as this degree varies from one up to the number of variables in the problem, a hierarchy of sharper representations is obtained with the final relaxation representing the convex hull of feasible solutions. The reformulation technique readily extends to produce a similar hierarchy of linear relaxations for zero-one polynomial programming problems. A characterization of the convex hull in the original variable space is also available through a projection process. The structure of this convex hull characterization (or its other relaxations) can be exploited to generate strong or facetial valid inequalities through appropriate surrogates in a computational framework. The surrogation process can also be used to study various classes of facets for different combinatorial optimization problems. Some examples are given to illustrate this point.
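As a minimal sketch of the level-1 reformulation-linearization step this abstract describes: multiply a constraint by $x_j$ and by $(1-x_j)$, then relinearize with $x_j^2 \to x_j$ and $x_i x_j \to y_{ij}$. The example constraint and the variable naming (`x_i`, `y_i_j`) are illustrative choices, not notation from the paper.

```python
# Level-1 Sherali-Adams (RLT) lift for a single 0/1 constraint a.x >= b.
# Multiplying by x_j and by (1 - x_j) and linearizing yields two valid
# linear constraints in the lifted (x, y) space.

def rlt_level1(a, b, j):
    """Lift (sum_i a[i]*x_i >= b) by x_j and (1 - x_j).

    Returns two constraints, each a ({var: coeff}, rhs) pair meaning
    sum(coeff * var) >= rhs, over variables 'x_i' and 'y_i_j' (i < j).
    """
    def y(i, k):
        i, k = min(i, k), max(i, k)
        return f"y_{i}_{k}"

    # (a.x - b) * x_j >= 0: x_j^2 -> x_j, x_i*x_j -> y_ij, -b -> -b*x_j
    by_xj = {}
    for i, ai in a.items():
        var = f"x_{j}" if i == j else y(i, j)
        by_xj[var] = by_xj.get(var, 0) + ai
    by_xj[f"x_{j}"] = by_xj.get(f"x_{j}", 0) - b
    c1 = (by_xj, 0)

    # (a.x - b) * (1 - x_j) >= 0, i.e. (a.x - b) - (a.x - b)*x_j >= 0
    by_1mxj = {f"x_{i}": ai for i, ai in a.items()}
    for var, coeff in c1[0].items():
        by_1mxj[var] = by_1mxj.get(var, 0) - coeff
    c2 = (by_1mxj, b)
    return c1, c2

# Lift  2*x_1 + 3*x_2 + x_3 >= 2  by multiplying with x_1 and (1 - x_1):
c1, c2 = rlt_level1({1: 2, 2: 3, 3: 1}, 2, 1)
print(c1)  # 3*y_1_2 + y_1_3 >= 0  (the x_1 terms cancel)
print(c2)
```

Applying this step with every variable $j$, over every constraint, gives the level-1 relaxation; higher levels multiply by products of more factors.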
Sherali and Adams (1990), Lovász and Schrijver (1991) and, recently, Lasserre (2001b) have constructed hierarchies of successive linear or semidefinite relaxations of a 0–1 polytope $P \subseteq \mathbb{R}^n$ converging to $P$ in $n$ steps. Lasserre's approach uses results about representations of positive polynomials as sums of squares and the dual theory of moments. We present the three methods in a common elementary framework and show that the Lasserre construction provides the tightest relaxations of $P$. As an application this gives a direct simple proof of the convergence of Lasserre's hierarchy. We describe applications to the stable set polytope and to the cut polytope.
It has been recognized recently that to represent a polyhedron as the projection of a higher-dimensional, but simpler, polyhedron, is a powerful tool in polyhedral combinatorics. A general method is developed to construct higher-dimensional polyhedra (or, in some cases, convex sets) whose projection approximates the convex hull of 0–1 valued solutions of a system of linear inequalities. An important feature of these approximations is that one can optimize any linear objective function over them in polynomial time. In the special case of the vertex packing polytope, a sequence of systems of inequalities is obtained such that the first system already includes clique, odd hole, odd antihole, wheel, and orthogonality constraints. In particular, for perfect (and many other) graphs, this first system gives the vertex packing polytope. For various classes of graphs, including t-perfect graphs, it follows that the stable set polytope is the projection of a polytope with a polynomial number of facets. An extension of the method is also discussed which establishes a connection with certain submodular functions and the Möbius function of a lattice.
Graphs possessing a certain property are often characterized in terms of a type of configuration or subgraph which they cannot possess. For example, a graph is totally disconnected (or, has chromatic number one) if and only if it contains no lines; a graph is a forest (or, has point-arboricity one) if and only if it contains no cycles. Chartrand, Geller, and Hedetniemi [2] defined a graph to have property $P_n$ if it contains no subgraph homeomorphic from the complete graph $K_{n+1}$ or the complete bipartite graph. For the first four natural numbers $n$, the graphs with property $P_n$ are exactly the totally disconnected graphs, forests, outerplanar and planar graphs, respectively. This unification suggested the extension of many results known to hold for one of the above four classes of graphs to one or more of the remaining classes.
Jose L. Walteros and Austin Buchanan
In the classical linear programming problem the behaviour of continuous, nonnegative variables subject to a system of linear inequalities is investigated. One possible generalization of this problem is to relax the continuity condition on the variables. This paper presents a simple numerical algorithm for the solution of programming problems in which some or all of the variables can take only discrete values. The algorithm requires no special techniques beyond those used in ordinary linear programming, and lends itself to automatic computing. Its use is illustrated on two numerical examples.
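The bound-and-branch scheme this abstract describes can be sketched compactly on a 0/1 knapsack instance, where the LP relaxation at each node is solvable greedily: bound each node by the fractional relaxation, branch on the fractional variable, and prune nodes whose bound cannot beat the incumbent. The instance data is a made-up illustration, not from any of the papers above.

```python
# Minimal branch-and-bound sketch for 0/1 knapsack: LP-relaxation bounds,
# branching on the fractional variable, pruning by bound.

def frac_bound(values, weights, cap, fixed):
    """Greedy LP-relaxation bound given a partial 0/1 fixing of variables.

    Returns (bound, j) where j is the fractional variable to branch on,
    or None if the bounding solution is already integral.
    """
    total, remaining, free = 0.0, cap, []
    for i, (v, w) in enumerate(zip(values, weights)):
        if fixed.get(i) == 1:
            total += v
            remaining -= w
        elif i not in fixed:
            free.append((v / w, v, w, i))
    if remaining < 0:
        return float("-inf"), None        # infeasible node
    free.sort(reverse=True)               # best value/weight ratio first
    for ratio, v, w, i in free:
        if w <= remaining:
            total += v
            remaining -= w
        else:                             # item fits only fractionally
            return total + ratio * remaining, i
    return total, None                    # bound is integral: a leaf

def branch_and_bound(values, weights, cap):
    best, stack = 0.0, [{}]               # each node = a partial fixing
    while stack:
        fixed = stack.pop()
        bound, frac_var = frac_bound(values, weights, cap, fixed)
        if bound <= best:
            continue                      # prune: cannot beat incumbent
        if frac_var is None:
            best = bound                  # integral leaf: new incumbent
        else:                             # branch on the fractional variable
            stack.append({**fixed, frac_var: 0})
            stack.append({**fixed, frac_var: 1})
    return best

print(branch_and_bound([60, 100, 120], [10, 20, 30], 50))  # optimum is 220
```

In the paper's framework, the feasible regions at the leaves of such a tree are the "atoms", and the branch-and-bound relaxation T(P) is the convex hull of their union.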