We study the revenue maximization problem of a seller with n heterogeneous items for sale to a single buyer whose valuation function for sets of items is unknown and drawn from some distribution D. We show that if D is a distribution over subadditive valuations with independent items, then the better of pricing each item separately or pricing only the grand bundle achieves a constant-factor approximation to the revenue of the optimal mechanism. This includes buyers who are k-demand, additive up to a matroid constraint, or additive up to constraints of any downwards-closed set system (and whose values for the individual items are sampled independently), as well as buyers who are fractionally subadditive with item multipliers drawn independently. Our proof makes use of the core-tail decomposition framework developed in prior work showing similar results for the significantly simpler class of additive buyers [Li and Yao 2013; Babaioff et al. 2014].
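The "better of selling separately or selling the grand bundle" benchmark is easy to estimate by simulation. The sketch below is illustrative only and not the paper's construction: it assumes a hypothetical `sample_valuation()` routine returning an additive valuation (the simplest special case of the subadditive-with-independent-items setting above) and compares the revenue of the two simple mechanisms on sampled buyers.

```python
import random

def sample_valuation(n):
    # Hypothetical buyer model: additive values, each item value ~ U[0, 1].
    # (Additive buyers are the simplest special case of subadditive-with-
    # independent-items; the paper handles far more general valuations.)
    return [random.random() for _ in range(n)]

def revenue_item_pricing(prices, samples):
    # Each sampled buyer purchases every item priced at or below her value.
    total = 0.0
    for v in samples:
        total += sum(p for p, vi in zip(prices, v) if vi >= p)
    return total / len(samples)

def revenue_grand_bundle(price, samples):
    # Each sampled buyer purchases the grand bundle iff her total value covers the price.
    return price * sum(1 for v in samples if sum(v) >= price) / len(samples)

n, m = 5, 10_000
samples = [sample_valuation(n) for _ in range(m)]

# Naive grid search over uniform prices, just to illustrate the comparison.
srev = max(revenue_item_pricing([p] * n, samples) for p in [i / 20 for i in range(1, 20)])
brev = max(revenue_grand_bundle(b, samples) for b in [i * n / 20 for i in range(1, 20)])
print(f"best separate-price revenue ~ {srev:.3f}, best bundle-price revenue ~ {brev:.3f}")
print(f"simple-mechanism benchmark ~ {max(srev, brev):.3f}")
```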
We prove that finding an ε-approximate Nash equilibrium is PPAD-complete for constant ε and a particularly simple class of games: polymatrix, degree 3 graphical games, in which each player has only two actions.
We prove conditional near-quadratic running time lower bounds for approximate Bichromatic Closest Pair with Euclidean, Manhattan, Hamming, or edit distance. Specifically, unless the Strong Exponential Time Hypothesis (SETH) is false, for every δ > 0 there exists a constant ε > 0 such that computing a (1+ε)-approximation to the Bichromatic Closest Pair requires Ω(n^{2−δ}) time. In particular, this implies a near-linear lower bound on the query time for Approximate Nearest Neighbor search with polynomial preprocessing time.
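For context, the exact quadratic-time baseline that this lower bound says is essentially unavoidable even for approximation is just a double loop over the two sets. A minimal sketch with Hamming distance over {0,1}-vectors (one of the metrics covered above):

```python
def hamming(a, b):
    # Hamming distance between two equal-length 0/1 vectors.
    return sum(x != y for x, y in zip(a, b))

def bichromatic_closest_pair(A, B):
    # Exact O(|A| * |B| * d) brute force: the lower bound above says that even
    # (1+eps)-approximating this quantity requires essentially quadratic time.
    best, best_pair = float("inf"), None
    for a in A:
        for b in B:
            d = hamming(a, b)
            if d < best:
                best, best_pair = d, (a, b)
    return best, best_pair

A = [(0, 1, 1, 0), (1, 1, 0, 0)]
B = [(0, 1, 0, 0), (1, 0, 1, 1)]
print(bichromatic_closest_pair(A, B))  # -> (1, ((0, 1, 1, 0), (0, 1, 0, 0)))
```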
We prove that there exists a constant ε > 0 such that, assuming the Exponential Time Hypothesis for PPAD, computing an ε-approximate Nash equilibrium in a two-player (n × n) game requires quasi-polynomial time, n^{log^{1−o(1)} n}. This matches (up to the o(1) term) the algorithm of Lipton, Markakis, and Mehta [54]. Our proof relies on a variety of techniques from the study of probabilistically checkable proofs (PCP); this is the first time that such ideas are used for a reduction between problems inside PPAD. En route, we also prove new hardness results for computing Nash equilibria in games with many players. In particular, we show that computing an ε-approximate Nash equilibrium in a game with n players requires 2^{Ω(n)} oracle queries to the payoff tensors. This resolves an open problem posed by Hart and Nisan [43], Babichenko [13], and Chen et al. [28]. In fact, our results for n-player games are stronger: they hold with respect to the (ε,δ)-WeakNash relaxation recently introduced by Babichenko et al. [15].
For a constant ϵ, we prove a poly(N) lower bound on the (randomized) communication complexity of ϵ-Nash equilibrium in two-player N × N games. For n-player binary-action games we prove an exp(n) lower bound for the (randomized) communication complexity of (ϵ,ϵ)-weak approximate Nash equilibrium, which is a profile of mixed actions such that at least a (1−ϵ)-fraction of the players are ϵ-best replying.
We study generalizations of the ``Prophet Inequality'' and ``Secretary Problem'', where the algorithm is restricted to an arbitrary downward-closed set system. For {0,1} values, we give O(log n)-competitive algorithms for both problems. This is close to the Ω(log n / log log n) lower bound due to Babaioff, Immorlica, and Kleinberg. For general values, our results translate to O(log(n) log(r))-competitive algorithms, where r is the cardinality of the largest feasible set. This resolves (up to the O(log log(n) log(r)) factor) an open question posed to us by Bobby Kleinberg.
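As background for these generalizations, the classical single-item Prophet Inequality already admits a simple threshold rule: choose τ so that the maximum exceeds τ with probability 1/2, and accept the first value above τ; this recovers at least half of the prophet's expected maximum. A minimal simulation sketch of that single-item baseline (not the set-system algorithms above), assuming i.i.d. uniform values for concreteness:

```python
import random

def run_threshold_rule(values, tau):
    # Accept the first value that clears the threshold; otherwise get nothing.
    for v in values:
        if v >= tau:
            return v
    return 0.0

# Illustrative instance: n independent U[0,1] values (an assumption for the demo).
n, trials = 5, 100_000
# Samuel-Cahn threshold: tau with P(max >= tau) = 1/2, i.e. tau = (1/2)^(1/n) for U[0,1].
tau = 0.5 ** (1.0 / n)

alg = prophet = 0.0
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    alg += run_threshold_rule(xs, tau)
    prophet += max(xs)

print(f"threshold rule ~ {alg / trials:.3f}, prophet ~ {prophet / trials:.3f}")
# The guarantee is that the ratio is at least 1/2 for any product distribution.
```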
We prove that finding an $\epsilon$-approximate Nash equilibrium is $\mathsf{PPAD}$-complete for constant $\epsilon$ and a particularly simple class of games: polymatrix, degree 3 graphical games, in which each player has only two actions. As corollaries, we also prove similar inapproximability results for Bayesian Nash equilibrium in a two-player incomplete information game with a constant number of actions, for relative $\epsilon$-well supported Nash equilibrium in a two-player game, for market equilibrium in a nonmonotone market, for the generalized circuit problem defined by Chen, Deng, and Teng [J. ACM, 56 (2009)], and for approximate competitive equilibrium from equal incomes with indivisible goods.
We present a new distributed model of probabilistically checkable proofs (PCP). A satisfying assignment x ∈ {0, 1}^n to a CNF formula φ is shared between two parties, where Alice knows x_1, ..., x_{n/2}, Bob knows x_{n/2+1}, ..., x_n, and both parties know φ. The goal is to have Alice and Bob jointly write a PCP that x satisfies φ, while exchanging little or no information. Unfortunately, this model as-is does not allow for nontrivial query complexity. Instead, we focus on a non-deterministic variant, where the players are helped by Merlin, a third party who knows all of x. Using our framework, we obtain, for the first time, PCP-like reductions from the Strong Exponential Time Hypothesis (SETH) to approximation problems in P. In particular, under SETH we show that there are no truly subquadratic approximation algorithms for Maximum Inner Product over {0, 1}-vectors, LCS Closest Pair over permutations, Approximate Partial Match, Approximate Regular Expression Matching, and Diameter in Product Metric. All our inapproximability factors are nearly-tight. In particular, for the first three problems we obtain nearly-polynomial factors of 2^{(log n)^{1−o(1)}}; only (1+o(1))-factor lower bounds (under SETH) were known before. As an additional feature of our reduction, we obtain new SETH lower bounds for the exact "monochromatic" Closest Pair problem in the Euclidean, Manhattan, and Hamming metrics.
In this paper we study the adaptivity of submodular maximization. Adaptivity quantifies the number of sequential rounds that an algorithm makes when function evaluations can be executed in parallel. Adaptivity is a fundamental concept that is heavily studied across a variety of areas in computer science, largely due to the need for parallelizing computation. For the canonical problem of maximizing a monotone submodular function under a cardinality constraint, it is well known that a simple greedy algorithm achieves a 1 − 1/e approximation [NWF78] and that this approximation is optimal for polynomial-time algorithms [NW78]. Somewhat surprisingly, despite extensive efforts on submodular optimization for large-scale datasets, until very recently there was no known algorithm that achieves a constant factor approximation for this problem whose adaptivity is sublinear in the size of the ground set n. Recent work by [BS18] describes an algorithm that obtains an approximation arbitrarily close to 1/3 in O(log n) adaptive rounds and shows that no algorithm can obtain a constant factor approximation in õ(log n) adaptive rounds. This approach achieves an exponential speedup in adaptivity (and parallel running time) at the expense of approximation quality. In this paper we describe a novel approach that yields an algorithm whose approximation is arbitrarily close to the optimal 1 − 1/e guarantee in O(log n) adaptive rounds. This algorithm therefore achieves an exponential speedup in parallel running time for submodular maximization at the expense of an arbitrarily small loss in approximation quality. This guarantee is optimal in both approximation and adaptivity, up to lower order terms.
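The sequential baseline referenced above is the classic greedy algorithm: k rounds, each adding the element with the largest marginal gain, which gives the 1 − 1/e guarantee for monotone submodular functions [NWF78]. A minimal sketch of that sequential baseline (not the low-adaptivity algorithm of the paper), using a coverage function as an example of a monotone submodular objective:

```python
def greedy_max_cover(sets, k):
    # Classic greedy for monotone submodular maximization under a cardinality
    # constraint, instantiated for coverage: each of the k rounds picks the set
    # with the largest marginal coverage. Guarantees a (1 - 1/e) approximation.
    covered, chosen = set(), []
    for _ in range(k):
        best_i, best_gain = None, 0
        for i, s in enumerate(sets):
            if i in chosen:
                continue
            gain = len(s - covered)
            if gain > best_gain:
                best_i, best_gain = i, gain
        if best_i is None:  # no remaining set adds coverage
            break
        chosen.append(best_i)
        covered |= sets[best_i]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
print(greedy_max_cover(sets, k=2))  # -> ([2, 0], {1, 2, 3, 4, 5, 6, 7})
```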
We prove that there exists a constant $\epsilon>0$ such that, assuming the Exponential Time Hypothesis for PPAD, computing an $\epsilon$-approximate Nash equilibrium in a two-player ($n \times n$) game requires quasi-polynomial time, $n^{\log^{1-o(1)} n}$. This matches (up to the o(1) term) the algorithm of Lipton, Markakis, and Mehta [LMM03]. Our proof relies on a variety of techniques from the study of probabilistically checkable proofs (PCP); this is the first time that such ideas are used for a reduction between problems inside PPAD. En route, we also prove new hardness results for computing Nash equilibria in games with many players. In particular, we show that computing an $\epsilon$-approximate Nash equilibrium in a game with n players requires $2^{\Omega(n)}$ oracle queries to the payoff tensors. This resolves an open problem posed by Hart and Nisan [HN13], Babichenko [Bab14], and Chen et al. [CCT15]. In fact, our results for n-player games are stronger: they hold with respect to the $(\epsilon,\delta)$-WeakNash relaxation recently introduced by Babichenko et al. [BPR16].
In this paper we consider the following question: can we optimize objective functions from the training data we use to learn them? We formalize this question through a novel framework we call optimization from samples (OPS). In OPS, we are given sampled values of a function drawn from some distribution and the objective is to optimize the function under some constraint.
In this paper, we propose the amphibious influence maximization (AIM) model that combines traditional marketing via content providers and viral marketing to consumers in social networks in a single framework. In AIM, a set of content providers and consumers form a bipartite network while consumers also form their social network, and influence propagates from the content providers to consumers and among consumers in the social network following the independent cascade model. An advertiser needs to select a subset of seed content providers and a subset of seed consumers, such that the influence from the seed providers passing through the seed consumers could reach a large number of consumers in the social network in expectation.
Competitive equilibrium with equal incomes (CEEI) is a well-known fair allocation mechanism [Foley67:Resource, Varian74:Equity, Thomson85:Theories]; however, for indivisible resources a CEEI may not exist. It was shown in Budish [2011] that in the case of indivisible resources there is always an allocation, called A-CEEI, that is approximately fair, approximately truthful, and approximately efficient, for some favorable approximation parameters. This approximation is used in practice to assign business school students to classes. In this paper we show that finding the A-CEEI allocation guaranteed to exist by Budish's theorem is PPAD-complete. We further show that finding an approximate equilibrium with better approximation guarantees is even harder: NP-complete.
We show that the edit distance between two strings of length n can be computed via a randomized algorithm within a factor of f(ε) in n^{1+ε} time as long as the edit distance is at least n^{1−δ} for some δ(ε) > 0.
Longest common subsequence (LCS) is a classic and central problem in combinatorial optimization. While LCS admits a quadratic time solution, recent evidence suggests that solving the problem may be impossible in truly subquadratic time. A special case of LCS wherein each character appears at most once in every string is equivalent to the longest increasing subsequence problem (LIS), which can be solved in quasilinear time. In this work, we present novel algorithms for approximating LCS in truly subquadratic time and LIS in truly sublinear time. Our approximation factors depend on the ratio of the optimal solution size over the input size. We denote this ratio by λ and obtain the following results for LCS and LIS without any prior knowledge of λ. • A truly subquadratic time algorithm for LCS with approximation factor O(λ^3). • A truly sublinear time algorithm for LIS with approximation factor O(λ^3). Triangle inequality was recently used by Boroujeni et al. [1] and Chakraborty et al. [2] to present new approximation algorithms for edit distance. Our techniques for LCS extend the notion of triangle inequality to non-metric settings.
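The quasilinear exact algorithm for LIS alluded to above is the standard patience-sorting / binary-search method. A minimal sketch of that classical O(n log n) exact baseline (not the sublinear-time approximation algorithm of the paper):

```python
import bisect

def lis_length(a):
    # Classical O(n log n) longest increasing subsequence via patience sorting:
    # tails[k] holds the smallest possible tail of an increasing subsequence
    # of length k+1 seen so far.
    tails = []
    for x in a:
        i = bisect.bisect_left(tails, x)  # strictly increasing subsequences
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # -> 4 (e.g. 1, 4, 5, 9)
```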
Competitive equilibrium from equal incomes (CEEI) is a well-known fair allocation mechanism with desirable fairness and efficiency properties; however, with indivisible resources, a CEEI may not exist [Foley 1967; Varian 1974; Thomson and Varian 1985]. It was shown in Budish [2011] that in the case of indivisible resources, there is always an allocation, called A-CEEI, that is approximately fair, approximately truthful, and approximately efficient for some favorable approximation parameters. A heuristic search that attempts to find this approximation is used in practice to assign business school students to courses. In this article, we show that finding the A-CEEI allocation guaranteed to exist by Budish's theorem is PPAD-complete. We further show that finding an approximate equilibrium with better approximation guarantees is even harder: NP-complete.
Locally Adaptive Optimization: Adaptive Seeding for Monotone Submodular Functions. Ashwinkumar Badanidiyuru, Christos Papadimitriou, Aviad Rubinstein, Lior Seeman, and Yaron Singer. Proceedings of the 2016 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 414–429. The Adaptive Seeding problem is an algorithmic challenge motivated by influence maximization in social networks: One seeks to select among certain accessible nodes in a network, and then select, adaptively, among neighbors of those nodes as they become accessible in order to maximize a global objective function. More generally, adaptive seeding is a stochastic optimization framework where the choices in the first stage affect the realizations in the second stage, over which we aim to optimize. Our main result is a (1 − 1/e)^2-approximation for the adaptive seeding problem for any monotone submodular function. While adaptive policies are often approximated via non-adaptive policies, our algorithm is based on a novel method we call locally-adaptive policies. These policies combine a non-adaptive global structure, with local adaptive optimizations. This method enables the (1 − 1/e)^2-approximation for general monotone submodular functions and circumvents some of the impossibilities associated with non-adaptive policies. We also introduce a fundamental problem in submodular optimization that may be of independent interest: given a ground set of elements where every element appears with some small probability, find a set of expected size at most k that has the highest expected value over the realization of the elements. We show a surprising result: there are classes of monotone submodular functions (including coverage) that can be approximated almost optimally as the probability vanishes. For general monotone submodular functions we show via a reduction from Planted-Clique that approximations for this problem are not likely to be obtainable. This optimization problem is an important tool for adaptive seeding via non-adaptive policies, and its hardness motivates the introduction of locally-adaptive policies we use in the main result.
In this paper we study submodular maximization under a matroid constraint in the adaptive complexity model. This model was recently introduced in the context of submodular optimization to quantify the information theoretic complexity of black-box optimization in a parallel computation model. Informally, the adaptivity of an algorithm is the number of sequential rounds it makes when each round can execute polynomially-many function evaluations in parallel. Since submodular optimization is regularly applied on large datasets we seek algorithms with low adaptivity to enable speedups via parallelization. Consequently, a recent line of work has been devoted to designing constant factor approximation algorithms for maximizing submodular functions under various constraints in the adaptive complexity model.
We introduce fast-decodable indexing schemes for edit distance which can be used to speed up edit distance computations to near-linear time if one of the strings is indexed by an indexing string $I$. In particular, for every length $n$ and every $\varepsilon >0$, one can in near linear time construct a string $I \in \Sigma'^n$ with $|\Sigma'| = O_{\varepsilon}(1)$, such that, indexing any string $S \in \Sigma^n$, symbol-by-symbol, with $I$ results in a string $S' \in \Sigma''^n$ where $\Sigma'' = \Sigma \times \Sigma'$ for which edit distance computations are easy, i.e., one can compute a $(1+\varepsilon)$-approximation of the edit distance between $S'$ and any other string in $O(n \text{poly}(\log n))$ time. Our indexing schemes can be used to improve the decoding complexity of state-of-the-art error correcting codes for insertions and deletions. In particular, they lead to near-linear time decoding algorithms for the insertion-deletion codes of [Haeupler, Shahrasbi; STOC '17] and faster decoding algorithms for list-decodable insertion-deletion codes of [Haeupler, Shahrasbi, Sudan; ICALP '18]. Interestingly, the latter codes are a crucial ingredient in the construction of fast-decodable indexing schemes.
We introduce a novel framework of Prophet Inequalities for combinatorial valuation functions. For a (non-monotone) submodular objective function over an arbitrary matroid feasibility constraint, we give an O(1)-competitive algorithm. For a monotone subadditive objective function over an arbitrary downward-closed feasibility constraint, we give an O(log n log^2 r)-competitive algorithm (where r is the cardinality of the largest feasible subset). Inspired by the proof of our subadditive prophet inequality, we also obtain an O(log n · log^2 r)-competitive algorithm for the Secretary Problem with a monotone subadditive objective function subject to an arbitrary downward-closed feasibility constraint. Even for the special case of a cardinality feasibility constraint, our algorithm circumvents a lower bound by Bateni, Hajiaghayi, and Zadimoghaddam [10] in a restricted query model. En route to our submodular prophet inequality, we prove a technical result of independent interest: we show a variant of the Correlation Gap Lemma [14, 1] for non-monotone submodular functions.
Reducing approximate Longest Common Subsequence to approximate Edit Distance. Aviad Rubinstein and Zhao Song. Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 1591–1600. Given a pair of n-character strings, the problems of computing their Longest Common Subsequence and Edit Distance have been extensively studied for decades. For exact algorithms, LCS and Edit Distance (with character insertions and deletions) are equivalent; the state of the art running time is (almost) quadratic in n, and this is tight under plausible fine-grained complexity assumptions. But for approximation algorithms the picture is different: there is a long line of works with improved approximation factors for Edit Distance, but for LCS (with binary strings) only a trivial 1/2-approximation was known. In this work we give a reduction from approximate LCS to approximate Edit Distance, yielding the first efficient (1/2 + ϵ)-approximation algorithm for LCS for some constant ϵ > 0.
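The exact (almost) quadratic-time algorithm referenced above is the textbook dynamic program over prefixes. A minimal sketch of that classical baseline (the paper itself is about subquadratic approximation, which this is not):

```python
def lcs_length(a, b):
    # Textbook O(|a| * |b|) dynamic program: dp[i][j] is the LCS length
    # of the prefixes a[:i] and b[:j].
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

print(lcs_length("ABCBDAB", "BDCABA"))  # -> 4 (e.g. "BCBA")
```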
We study the revenue maximization problem of a seller with n heterogeneous items for sale to a single buyer whose valuation function for sets of items is unknown and drawn from some distribution D. We show that if D is a distribution over subadditive valuations with independent items, then the better of pricing each item separately or pricing only the grand bundle achieves a constant-factor approximation to the revenue of the optimal mechanism. This includes buyers who are k-demand, additive up to a matroid constraint, or additive up to constraints of any downward-closed set system (and whose values for the individual items are sampled independently), as well as buyers who are fractionally subadditive with item multipliers drawn independently. Our proof makes use of the core-tail decomposition framework developed in prior work showing similar results for the significantly simpler class of additive buyers. In the second part of the article, we develop a connection between approximately optimal simple mechanisms and approximate revenue monotonicity with respect to buyers' valuations. Revenue non-monotonicity is the phenomenon that sometimes strictly increasing buyers' values for every set can strictly decrease the revenue of the optimal mechanism. Using our main result, we derive a bound on how bad this degradation can be (and dub such a bound a proof of approximate revenue monotonicity); we further show that better bounds on approximate monotonicity imply a better analysis of our simple mechanisms.
We consider a monopolist seller facing a single buyer with additive valuations over n heterogeneous, independent items. It is known that in this important setting optimal mechanisms may require randomization [12], use menus of infinite size [9], and may be computationally intractable [8]. This has sparked recent interest in finding simple mechanisms that obtain reasonable approximations to the optimal revenue [10, 15, 3]. In this work we attempt to find the optimal simple mechanism.
We introduce a dynamic mechanism design problem in which the designer wants to offer for sale an item to an agent, and another item to the same agent at some point in the future. The agent's joint distribution of valuations for the two items is known, and the agent knows the valuation for the current item (but not for the one in the future). The designer seeks to maximize expected revenue, and the auction must be deterministic, truthful, and ex post individually rational. The optimum mechanism involves a protocol whereby the seller elicits the buyer's current valuation, and based on the bid makes two take-it-or-leave-it offers, one for now and one for the future. We show that finding the optimum deterministic mechanism in this situation — arguably the simplest meaningful dynamic mechanism design problem imaginable — is NP-hard. We also prove several positive results, among them a polynomial linear programming-based algorithm for the optimum randomized auction (even for many bidders and periods), and we show strong separations in revenue between non-adaptive, adaptive, and randomized auctions, even when the valuations in the two periods are uncorrelated. Finally, for the same problem in an environment in which contracts cannot be enforced, and thus perfection of equilibrium is necessary, we show that the optimum randomized mechanism requires multiple rounds of cheap talk-like interactions.
We show that, assuming the (deterministic) Exponential Time Hypothesis, distinguishing between a graph with an induced k-clique and a graph in which all k-subgraphs have density at most 1 − ε requires n^{Ω(log n)} time. Our result essentially matches the quasi-polynomial algorithms of Feige and Seltser [FS97] and Barman [Bar15] for this problem, and is the first one to rule out an additive PTAS for Densest k-Subgraph. We further strengthen this result by showing that our lower bound continues to hold when, in the soundness case, even subgraphs smaller by a near-polynomial factor (k′ = k · 2^{−Ω(log n)}) are assumed to be at most (1 − ε)-dense. Our reduction is inspired by recent applications of the birthday technique [AIM14, BKW15]. Our analysis relies on information theoretical machinery and is similar in spirit to analyzing a parallel repetition of two-prover games in which the provers may choose to answer some challenges multiple times, while completely ignoring other challenges.
We prove an N^{2−o(1)} lower bound on the randomized communication complexity of finding an ε-approximate Nash equilibrium (for constant ε > 0) in a two-player N×N game.
We consider (i) the problem of finding a (possibly mixed) Nash equilibrium in congestion games, and (ii) the problem of finding an (exponential precision) fixed point of the gradient descent dynamics of a smooth function f:[0,1]^n → ℝ. We prove that these problems are equivalent. Our result holds for various explicit descriptions of f, ranging from (almost general) arithmetic circuits, to degree-5 polynomials. By a very recent result of [Fearnley et al., STOC 2021], this implies that these problems are PPAD ∩ PLS-complete. As a corollary, we also obtain the following equivalence of complexity classes:
We present a new distributed model of probabilistically checkable proofs (PCP). A satisfying assignment $x \in \{0,1\}^n$ to a CNF formula $\varphi$ is shared between two parties, where Alice knows $x_1, \dots, x_{n/2}$, Bob knows $x_{n/2+1},\dots,x_n$, and both parties know $\varphi$. The goal is to have Alice and Bob jointly write a PCP that $x$ satisfies $\varphi$, while exchanging little or no information. Unfortunately, this model as-is does not allow for nontrivial query complexity. Instead, we focus on a non-deterministic variant, where the players are helped by Merlin, a third party who knows all of $x$. Using our framework, we obtain, for the first time, PCP-like reductions from the Strong Exponential Time Hypothesis (SETH) to approximation problems in P. In particular, under SETH we show that there are no truly-subquadratic approximation algorithms for Bichromatic Maximum Inner Product over {0,1}-vectors, Bichromatic LCS Closest Pair over permutations, Approximate Regular Expression Matching, and Diameter in Product Metric. All our inapproximability factors are nearly-tight. In particular, for the first two problems we obtain nearly-polynomial factors of $2^{(\log n)^{1-o(1)}}$; only $(1+o(1))$-factor lower bounds (under SETH) were known before.
We study edit distance computation with preprocessing: the preprocessing algorithm acts on each string separately, and then the query algorithm takes as input the two preprocessed strings. This model is inspired by scenarios where we would like to compute edit distance between many pairs in the same pool of strings.
We study revenue maximization by deterministic mechanisms for the simplest case for which Myerson's characterization does not hold: a single seller selling two items, with independently distributed values, to a single additive buyer. We prove that optimal mechanisms are submodular and hence monotone. Furthermore, we show that in the IID case, optimal mechanisms are symmetric. Our characterizations are surprisingly non-trivial, and we show that they fail to extend in several natural ways, e.g. for correlated distributions or more than two items. In particular, this shows that the optimality of symmetric mechanisms does not follow from the symmetry of the IID distribution.
A sequence of recent studies show that even in the simple setting of a single seller and a single buyer with additive, independent valuations over m items, the revenue-maximizing mechanism is prohibitively complex. This problem has been addressed using two main approaches: Approximation: the best of two simple mechanisms (sell each item separately, or sell all the items as one bundle) gives 1/6 of the optimal revenue [1]. Enhanced competition: running the simple VCG mechanism with additional m buyers extracts at least the optimal revenue in the original market [17]. Both approaches, however, suffer from severe drawbacks: On the one hand, losing 83% of the revenue is hardly acceptable in any application. On the other hand, attracting a linear number of new buyers may be prohibitive. We show that by combining the two approaches one can achieve the best of both worlds. Specifically, for any constant ε one can obtain a (1-ε) fraction of the optimal revenue by running simple mechanisms --- either selling each item separately or selling all items as a single bundle --- with substantially fewer additional buyers: logarithmic, constant, or even none in some cases.
It is well known that Sparse PCA (Sparse Principal Component Analysis) is NP-hard to solve exactly on worst-case instances. What is the complexity of solving Sparse PCA approximately? Our contributions include: 1. a simple and efficient algorithm that achieves an n^{1/3}-approximation; 2. NP-hardness of approximation to within (1 − ε), for some small ε > 0; 3. SSE-hardness of approximation to within any constant factor; and 4. an exp exp(√(log log n)) ("quasi-quasi-polynomial") gap for the standard semidefinite program.
We study generalizations of the Prophet Inequality and Secretary Problem, where the algorithm is restricted to an arbitrary downward-closed set system. For {0,1}-values, we give O(log n)-competitive algorithms for both problems. This is close to the \Omega(log n / loglog n) lower bound due to Babaioff, Immorlica, and Kleinberg. For general values, our results translate to O(log n log r)-competitive algorithms, where r is the cardinality of the largest feasible set. This resolves (up to the O(log r loglog n) factors) an open question posed to us by Bobby Kleinberg.
We prove that, assuming the exponential time hypothesis, finding an \epsilon-approximately optimal symmetric signaling scheme in a two-player zero-sum game requires quasi-polynomial time. This is tight by [CCDEHT'15] and resolves an open question of [Dughmi'14]. We also prove that finding a multiplicative approximation is NP-hard.
In this paper we study the fine-grained complexity of finding exact and approximate solutions to problems in P. Our main contribution is showing reductions from an exact to an approximate solution for a host of such problems. As one (notable) example, we show that the Closest-LCS-Pair problem (given two sets of strings A and B, compute exactly the maximum LCS(a, b) with (a, b) ∈ A × B) is equivalent to its approximation version (under near-linear time reductions, and with a constant approximation factor). More generally, we identify a class of problems, which we call BP-Pair-Class, comprising both exact and approximate solutions, and show that they are all equivalent under near-linear time reductions. Exploring this class and its properties, we also show: • Under the NC-SETH assumption (a significantly more relaxed assumption than SETH), solving any of the problems in this class requires essentially quadratic time. • Modest improvements on the running time of known algorithms (shaving log factors) would imply that NEXP is not in non-uniform NC^1. • Finally, we leverage our techniques to show new barriers for deterministic approximation algorithms for LCS. A very important consequence of our results is that they continue to hold in the data structure setting. In particular, it shows that a data structure for approximate Nearest Neighbor Search for LCS (NNS_LCS) implies a data structure for exact NNS_LCS and a data structure for answering regular expression queries with essentially the same complexity. At the heart of these new results is a deep connection between interactive proof systems for bounded-space computations and the fine-grained complexity of exact and approximate solutions to problems in P. In particular, our results build on the proof techniques from the classical IP = PSPACE result.
We show that, assuming the (deterministic) Exponential Time Hypothesis, distinguishing between a graph with an induced k-clique and a graph in which all k-subgraphs have density at most 1 − ε requires n^{Ω(log n)} time. Our result essentially matches the quasi-polynomial algorithms of Feige and Seltser [FS97] and Barman [Bar15] for this problem, and is the first one to rule out an additive PTAS for Densest k-Subgraph. We further strengthen this result by showing that our lower bound continues to hold when, in the soundness case, even subgraphs smaller by a near-polynomial factor are assumed to be at most (1 − ε)-dense. Our reduction is inspired by recent applications of the "birthday repetition" technique [AIM14, BKW15]. Our analysis relies on information theoretical machinery and is similar in spirit to analyzing a parallel repetition of two-prover games in which the provers may choose to answer some challenges multiple times, while completely ignoring other challenges.
In our recent paper [Rubinstein 2016] we rule out a PTAS for the 2-Player Nash Equilibrium Problem. More precisely, we prove that there exists a constant ϵ > 0 such that, assuming the Exponential Time Hypothesis for PPAD, computing an ϵ-approximate Nash equilibrium in a two-player n × n game requires time n^{log^{1−o(1)} n}. This matches (up to the o(1) term) the algorithm of Lipton, Markakis, and Mehta [Lipton et al. 2003].
We prove that finding an epsilon-Nash equilibrium in a succinctly representable game with many players is PPAD-hard for constant epsilon. Our proof uses succinct games, i.e. games whose payoff function is represented by a circuit. Our techniques build on a recent query complexity lower bound by Babichenko.
We propose a general method for converting online algorithms to local computation algorithms by selecting a random permutation of the input, and simulating running the online algorithm. We bound the number of steps of the algorithm using a query tree, which models the dependencies between queries. We improve previous analyses of query trees on graphs of bounded degree, and extend the analysis to the cases where the degrees are distributed binomially, and to a special case of bipartite graphs. Using this method, we give a local computation algorithm for maximal matching in graphs of bounded degree, which runs in time and space O(log^3 n). We also show how to convert a large family of load balancing algorithms (related to balls and bins problems) to local computation algorithms. This gives several local load balancing algorithms which achieve the same approximation ratios as the online algorithms, but run in O(log n) time and space. Finally, we modify existing local computation algorithms for hypergraph 2-coloring and k-CNF and use our improved analysis to obtain better time and space bounds, of O(log^4 n), removing the dependency on the maximal degree of the graph from the exponent.
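The random-permutation simulation idea can be illustrated on greedy maximal matching: assign each edge a random rank, and to answer the local query "is edge e in the matching?", recursively resolve only the lower-ranked edges that share an endpoint with e. A minimal sketch of this simulation (assuming a small hypothetical edge-list graph; illustrative only, without the query-tree analysis that bounds its cost):

```python
import random
from functools import lru_cache

# Hypothetical small input graph, given as an edge list (assumption for the demo).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
rank = {e: random.random() for e in edges}  # random arrival order of the online greedy

touching = {e: [f for f in edges if f != e and set(f) & set(e)] for e in edges}

@lru_cache(maxsize=None)
def in_matching(e):
    # Edge e joins the greedy matching iff no conflicting edge with a smaller
    # rank is itself in the matching -- exactly what the online greedy would do
    # if edges arrived in order of increasing rank.
    return all(not in_matching(f) for f in touching[e] if rank[f] < rank[e])

# A local query inspects only the relevant part of the permutation:
print([e for e in edges if in_matching(e)])  # a maximal matching of the graph
```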
We consider Boolean functions f:{-1,1}^n -> {-1,1} that are close to a sum of independent functions on mutually exclusive subsets of the variables. We prove that any such function is close to just a single function on a single subset. We also consider Boolean functions f:R^n -> {-1,1} that are close, with respect to any product distribution over R^n, to a sum of their variables. We prove that any such function is close to one of the variables. Both our results are independent of the number of variables, but depend on the variance of f. I.e., if f is ε*Var(f)-close to a sum of independent functions or random variables, then it is O(ε)-close to one of the independent functions or random variables, respectively. We prove that this dependence on Var(f) is tight. Our results are a generalization of the Friedgut-Kalai-Naor Theorem [FKN'02], which holds for functions f:{-1,1}^n -> {-1,1} that are close to a linear combination of uniformly distributed Boolean variables.
We give query-efficient algorithms for the global min-cut and the s-t cut problem in unweighted, undirected graphs. Our oracle model is inspired by the submodular function minimization problem: on query $S \subset V$, the oracle returns the size of the cut between $S$ and $V \setminus S$. We provide algorithms computing an exact minimum $s$-$t$ cut in $G$ with $\tilde{O}(n^{5/3})$ queries, and computing an exact global minimum cut of $G$ with only $\tilde{O}(n)$ queries (while learning the graph requires $\tilde{\Theta}(n^2)$ queries).
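A handy primitive in this oracle model, illustrated in the sketch below, is that three cut queries already reveal the number of edges crossing between any two disjoint vertex sets, via |E(S,T)| = (cut(S) + cut(T) − cut(S ∪ T)) / 2. This is only a small illustration of what cut queries can do, not the algorithms of the paper:

```python
# Hypothetical example graph (assumption for the demo): a 4-cycle plus a chord.
V = {0, 1, 2, 3}
E = {(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)}

def cut_query(S):
    # The oracle of the model above: number of edges with exactly one endpoint in S.
    return sum(1 for u, v in E if (u in S) != (v in S))

def edges_between(S, T):
    # For disjoint S and T: cut(S) + cut(T) counts each S-T edge twice and each
    # edge to the rest of the graph once, while cut(S | T) counts only the latter.
    assert not (S & T)
    return (cut_query(S) + cut_query(T) - cut_query(S | T)) // 2

print(edges_between({0}, {1, 3}))  # -> 2 (edges 0-1 and 0-3)
print(cut_query({1}))              # -> 3 (a vertex degree via a single query)
```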
We consider the problem of implementing a fixed social choice function between multiple players (which takes as input a type t_i from each player i and outputs an outcome f(t_1, …, t_n)), in which each player must be incentivized to follow the protocol. In particular, we study the communication requirements of a protocol which: (a) implements f, (b) implements f and computes payments that make it ex-post incentive compatible (EPIC) to follow the protocol, and (c) implements f and computes payments in a way that makes it dominant-strategy incentive compatible (DSIC) to follow the protocol.
Sublinear time algorithms for approximating maximum matching size have long been studied. Much of the progress over the last two decades on this problem has been on the algorithmic side. For instance, an algorithm of [Behnezhad; FOCS'21] obtains a 1/2-approximation in O(n) time for n-vertex graphs. A more recent algorithm by [Behnezhad, Roghani, Rubinstein, and Saberi; SODA'23] obtains a slightly-better-than-1/2 approximation in O(n^{1+ε}) time (for arbitrarily small constant ε > 0). On the lower bound side, [Parnas and Ron; TCS'07] showed 15 years ago that obtaining any constant approximation of maximum matching size requires Ω(n) time. Proving any super-linear in n lower bound, even for (1−ε)-approximations, has remained elusive since then.
An Exponentially Faster Algorithm for Submodular Maximization Under a Matroid Constraint. This paper studies the problem of submodular maximization under a matroid constraint. It is known since the 1970s that the greedy algorithm obtains a constant-factor approximation guarantee for this problem. Twelve years ago, a breakthrough result by Vondrák obtained the optimal 1 − 1/e approximation. Previous algorithms for this fundamental problem all have linear parallel runtime, which was considered impossible to accelerate until recently. The main contribution of this paper is a novel algorithm that provides an exponential speedup in the parallel runtime of submodular maximization under a matroid constraint, without loss in the approximation guarantee.
We prove that finding an $\epsilon$-approximate Nash equilibrium is PPAD-complete for constant $\epsilon$ and a particularly simple class of games: polymatrix, degree 3 graphical games, in which each player has only two actions. As corollaries, we also prove similar inapproximability results for Bayesian Nash equilibrium in a two-player incomplete information game with a constant number of actions, for relative $\epsilon$-Well Supported Nash Equilibrium in a two-player game, for market equilibrium in a non-monotone market, for the generalized circuit problem defined by Chen, Deng, and Teng [CDT'09], and for approximate competitive equilibrium from equal incomes with indivisible goods.
We formulate a new hardness assumption, the Strongish Planted Clique Hypothesis (SPCH), which postulates that any algorithm for planted clique must run in time n^{Ω(log n)} (so that the state-of-the-art running time of n^{O(log n)} is optimal up to a constant in the exponent). We provide two sets of applications of the new hypothesis. First, we show that SPCH implies (nearly) tight inapproximability results for the following well-studied problems in terms of the parameter k: Densest k-Subgraph, Smallest k-Edge Subgraph, Densest k-Subhypergraph, Steiner k-Forest, and Directed Steiner Network with k terminal pairs. For example, we show, under SPCH, that no polynomial time algorithm achieves o(k)-approximation for Densest k-Subgraph. This inapproximability ratio improves upon the previous best k^{o(1)} factor from (Chalermsook et al., FOCS 2017). Furthermore, our lower bounds hold even against fixed-parameter tractable algorithms with parameter k. Our second application focuses on the complexity of graph pattern detection. For both induced and non-induced graph pattern detection, we prove hardness results under SPCH, improving the running time lower bounds obtained by (Dalirrooyfard et al., STOC 2019) under the Exponential Time Hypothesis.
We show that, if truth assignments on n variables reproduce through recombination so that satisfaction of a particular Boolean function confers a small evolutionary advantage, then a polynomially large population over polynomially many generations (polynomial in n and the inverse of the initial satisfaction probability) will end up almost certainly consisting exclusively of satisfying truth assignments. We argue that this theorem sheds light on the problem of the evolution of complex adaptations.
We investigate prophet inequalities with competitive ratios approaching $1$, seeking to generalize $k$-uniform matroids. We first show that large girth does not suffice: for all $k$, there exists a matroid of girth $\geq k$ and a prophet inequality instance on that matroid whose optimal competitive ratio is $\frac{1}{2}$. Next, we show $k$-fold matroid unions do suffice: we provide a prophet inequality with competitive ratio $1-O(\sqrt{\frac{\log k}{k}})$ for any $k$-fold matroid union. Our prophet inequality follows from an online contention resolution scheme. The key technical ingredient in our online contention resolution scheme is a novel bicriterion concentration inequality for arbitrary monotone $1$-Lipschitz functions over independent items which may be of independent interest. Applied to our particular setting, our bicriterion concentration inequality yields "Chernoff-strength" concentration for a $1$-Lipschitz function that is not (approximately) self-bounding.
Given a so called ''Sperner coloring'' of a triangulation of the $D$-dimensional simplex, Sperner's lemma guarantees the existence of a rainbow simplex, i.e. a simplex colored by all $D+1$ colors. However, finding a rainbow simplex was the first problem to be proven $\mathsf{PPAD}$-complete in Papadimitriou's classical paper introducing the class $\mathsf{PPAD}$ (1994). In this paper, we prove that the problem does not become easier if we relax ''all $D+1$ colors'' to allow some fraction of missing colors: in fact, for any constant $D$, finding even a simplex with just three colors remains $\mathsf{PPAD}$-complete! Our result has an interesting application for the envy-free cake cutting from fair division. It is known that if agents value pieces of cake using general continuous functions satisfying a simple boundary condition (''a non-empty piece is better than an empty piece of cake''), there exists an envy-free allocation with connected pieces. We show that for any constant number of agents it is $\mathsf{PPAD}$-complete to find an allocation -- even using any constant number of possibly disconnected pieces -- that makes just three agents envy-free. Our results extend to super-constant dimension, number of agents, and number of pieces, as long as they are asymptotically bounded by any $\log^{1-\Omega(1)}(\epsilon)$, where $\epsilon$ is the precision parameter (side length for Sperner and approximate envy-free for cake cutting).
We show how to use parallelization to speed up sampling from an arbitrary distribution $\mu$ on a product space $[q]^n$, given oracle access to counting queries: $\mathbb{P}_{X\sim \mu}[X_S=\sigma_S]$ for any $S\subseteq [n]$ and $\sigma_S \in [q]^S$. Our algorithm takes $O(n^{2/3}\cdot \operatorname{polylog}(n,q))$ parallel time, to the best of our knowledge, the first sublinear in $n$ runtime for arbitrary distributions. Our results have implications for sampling in autoregressive models. Our algorithm directly works with an equivalent oracle that answers conditional marginal queries $\mathbb{P}_{X\sim \mu}[X_i=\sigma_i\;\vert\; X_S=\sigma_S]$, whose role is played by a trained neural network in autoregressive models. This suggests a roughly $n^{1/3}$-factor speedup is possible for sampling in any-order autoregressive models. We complement our positive result by showing a lower bound of $\widetilde{\Omega}(n^{1/3})$ for the runtime of any parallel sampling algorithm making at most $\operatorname{poly}(n)$ queries to the counting oracle, even for $q=2$.
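The oracle interface is easiest to see in the sequential baseline that the paper speeds up: sample coordinates one at a time from the conditional marginals, which takes n sequential oracle rounds. A minimal sketch of that baseline (with a toy product-of-biased-coins oracle standing in for the trained model; illustrative only, not the parallel algorithm):

```python
import random

# Toy distribution (assumption for the demo): independent biased coins, so the
# conditional marginal ignores the conditioning set. In an autoregressive model
# this oracle would be a trained network evaluated on the partial assignment.
bias = [0.2, 0.7, 0.5, 0.9]

def conditional_marginal(i, assigned):
    # P[X_i = 1 | X_S = assigned values on S]; trivial for a product measure.
    return bias[i]

def sequential_sample(n):
    # Baseline any-order autoregressive sampling: one oracle round per coordinate,
    # i.e. n sequential rounds -- the quantity the parallel algorithm improves to ~n^{2/3}.
    assigned = {}
    for i in random.sample(range(n), n):  # any order works
        p = conditional_marginal(i, assigned)
        assigned[i] = 1 if random.random() < p else 0
    return [assigned[i] for i in range(n)]

print(sequential_sample(len(bias)))  # e.g. [0, 1, 1, 1]
```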
We study algorithms for estimating the size of maximum matching. This problem has been subject to extensive research. For $n$-vertex graphs, Bhattacharya, Kiss, and Saranurak [FOCS'23] (BKS) showed that an estimate that is within $\varepsilon n$ of the optimal solution can be achieved in $n^{2-\Omega_\varepsilon(1)}$ time, where $n$ is the number of vertices. While this is subquadratic in $n$ for any fixed $\varepsilon > 0$, it gets closer and closer to the trivial $\Theta(n^2)$ time algorithm that reads the entire input as $\varepsilon$ is made smaller and smaller. In this work, we close this gap and show that the algorithm of BKS is close to optimal. In particular, we prove that for any fixed $\delta > 0$, there is another fixed $\varepsilon = \varepsilon(\delta) > 0$ such that estimating the size of maximum matching within an additive error of $\varepsilon n$ requires $\Omega(n^{2-\delta})$ time in the adjacency list model.
We study algorithms for estimating the size of maximum matching. This problem has been subject to extensive research. For n-vertex graphs, Bhattacharya, Kiss, and Saranurak [FOCS'23] (BKS) showed that an estimate that is within εn of the optimal solution can be achieved in n^{2−Ω_ε(1)} time, where n is the number of vertices. While this is subquadratic in n for any fixed ε > 0, it gets closer and closer to the trivial Θ(n^2) time algorithm that reads the entire input as ε is made smaller and smaller. In this work, we close this gap and show that the algorithm of BKS is close to optimal. In particular, we prove that for any fixed δ > 0, there is another fixed ε = ε(δ) > 0 such that estimating the size of maximum matching within an additive error of εn requires Ω(n^{2−δ}) time in the adjacency list model.
We present a constant-factor approximation algorithm for the Nash Social Welfare (NSW) maximization problem with subadditive valuations accessible via demand queries. More generally, we propose a framework for NSW optimization … We present a constant-factor approximation algorithm for the Nash Social Welfare (NSW) maximization problem with subadditive valuations accessible via demand queries. More generally, we propose a framework for NSW optimization that assumes two subroutines: one that (1) solves a configuration-type LP under certain additional conditions, and one that (2) rounds the fractional solution with respect to utilitarian social welfare. In particular, a constant-factor approximation for submodular valuations with value queries can also be derived from our framework.
We study the iteration complexity of decentralized learning of approximate correlated equilibria in incomplete information games. On the negative side, we prove that in $\mathit{extensive}$-$\mathit{form}$ $\mathit{games}$, assuming $\mathsf{PPAD} \not\subset \mathsf{TIME}(n^{\mathsf{polylog}(n)})$, … We study the iteration complexity of decentralized learning of approximate correlated equilibria in incomplete information games. On the negative side, we prove that in $\mathit{extensive}$-$\mathit{form}$ $\mathit{games}$, assuming $\mathsf{PPAD} \not\subset \mathsf{TIME}(n^{\mathsf{polylog}(n)})$, any polynomial-time learning algorithm must take at least $2^{\log_2^{1-o(1)}(|\mathcal{I}|)}$ iterations to converge to the set of $\epsilon$-approximate correlated equilibria, where $|\mathcal{I}|$ is the number of nodes in the game and $\epsilon > 0$ is an absolute constant. This nearly matches, up to the $o(1)$ term, the algorithms of [PR'24, DDFG'24] for learning $\epsilon$-approximate correlated equilibrium, and resolves an open question of Anagnostides, Kalavasis, Sandholm, and Zampetakis [AKSZ'24]. Our lower bound holds even for the easier solution concept of $\epsilon$-approximate $\mathit{coarse}$ correlated equilibrium. On the positive side, we give uncoupled dynamics that reach $\epsilon$-approximate correlated equilibria of a $\mathit{Bayesian}$ $\mathit{game}$ in polylogarithmic iterations, without any dependence on the number of types. This demonstrates a separation between Bayesian games and extensive-form games.
Algorithmic contract design is a new frontier in the intersection of economics and computation, with combinatorial contracts being a core problem in this domain. A central model within combinatorial contracts … Algorithmic contract design is a new frontier in the intersection of economics and computation, with combinatorial contracts being a core problem in this domain. A central model within combinatorial contracts explores a setting where a principal delegates the execution of a task, which can either succeed or fail, to an agent. The agent can choose any subset among a given set of costly actions, where every subset is associated with a success probability. The principal incentivizes the agent through a contract that specifies the payment upon success of the task. A natural setting of interest is one with submodular success probabilities. It is known that finding the optimal contract for the principal is $\mathsf{NP}$-hard, but the hardness result is derived from the hardness of demand queries. A major open problem is whether the hardness arises solely from the hardness of demand queries, or if the complexity lies within the optimal contract problem itself. In other words: does the problem retain its hardness, even when provided access to a demand oracle? We resolve this question in the affirmative, showing that any algorithm that computes the optimal contract for submodular success probabilities requires an exponential number of demand queries, thus settling the query complexity problem.
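As a toy illustration of this model (not the lower-bound construction itself), the sketch below brute-forces the principal's problem on a tiny instance: for a contract paying an $\alpha$ share on success, the agent best-responds by choosing the action set maximizing $\alpha f(S) - c(S)$, and the principal grid-searches over $\alpha$. The instance, the grid search, and all function names are illustrative assumptions.

```python
from itertools import chain, combinations

def best_response(actions, f, c, alpha):
    """Agent's best response: the subset S maximizing alpha * f(S) - c(S)."""
    subsets = chain.from_iterable(combinations(actions, r) for r in range(len(actions) + 1))
    return max(subsets, key=lambda S: alpha * f(frozenset(S)) - c(frozenset(S)))

def principal_utility(actions, f, c, alpha):
    """Principal keeps a (1 - alpha) share of the success probability induced by the agent."""
    S = frozenset(best_response(actions, f, c, alpha))
    return (1 - alpha) * f(S)

if __name__ == "__main__":
    actions = [0, 1]
    # Submodular success probabilities and additive costs (toy numbers).
    f = lambda S: {frozenset(): 0.0, frozenset({0}): 0.5, frozenset({1}): 0.6,
                   frozenset({0, 1}): 0.8}[S]
    c = lambda S: 0.1 * len(S)
    best = max(principal_utility(actions, f, c, a / 100) for a in range(101))
    print(round(best, 3))
```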
We study repeated first-price auctions and general repeated Bayesian games between two players, where one player, the learner, employs a no-regret learning algorithm, and the other player, the optimizer, knowing … We study repeated first-price auctions and general repeated Bayesian games between two players, where one player, the learner, employs a no-regret learning algorithm, and the other player, the optimizer, knowing the learner's algorithm, strategizes to maximize its own utility. For a commonly used class of no-regret learning algorithms called mean-based algorithms, we show that (i) in standard (i.e., full-information) first-price auctions, the optimizer cannot get more than the Stackelberg utility -- a standard benchmark in the literature, but (ii) in Bayesian first-price auctions, there are instances where the optimizer can achieve much higher than the Stackelberg utility. On the other hand, Mansour et al. (2022) showed that a more sophisticated class of algorithms called no-polytope-swap-regret algorithms are sufficient to cap the optimizer's utility at the Stackelberg utility in any repeated Bayesian game (including Bayesian first-price auctions), and they pose the open question whether no-polytope-swap-regret algorithms are necessary to cap the optimizer's utility. For general Bayesian games, under a reasonable and necessary condition, we prove that no-polytope-swap-regret algorithms are indeed necessary to cap the optimizer's utility and thus answer their open question. For Bayesian first-price auctions, we give a simple improvement of the standard algorithm for minimizing the polytope swap regret by exploiting the structure of Bayesian first-price auctions.
We study sublinear time algorithms for the traveling salesman problem (TSP). First, we focus on the closely related maximum path cover problem, which asks for a collection of vertex disjoint … We study sublinear time algorithms for the traveling salesman problem (TSP). First, we focus on the closely related maximum path cover problem, which asks for a collection of vertex disjoint paths that include the maximum number of edges. We show that for any fixed $\epsilon > 0$, there is an algorithm that $(1/2 - \epsilon)$-approximates the maximum path cover size of an $n$-vertex graph in $\widetilde{O}(n)$ time. This improves upon a $(3/8-\epsilon)$-approximate $\widetilde{O}(n \sqrt{n})$-time algorithm of Chen, Kannan, and Khanna [ICALP'20]. Equipped with our path cover algorithm, we give $\widetilde{O}(n)$ time algorithms that estimate the cost of graphic TSP and $(1, 2)$-TSP up to factors of $1.83$ and $(1.5+\epsilon)$, respectively. Our algorithm for graphic TSP improves over a $1.92$-approximate $\widetilde{O}(n)$ time algorithm due to [CHK ICALP'20, Behnezhad FOCS'21]. Our algorithm for $(1,2)$-TSP improves over a folklore $(1.75 + \epsilon)$-approximate $\widetilde{O}(n)$-time algorithm, as well as a $(1.625+\epsilon)$-approximate $\widetilde{O}(n\sqrt{n})$-time algorithm of [CHK ICALP'20]. Our analysis of the running time uses connections to parallel algorithms and is information-theoretically optimal up to poly log $n$ factors. Additionally, we show that our approximation guarantees for path cover and $(1,2)$-TSP hit a natural barrier: We show better approximations require better sublinear time algorithms for the well-studied maximum matching problem.
In the experts problem, on each of $T$ days, an agent needs to follow the advice of one of $n$ ``experts''. After each day, the loss associated with each expert's … In the experts problem, on each of $T$ days, an agent needs to follow the advice of one of $n$ ``experts''. After each day, the loss associated with each expert's advice is revealed. A fundamental result in learning theory says that the agent can achieve vanishing regret, i.e. their cumulative loss is within $o(T)$ of the cumulative loss of the best-in-hindsight expert. Can the agent perform well without sufficient space to remember all the experts? We extend a nascent line of research on this question in two directions: $\bullet$ We give a new algorithm against the oblivious adversary, improving over the memory-regret tradeoff obtained by [PZ23], and nearly matching the lower bound of [SWXZ22]. $\bullet$ We also consider an adaptive adversary who can observe past experts chosen by the agent. In this setting we give both a new algorithm and a novel lower bound, proving that roughly $\sqrt{n}$ memory is both necessary and sufficient for obtaining $o(T)$ regret.
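For context, here is a minimal sketch of the classic multiplicative-weights strategy, which achieves $O(\sqrt{T \log n})$ regret but keeps one weight per expert, i.e. it uses exactly the $\Theta(n)$ memory that this line of work asks whether we can avoid. The array-based interface is an illustrative assumption.

```python
import math, random

def multiplicative_weights(losses):
    """Classic (memory-hungry) multiplicative-weights baseline for the experts
    problem. losses is a T x n list of per-day losses in [0, 1]; returns the
    learner's total loss. Regret against the best expert is O(sqrt(T log n))."""
    T, n = len(losses), len(losses[0])
    eta = math.sqrt(math.log(n) / T)          # standard learning rate
    weights = [1.0] * n                       # Theta(n) memory: one weight per expert
    total_loss = 0.0
    for day in losses:
        z = sum(weights)
        probs = [w / z for w in weights]
        expert = random.choices(range(n), weights=probs, k=1)[0]
        total_loss += day[expert]
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, day)]
    return total_loss

# Example: 2 experts, one consistently better; total loss is roughly the best
# expert's loss (10) plus an O(sqrt(T log n)) regret term.
print(multiplicative_weights([[0.9, 0.1]] * 100))
```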
Approximate Competitive Equilibrium from Equal Incomes (A-CEEI) is an equilibrium-based solution concept for fair division of discrete items to agents with combinatorial demands. In theory, it is known that in … Approximate Competitive Equilibrium from Equal Incomes (A-CEEI) is an equilibrium-based solution concept for fair division of discrete items to agents with combinatorial demands. In theory, it is known that in asymptotically large markets: 1. For incentives, the A-CEEI mechanism is Envy-Free-but-for-Tie-Breaking (EF-TB), which implies that it is Strategyproof-in-the-Large (SP-L). 2. From a computational perspective, computing the equilibrium solution is unfortunately a computationally intractable problem (in the worst-case, assuming $\textsf{PPAD}\ne \textsf{FP}$). We develop a new heuristic algorithm that outperforms the previous state-of-the-art by multiple orders of magnitude. This new, faster algorithm lets us perform experiments on real-world inputs for the first time. We discover that with real-world preferences, even in a realistic implementation that satisfies the EF-TB and SP-L properties, agents may have surprisingly simple and plausible deviations from truthful reporting of preferences. To this end, we propose a novel strengthening of EF-TB, which dramatically reduces the potential for strategic deviations from truthful reporting in our experiments. A (variant of) our algorithm is now in production: on real course allocation problems it is much faster, has zero clearing error, and has stronger incentive properties than the prior state-of-the-art implementation.
We present a constant-factor approximation algorithm for the Nash social welfare maximization problem with subadditive valuations accessible via demand queries. More generally, we propose a template for NSW optimization by … We present a constant-factor approximation algorithm for the Nash social welfare maximization problem with subadditive valuations accessible via demand queries. More generally, we propose a template for NSW optimization by solving a configuration-type LP and using a rounding procedure for (utilitarian) social welfare as a blackbox, which could be applicable to other variants of the problem.
We design an additive approximation scheme for estimating the cost of the min-weight bipartite matching problem: given a bipartite graph with non-negative edge costs and $\varepsilon > 0$, our algorithm … We design an additive approximation scheme for estimating the cost of the min-weight bipartite matching problem: given a bipartite graph with non-negative edge costs and $\varepsilon > 0$, our algorithm estimates the cost of matching all but an $O(\varepsilon)$-fraction of the vertices in truly subquadratic time $O(n^{2-\delta(\varepsilon)})$. Our algorithm has a natural interpretation for computing the Earth Mover's Distance (EMD), up to an $\varepsilon$-additive approximation. Notably, we make no assumptions about the underlying metric (more generally, the costs do not have to satisfy triangle inequality). Note that compared to the size of the instance (an arbitrary $n \times n$ cost matrix), our algorithm runs in {\em sublinear} time. Our algorithm can approximate a slightly more general problem: max-cardinality bipartite matching with a knapsack constraint, where the goal is to maximize the number of vertices that can be matched up to a total cost $B$.
We give a simple and computationally efficient algorithm that, for any constant $\varepsilon>0$, obtains $\varepsilon T$-swap regret within only $T = \mathsf{polylog}(n)$ rounds; this is an exponential improvement compared to … We give a simple and computationally efficient algorithm that, for any constant $\varepsilon>0$, obtains $\varepsilon T$-swap regret within only $T = \mathsf{polylog}(n)$ rounds; this is an exponential improvement compared to the super-linear number of rounds required by the state-of-the-art algorithm, and resolves the main open problem of [Blum and Mansour 2007]. Our algorithm has an exponential dependence on $\varepsilon$, but we prove a new, matching lower bound. Our algorithm for swap regret implies faster convergence to $\varepsilon$-Correlated Equilibrium ($\varepsilon$-CE) in several regimes: For normal form two-player games with $n$ actions, it implies the first uncoupled dynamics that converges to the set of $\varepsilon$-CE in polylogarithmic rounds; a $\mathsf{polylog}(n)$-bit communication protocol for $\varepsilon$-CE in two-player games (resolving an open problem mentioned by [Babichenko-Rubinstein'2017, Goos-Rubinstein'2018, Ganor-CS'2018]); and an $\tilde{O}(n)$-query algorithm for $\varepsilon$-CE (resolving an open problem of [Babichenko'2020] and obtaining the first separation between $\varepsilon$-CE and $\varepsilon$-Nash equilibrium in the query complexity model). For extensive-form games, our algorithm implies a PTAS for $\mathit{normal}$ $\mathit{form}$ $\mathit{correlated}$ $\mathit{equilibria}$, a solution concept often conjectured to be computationally intractable (e.g. [Stengel-Forges'08, Fujii'23]).
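To fix the quantity being bounded, here is a small sketch that computes the swap regret of a (deterministic) play history: for every action, the benefit of retroactively replacing all of its plays with the single best alternative. The array-based interface is an illustrative assumption; the algorithm above drives this quantity below $\varepsilon T$ within $\mathsf{polylog}(n)$ rounds.

```python
def swap_regret(actions_played, losses):
    """Swap regret of a play history.
    actions_played[t] is the action chosen at round t; losses[t][j] is the loss
    of action j at round t. For each action a, we compare the loss incurred on
    the rounds where a was played against the best single replacement action."""
    n = len(losses[0])
    rounds_of = {a: [t for t, x in enumerate(actions_played) if x == a] for a in range(n)}
    regret = 0.0
    for a, rounds in rounds_of.items():
        incurred = sum(losses[t][a] for t in rounds)
        best_alternative = min(sum(losses[t][b] for t in rounds) for b in range(n))
        regret += incurred - best_alternative
    return regret

# Example: always playing action 0 while action 1 is better on every round.
print(swap_regret([0, 0, 0], [[1.0, 0.0]] * 3))   # 3.0
```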
In the envy-free cake-cutting problem we are given a resource, usually called a cake and represented as the $[0,1]$ interval, and a set of $n$ agents with heterogeneous preferences over … In the envy-free cake-cutting problem we are given a resource, usually called a cake and represented as the $[0,1]$ interval, and a set of $n$ agents with heterogeneous preferences over pieces of the cake. The goal is to divide the cake among the $n$ agents such that no agent is envious of any other agent. Even under a very general preferences model, this fundamental fair division problem is known to always admit an exact solution where each agent obtains a connected piece of the cake; we study the complexity of finding an approximate solution, i.e., a connected $\varepsilon$-envy-free allocation. For monotone valuations of cake pieces, Deng, Qi, and Saberi (2012) gave an efficient ($\textsf{poly}(\log(1/\varepsilon))$ queries) algorithm for three agents and posed the open problem of four (or more) monotone agents. Even for the special case of additive valuations, Brânzei and Nisan (2022) conjectured an $Ω(1/\varepsilon)$ lower bound on the number of queries for four agents. We provide the first efficient algorithm for finding a connected $\varepsilon$-envy-free allocation with four monotone agents. We also prove that as soon as valuations are allowed to be non-monotone, the problem becomes hard: it becomes PPAD-hard, requires $\textsf{poly}(1/\varepsilon)$ queries in the black-box model, and even $\textsf{poly}(1/\varepsilon)$ communication complexity. This constitutes, to the best of our knowledge, the first intractability result for any version of the cake-cutting problem in the communication complexity model.
We study local computation algorithms (LCA) for maximum matching. An LCA does not return its output entirely, but reveals parts of it upon query. For matchings, each query is a … We study local computation algorithms (LCA) for maximum matching. An LCA does not return its output entirely, but reveals parts of it upon query. For matchings, each query is a vertex $v$; the LCA should return whether $v$ is matched -- and if so to which neighbor -- while spending a small time per query. In this paper, we prove that any LCA that computes a matching that is at most an additive of $\epsilon n$ smaller than the maximum matching in $n$-vertex graphs of maximum degree $\Delta$ must take at least $\Delta^{\Omega(1/\varepsilon)}$ time. This comes close to the existing upper bounds that take $(\Delta/\epsilon)^{O(1/\epsilon^2)} polylog(n)$ time. In terms of sublinear time algorithms, our techniques imply that any algorithm that estimates the size of maximum matching up to an additive error of $\epsilon n$ must take $\Delta^{\Omega(1/\epsilon)}$ time. This negatively resolves a decade old open problem of the area (see Open Problem 39 of sublinear.info) on whether such estimates can be achieved in $poly(\Delta/\epsilon)$ time.
We study the fundamental, classical mechanism design problem of single-buyer multi-item Bayesian revenue-maximizing auctions under the lens of communication complexity between the buyer and the seller. Specifically, we ask whether … We study the fundamental, classical mechanism design problem of single-buyer multi-item Bayesian revenue-maximizing auctions under the lens of communication complexity between the buyer and the seller. Specifically, we ask whether using quantum communication can be more efficient than classical communication. We have two sets of results, revealing a surprisingly rich landscape - which looks quite different from both quantum communication in non-strategic parties, and classical communication in mechanism design. We first study the expected communication complexity of approximately optimal auctions. We give quantum auction protocols for buyers with unit-demand or combinatorial valuations that obtain an arbitrarily good approximation of the optimal revenue while running in exponentially more efficient communication compared to classical approximately optimal auctions. However, these auctions come with the caveat that they may require the seller to charge exponentially large payments from a deviating buyer. We show that this caveat is necessary - we give an exponential lower bound on the product of the expected quantum communication and the maximum payment. We then study the worst-case communication complexity of exactly optimal auctions in an extremely simple setting: additive buyer's valuations over two items. We show the following separations: 1. There exists a prior where the optimal classical auction protocol requires infinitely many bits, but a one-way message of 1 qubit and 2 classical bits suffices. 2. There exists a prior where no finite one-way quantum auction protocol can obtain the optimal revenue. However, there is a barely-interactive revenue-optimal quantum auction protocol. 3. There exists a prior where no multi-round quantum auction protocol with a finite bound on communication complexity can obtain the optimal revenue.
In this article, we consider the following question: Can we optimize objective functions from the training data we use to learn them? We formalize this question through a novel framework … In this article, we consider the following question: Can we optimize objective functions from the training data we use to learn them? We formalize this question through a novel framework we call optimization from samples (OPS). In OPS, we are given sampled values of a function drawn from some distribution and the objective is to optimize the function under some constraint. While there are interesting classes of functions that can be optimized from samples, our main result is an impossibility. We show that there are classes of functions that are statistically learnable and optimizable, but for which no reasonable approximation for optimization from samples is achievable. In particular, our main result shows that there is no constant factor approximation for maximizing coverage functions under a cardinality constraint using polynomially-many samples drawn from any distribution. We also show tight approximation guarantees for maximization under a cardinality constraint of several interesting classes of functions including unit-demand, additive, and general monotone submodular functions, as well as a constant factor approximation for monotone submodular functions with bounded curvature.
In this work we give two new algorithms that use similar techniques for (non-monotone) submodular function maximization subject to a cardinality constraint. The first is an offline fixed parameter tractable … In this work we give two new algorithms that use similar techniques for (non-monotone) submodular function maximization subject to a cardinality constraint. The first is an offline fixed parameter tractable algorithm that guarantees a $0.539$-approximation for all non-negative submodular functions. The second algorithm works in the random-order streaming model. It guarantees a $(1/2+c)$-approximation for symmetric functions, and we complement it by showing that no space-efficient algorithm can beat $1/2$ for asymmetric functions. To the best of our knowledge this is the first provable separation between symmetric and asymmetric submodular function maximization.
We study sublinear time algorithms for estimating the size of maximum matching in graphs. Our main result is a $(\frac{1}{2}+\Omega(1))$-approximation algorithm which can be implemented in $O(n^{1+\epsilon})$ time, where $n$ … We study sublinear time algorithms for estimating the size of maximum matching in graphs. Our main result is a $(\frac{1}{2}+\Omega(1))$-approximation algorithm which can be implemented in $O(n^{1+\epsilon})$ time, where $n$ is the number of vertices and the constant $\epsilon > 0$ can be made arbitrarily small. The best known lower bound for the problem is $\Omega(n)$, which holds for any constant approximation. Existing algorithms either obtain the greedy bound of $\frac{1}{2}$-approximation [Behnezhad FOCS'21], or require some assumption on the maximum degree to run in $o(n^2)$-time [Yoshida, Yamamoto, and Ito STOC'09]. We improve over these by designing a less "adaptive" augmentation algorithm for maximum matching that might be of independent interest.
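For context, the 1/2 barrier in question is the guarantee of the folklore greedy routine sketched below: any maximal matching has at least half as many edges as a maximum matching. The edge-list interface is an illustrative assumption.

```python
def greedy_matching(edges):
    """Greedy maximal matching: scan the edges once and add an edge whenever
    both endpoints are still free. The result is maximal, hence at least a
    1/2-approximation to the maximum matching size."""
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.add(u)
            matched.add(v)
    return matching

# Example: a path on 4 vertices; greedy may take the middle edge (size 1)
# while the maximum matching has size 2 -- hence the factor 1/2.
print(greedy_matching([(1, 2), (0, 1), (2, 3)]))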
Dynamic algorithms come in three main flavors: $\mathit{incremental}$ (insertions-only), $\mathit{decremental}$ (deletions-only), or $\mathit{fully}$ $\mathit{dynamic}$ (both insertions and deletions). Fully dynamic is the holy grail of dynamic algorithm design; it is … Dynamic algorithms come in three main flavors: $\mathit{incremental}$ (insertions-only), $\mathit{decremental}$ (deletions-only), or $\mathit{fully}$ $\mathit{dynamic}$ (both insertions and deletions). Fully dynamic is the holy grail of dynamic algorithm design; it is obviously more general than the other two, but is it strictly harder? Several works managed to reduce fully dynamic to the incremental or decremental models by taking advantage of either specific structure of the incremental/decremental algorithms (e.g. [HK99, HLT01, BKS12, ADKKP16, BS80, OL81, OvL81]), or specific order of insertions/deletions (e.g. [AW14,HKNS15,KPP16]). Our goal in this work is to get a black-box fully-to-incremental reduction that is as general as possible. We find that the following conditions are necessary: $\bullet$ The incremental algorithm must have a worst-case (rather than amortized) running time guarantee. $\bullet$ The reduction must work in what we call the $\mathit{deletions}$-$\mathit{look}$-$\mathit{ahead}$ $\mathit{model}$, where the order of deletions among current elements is known in advance. A notable practical example is the "sliding window" (FIFO) order of updates. Under those conditions, we design: $\bullet$ A simple, practical, amortized-fully-dynamic to worst-case-incremental reduction with a $\log(T)$-factor overhead on the running time, where $T$ is the total number of updates. $\bullet$ A theoretical worst-case-fully-dynamic to worst-case-incremental reduction with a $\mathsf{polylog}(T)$-factor overhead on the running time.
Sublinear time algorithms for approximating maximum matching size have long been studied. Much of the progress over the last two decades on this problem has been on the algorithmic side. … Sublinear time algorithms for approximating maximum matching size have long been studied. Much of the progress over the last two decades on this problem has been on the algorithmic side. For instance, an algorithm of Behnezhad [FOCS'21] obtains a 1/2-approximation in $\tilde{O}(n)$ time for $n$-vertex graphs. A more recent algorithm by Behnezhad, Roghani, Rubinstein, and Saberi [SODA'23] obtains a slightly-better-than-1/2 approximation in $O(n^{1+\epsilon})$ time. On the lower bound side, Parnas and Ron [TCS'07] showed 15 years ago that obtaining any constant approximation of maximum matching size requires $\Omega(n)$ time. Proving any super-linear in $n$ lower bound, even for $(1-\epsilon)$-approximations, has remained elusive since then. In this paper, we prove the first super-linear in $n$ lower bound for this problem. We show that at least $n^{1.2 - o(1)}$ queries in the adjacency list model are needed for obtaining a $(\frac{2}{3} + \Omega(1))$-approximation of maximum matching size. This holds even if the graph is bipartite and is promised to have a matching of size $\Theta(n)$. Our lower bound argument builds on techniques such as correlation decay that to our knowledge have not been used before in proving sublinear time lower bounds. We complement our lower bound by presenting two algorithms that run in strongly sublinear time of $n^{2-\Omega(1)}$. The first algorithm achieves a $(\frac{2}{3}-\epsilon)$-approximation; this significantly improves prior close-to-1/2 approximations. Our second algorithm obtains an even better approximation factor of $(\frac{2}{3}+\Omega(1))$ for bipartite graphs. This breaks the prevalent $2/3$-approximation barrier and importantly shows that our $n^{1.2-o(1)}$ time lower bound for $(\frac{2}{3}+\Omega(1))$-approximations cannot be improved all the way to $n^{2-o(1)}$.
Motivated by large-market applications such as crowdsourcing, we revisit the problem of budget-feasible mechanism design under a "small-bidder assumption". Anari, Goel, and Nikzad (2018) gave a mechanism that has optimal … Motivated by large-market applications such as crowdsourcing, we revisit the problem of budget-feasible mechanism design under a "small-bidder assumption". Anari, Goel, and Nikzad (2018) gave a mechanism that has optimal competitive ratio $1-1/e$ on worst-case instances. However, we observe that on many realistic instances, their mechanism is significantly outperformed by a simpler open clock auction by Ensthaler and Giebe (2014), although the open clock auction only achieves competitive ratio $1/2$ in the worst case. Is there a mechanism that gets the best of both worlds, i.e., a mechanism that is worst-case optimal and performs favorably on realistic instances? Our first main result is the design and the analysis of a natural mechanism that gives an affirmative answer to our question above: (i) We prove that on every instance, our mechanism performs at least as well as all uniform mechanisms, including Anari, Goel, and Nikzad's and Ensthaler and Giebe's mechanisms. (ii) Moreover, we empirically evaluate our mechanism on various realistic instances and observe that it beats the worst-case $1-1/e$ competitive ratio by a large margin and compares favorably to both mechanisms mentioned above. Our second main result is more interesting in theory: We show that in the semi-adversarial model of budget-smoothed analysis, where the adversary designs a single worst-case market for a distribution of budgets, our mechanism is optimal among all (including non-uniform) mechanisms; furthermore our mechanism guarantees a strictly better-than-$(1-1/e)$ expected competitive ratio for any non-trivial budget distribution regardless of the market. We complement the positive result with a characterization of the worst-case markets for any given budget distribution and prove a fairly robust hardness result that holds against any budget distribution and any mechanism.
An Exponentially Faster Algorithm for Submodular Maximization Under a Matroid Constraint. This paper studies the problem of submodular maximization under a matroid constraint. It is known since the 1970s that … An Exponentially Faster Algorithm for Submodular Maximization Under a Matroid Constraint. This paper studies the problem of submodular maximization under a matroid constraint. It is known since the 1970s that the greedy algorithm obtains a constant-factor approximation guarantee for this problem. Twelve years ago, a breakthrough result by Vondrák obtained the optimal 1 − 1/e approximation. Previous algorithms for this fundamental problem all have linear parallel runtime, which was considered impossible to accelerate until recently. The main contribution of this paper is a novel algorithm that provides an exponential speedup in the parallel runtime of submodular maximization under a matroid constraint, without loss in the approximation guarantee.
We consider the problem of maximizing submodular functions in single-pass streaming and secretaries-with-shortlists models, both with random arrival order. For cardinality constrained monotone functions, Agrawal, Shadravan, and Stein gave a … We consider the problem of maximizing submodular functions in single-pass streaming and secretaries-with-shortlists models, both with random arrival order. For cardinality constrained monotone functions, Agrawal, Shadravan, and Stein gave a single-pass $(1-1/e-\varepsilon)$-approximation algorithm using only linear memory, but their exponential dependence on $\varepsilon$ makes it impractical even for $\varepsilon=0.1$. We simplify both the algorithm and the analysis, obtaining an exponential improvement in the $\varepsilon$-dependence (in particular, $O(k/\varepsilon)$ memory). Extending these techniques, we also give a simple $(1/e-\varepsilon)$-approximation for non-monotone functions in $O(k/\varepsilon)$ memory. For the monotone case, we also give a corresponding unconditional hardness barrier of $1-1/e+\varepsilon$ for single-pass algorithms in randomly ordered streams, even assuming unlimited computation. Finally, we show that the algorithms are simple to implement and work well on real world datasets.
We study edit distance computation with preprocessing: the preprocessing algorithm acts on each string separately, and then the query algorithm takes as input the two preprocessed strings. This model is … We study edit distance computation with preprocessing: the preprocessing algorithm acts on each string separately, and then the query algorithm takes as input the two preprocessed strings. This model is inspired by scenarios where we would like to compute edit distance between many pairs in the same pool of strings. Our results include: $\bullet$ Permutation-LCS: If the LCS between two permutations has length $n-k$, we can compute it \textit{exactly} with $O(n \log(n))$ preprocessing and $O(k \log(n))$ query time. $\bullet$ Small edit distance: For general strings, if their edit distance is at most $k$, we can compute it \textit{exactly} with $O(n\log(n))$ preprocessing and $O(k^2 \log(n))$ query time. $\bullet$ Approximate edit distance: For the most general input, we can approximate the edit distance to within factor $(7+o(1))$ with preprocessing time $\tilde{O}(n^2)$ and query time $\tilde{O}(n^{1.5+o(1)})$. All of these results significantly improve over the state of the art in edit distance computation without preprocessing. Interestingly, by combining ideas from our algorithms with preprocessing, we provide new improved results for approximating edit distance without preprocessing in subquadratic time.
We consider (i) the problem of finding a (possibly mixed) Nash equilibrium in congestion games, and (ii) the problem of finding an (exponential precision) fixed point of the gradient descent … We consider (i) the problem of finding a (possibly mixed) Nash equilibrium in congestion games, and (ii) the problem of finding an (exponential precision) fixed point of the gradient descent dynamics of a smooth function f:[0,1]^n → ℝ. We prove that these problems are equivalent. Our result holds for various explicit descriptions of f, ranging from (almost general) arithmetic circuits, to degree-5 polynomials. By a very recent result of [Fearnley et al., STOC 2021], this implies that these problems are PPAD ∩ PLS-complete. As a corollary, we also obtain the following equivalence of complexity classes:
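A minimal sketch of the gradient-descent dynamics in question: projected gradient descent on $[0,1]^n$ run until the iterate (approximately) stops moving. The step size, stopping tolerance, and interface are illustrative assumptions; the hardness result concerns exponentially small precision rather than the modest tolerance used here.

```python
def gradient_descent_fixed_point(grad, n, eta=0.01, eps=1e-6, max_iter=100000):
    """Projected gradient descent on [0,1]^n.
    grad(x) returns the gradient of a smooth f at x (a list of n partials).
    Returns a point that the projected-GD update moves by at most eps, i.e.
    an approximate fixed point of the dynamics."""
    x = [0.5] * n
    for _ in range(max_iter):
        g = grad(x)
        nxt = [min(1.0, max(0.0, xi - eta * gi)) for xi, gi in zip(x, g)]
        if max(abs(a - b) for a, b in zip(nxt, x)) <= eps:
            return nxt
        x = nxt
    return x

# Example: f(x) = sum((x_i - 0.3)^2); the dynamics converge to x_i ~ 0.3.
print(gradient_descent_fixed_point(lambda x: [2 * (xi - 0.3) for xi in x], n=3))
```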
We consider the problem of implementing a fixed social choice function between multiple players (which takes as input a type t_i from each player i and outputs an outcome f(t_1, …, … We consider the problem of implementing a fixed social choice function between multiple players (which takes as input a type t_i from each player i and outputs an outcome f(t_1, …, t_n)), in which each player must be incentivized to follow the protocol. In particular, we study the communication requirements of a protocol which: (a) implements f, (b) implements f and computes payments that make it ex-post incentive compatible (EPIC) to follow the protocol, and (c) implements f and computes payments in a way that makes it dominant-strategy incentive compatible (DSIC) to follow the protocol.
The greedy algorithm for monotone submodular function maximization subject to a cardinality constraint is guaranteed to approximate the optimal solution to within a $1-1/e$ factor. Although it is well known that … The greedy algorithm for monotone submodular function maximization subject to a cardinality constraint is guaranteed to approximate the optimal solution to within a $1-1/e$ factor. Although it is well known that this guarantee is essentially tight in the worst case -- for greedy and in fact any efficient algorithm -- experiments show that greedy performs better in practice. We observe that for many applications in practice, the empirical distribution of the budgets (i.e., cardinality constraints) is supported on a wide range, and moreover, all the existing hardness results in theory break under a large perturbation of the budget. To understand the effect of the budget from both algorithmic and hardness perspectives, we introduce a new notion of budget-smoothed analysis. We prove that greedy is optimal for every budget distribution, and we give a characterization for the worst-case submodular functions. Based on these results, we show that on the algorithmic side, under realistic budget distributions, greedy and related algorithms enjoy provably better approximation guarantees that hold even for worst-case functions, and on the hardness side, there exist hard functions that are fairly robust to all the budget distributions.
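A minimal sketch of the greedy algorithm in question, with a toy coverage function standing in for the application; the value-oracle interface is an illustrative assumption.

```python
def greedy_submodular(f, ground_set, k):
    """Classic greedy for monotone submodular maximization under a cardinality
    constraint: k rounds, each adding the element with the largest marginal gain.
    Guarantees a (1 - 1/e) fraction of the optimum for monotone submodular f."""
    S = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for x in ground_set - S:
            gain = f(S | {x}) - f(S)
            if gain > best_gain:
                best, best_gain = x, gain
        if best is None:          # no element has positive marginal gain
            break
        S.add(best)
    return S

# Toy coverage function: value = number of distinct items covered.
if __name__ == "__main__":
    sets = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
    cover = lambda S: len(set().union(*[sets[x] for x in S]))
    print(greedy_submodular(cover, set(sets), k=2))   # a 2-element set covering 3 of the 4 items
```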
We formulate a new hardness assumption, the Strongish Planted Clique Hypothesis (SPCH), which postulates that any algorithm for planted clique must run in time n^Ω(log n) (so that the state-of-the-art … We formulate a new hardness assumption, the Strongish Planted Clique Hypothesis (SPCH), which postulates that any algorithm for planted clique must run in time n^Ω(log n) (so that the state-of-the-art running time of n^O(log n) is optimal up to a constant in the exponent). We provide two sets of applications of the new hypothesis. First, we show that SPCH implies (nearly) tight inapproximability results for the following well-studied problems in terms of the parameter k: Densest k-Subgraph, Smallest k-Edge Subgraph, Densest k-Subhypergraph, Steiner k-Forest, and Directed Steiner Network with k terminal pairs. For example, we show, under SPCH, that no polynomial time algorithm achieves o(k)-approximation for Densest k-Subgraph. This inapproximability ratio improves upon the previous best k^o(1) factor from (Chalermsook et al., FOCS 2017). Furthermore, our lower bounds hold even against fixed-parameter tractable algorithms with parameter k. Our second application focuses on the complexity of graph pattern detection. For both induced and non-induced graph pattern detection, we prove hardness results under SPCH, improving the running time lower bounds obtained by (Dalirrooyfard et al., STOC 2019) under the Exponential Time Hypothesis.
We study the communication complexity of incentive compatible auction-protocols between a monopolist seller and a single buyer with a combinatorial valuation function over $n$ items. Motivated by the fact that … We study the communication complexity of incentive compatible auction-protocols between a monopolist seller and a single buyer with a combinatorial valuation function over $n$ items. Motivated by the fact that revenue-optimal auctions are randomized [Tha04,MV10,BCKW10,Pav11,HR15] (as well as by an open problem of Babaioff, Gonczarowski, and Nisan [BGN17]),we focus on the randomized communication complexity of this problem (in contrast to most prior work on deterministic communication). We design simple, incentive compatible, and revenue-optimal auction-protocols whose expected communication complexity is much (in fact infinitely) more efficient than their deterministic counterparts. We also give nearly matching lower bounds on the expected communication complexity of approximately-revenue-optimal auctions. These results follow from a simple characterization of incentive compatible auction-protocols that allows us to prove lower bounds against randomized auction-protocols. In particular, our lower bounds give the first approximation-resistant, exponential separation between communication complexity of incentivizing vs implementing a Bayesian incentive compatible social choice rule, settling an open question of Fadel and Segal [FS09].
Longest common subsequence ($\mathsf{LCS}$) is a classic and central problem in combinatorial optimization. While $\mathsf{LCS}$ admits a quadratic time solution, recent evidence suggests that solving the problem may be impossible … Longest common subsequence ($\mathsf{LCS}$) is a classic and central problem in combinatorial optimization. While $\mathsf{LCS}$ admits a quadratic time solution, recent evidence suggests that solving the problem may be impossible in truly subquadratic time. A special case of $\mathsf{LCS}$ wherein each character appears at most once in every string is equivalent to the longest increasing subsequence problem ($\mathsf{LIS}$) which can be solved in quasilinear time. In this work, we present novel algorithms for approximating $\mathsf{LCS}$ in truly subquadratic time and $\mathsf{LIS}$ in truly sublinear time. Our approximation factors depend on the ratio of the optimal solution size over the input size. We denote this ratio by $\lambda$ and obtain the following results for $\mathsf{LCS}$ and $\mathsf{LIS}$ without any prior knowledge of $\lambda$. $\bullet$ A truly subquadratic time algorithm for $\mathsf{LCS}$ with approximation factor $\Omega(\lambda^3)$. $\bullet$A truly sublinear time algorithm for $\mathsf{LIS}$ with approximation factor $\Omega(\lambda^3)$. Triangle inequality was recently used by [Boroujeni, Ehsani, Ghodsi, HajiAghayi and Seddighin SODA 2018] and [Charkraborty, Das, Goldenberg, Koucky and Saks FOCS 2018] to present new approximation algorithms for edit distance. Our techniques for $\mathsf{LCS}$ extend the notion of triangle inequality to non-metric settings.
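For reference, the quasilinear exact algorithm for $\mathsf{LIS}$ mentioned above is patience sorting; a minimal sketch follows (the sublinear-time results in the text approximate this quantity without reading the whole input).

```python
import bisect

def lis_length(seq):
    """Patience sorting: length of the longest strictly increasing subsequence
    in O(n log n) time. tails[k] holds the smallest possible tail value of an
    increasing subsequence of length k + 1."""
    tails = []
    for x in seq:
        pos = bisect.bisect_left(tails, x)
        if pos == len(tails):
            tails.append(x)
        else:
            tails[pos] = x
    return len(tails)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))   # 4  (e.g. 1, 4, 5, 9)
```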
We prove that Bimatrix, the problem of finding a Nash equilibrium in a two-player game, is complete for the complexity class PPAD (Polynomial Parity Argument, Directed version) introduced by Papadimitriou … We prove that Bimatrix, the problem of finding a Nash equilibrium in a two-player game, is complete for the complexity class PPAD (Polynomial Parity Argument, Directed version) introduced by Papadimitriou in 1991. Our result, building upon the work of Daskalakis et al. [2006a] on the complexity of four-player Nash equilibria, settles a long standing open problem in algorithmic game theory. It also serves as a starting point for a series of results concerning the complexity of two-player Nash equilibria. In particular, we prove the following theorems: —Bimatrix does not have a fully polynomial-time approximation scheme unless every problem in PPAD is solvable in polynomial time. —The smoothed complexity of the classic Lemke-Howson algorithm and, in fact, of any algorithm for Bimatrix is not polynomial unless every problem in PPAD is solvable in randomized polynomial time. Our results also have a complexity implication in mathematical economics: —Arrow-Debreu market equilibria are PPAD -hard to compute.
A recent, active line of work achieves tight lower bounds for fundamental problems under the Strong Exponential Time Hypothesis (SETH). A celebrated result of Backurs and Indyk (STOC'15) proves that … A recent, active line of work achieves tight lower bounds for fundamental problems under the Strong Exponential Time Hypothesis (SETH). A celebrated result of Backurs and Indyk (STOC'15) proves that computing the Edit Distance of two sequences of length n in truly subquadratic O(n^{2−ε}) time, for some ε>0, is impossible under SETH. The result was extended by follow-up works to simpler looking problems like finding the Longest Common Subsequence (LCS).
We prove that finding an ε-approximate Nash equilibrium is PPAD-complete for constant ε and a particularly simple class of games: polymatrix, degree 3 graphical games, in which each player has only two actions.
The edit distance (a.k.a. the Levenshtein distance) between two strings is defined as the minimum number of insertions, deletions or substitutions of symbols needed to transform one string into another. The problem of computing the edit distance between two strings is a classical computational task, with a well-known algorithm based on dynamic programming. Unfortunately, all known algorithms for this problem run in nearly quadratic time.
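The well-known dynamic program referred to above is the textbook recurrence D[i][j] = min(D[i-1][j]+1, D[i][j-1]+1, D[i-1][j-1]+[x_i ≠ y_j]); the sketch below is a generic implementation for context, not an algorithm from any of these papers.

    # Textbook quadratic-time DP for edit (Levenshtein) distance, with a rolling row.
    def edit_distance(x, y):
        n, m = len(x), len(y)
        dp = list(range(m + 1))              # dp[j] = distance between "" and y[:j]
        for i in range(1, n + 1):
            prev_diag, dp[0] = dp[0], i
            for j in range(1, m + 1):
                cur = dp[j]
                dp[j] = min(dp[j] + 1,                            # delete x[i-1]
                            dp[j - 1] + 1,                        # insert y[j-1]
                            prev_diag + (x[i - 1] != y[j - 1]))   # substitute or match
                prev_diag = cur
        return dp[m]

    # Example: edit_distance("kitten", "sitting") == 3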
We prove that there exists a constant $\epsilon>0$ such that, assuming the Exponential Time Hypothesis for PPAD, computing an $\epsilon$-approximate Nash equilibrium in a two-player ($n \times n$) game requires quasi-polynomial time, $n^{\log^{1-o(1)} n}$. This matches (up to the o(1) term) the algorithm of Lipton, Markakis, and Mehta [LMM03]. Our proof relies on a variety of techniques from the study of probabilistically checkable proofs (PCP); this is the first time that such ideas are used for a reduction between problems inside PPAD. En route, we also prove new hardness results for computing Nash equilibria in games with many players. In particular, we show that computing an $\epsilon$-approximate Nash equilibrium in a game with n players requires $2^{\Omega(n)}$ oracle queries to the payoff tensors. This resolves an open problem posed by Hart and Nisan [HN13], Babichenko [Bab14], and Chen et al. [CCT15]. In fact, our results for n-player games are stronger: they hold with respect to the $(\epsilon,\delta)$-WeakNash relaxation recently introduced by Babichenko et al. [BPR16].
For a constant ϵ, we prove a poly(N) lower bound on the (randomized) communication complexity of ϵ-Nash equilibrium in two-player N × N games. For n-player binary-action games we prove an exp(n) lower bound for the (randomized) communication complexity of (ϵ,ϵ)-weak approximate Nash equilibrium, which is a profile of mixed actions such that at least a (1-ϵ)-fraction of the players are ϵ-best replying.
Consider a gambler who observes a sequence of independent, non-negative random numbers and is allowed to stop the sequence at any time, claiming a reward equal to the most recent observation. The famous prophet inequality of Krengel, Sucheston, and Garling asserts that a gambler who knows the distribution of each random variable can achieve at least half as much reward, in expectation, as a "prophet" who knows the sampled values of each random variable and can choose the largest one. We generalize this result to the setting in which the gambler and the prophet are allowed to make more than one selection, subject to a matroid constraint. We show that the gambler can still achieve at least half as much reward as the prophet; this result is the best possible, since it is known that the ratio cannot be improved even in the original prophet inequality, which corresponds to the special case of rank-one matroids. Generalizing the result still further, we show that under an intersection of $p$ matroid constraints, the prophet's reward exceeds the gambler's by a factor of at most $O(p)$, and this factor is also tight.
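In the rank-one special case discussed above, the factor-2 guarantee is already achieved by a fixed threshold placed at the median of the distribution of the maximum (Samuel-Cahn). The Monte Carlo sketch below only illustrates that classical special case, not the matroid algorithm of the paper; the sampler interface is an assumption made for the example.

    # Hedged sketch: empirical check of the single-item prophet inequality with a
    # median-of-the-maximum threshold (Samuel-Cahn). Not the matroid algorithm.
    import random

    def simulate(samplers, trials=100000):
        samples = [[s() for s in samplers] for _ in range(trials)]
        maxima = sorted(max(row) for row in samples)
        T = maxima[trials // 2]                 # empirical median of max_i X_i
        prophet = sum(max(row) for row in samples) / trials
        gambler = sum(next((x for x in row if x >= T), 0.0) for row in samples) / trials
        return gambler, prophet

    # Example: with five independent U[0,1] values, the gambler's average reward
    # should be at least roughly half the prophet's, up to sampling error.
    g, p = simulate([random.random for _ in range(5)])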
The edit distance between two strings is defined as the smallest number of insertions, deletions, and substitutions that need to be made to transform one of the strings to another one. Approximating edit distance in subquadratic time is "one of the biggest unsolved problems in the field of combinatorial pattern matching" [21]. Our main result is a quantum constant approximation algorithm for computing the edit distance in truly subquadratic time. More precisely, we give an O(n^{1.858}) quantum algorithm that approximates the edit distance within a factor of 7. We further extend this result to an O(n^{1.781}) quantum algorithm that approximates the edit distance within a larger constant factor. Our solutions are based on a framework for approximating edit distance in parallel settings. This framework requires as a black box an algorithm that computes the distances of several smaller strings all at once. For a quantum algorithm, we reduce the black box to metric estimation and provide efficient algorithms for approximating it. We further show that this framework enables us to approximate edit distance in distributed settings. To this end, we provide a MapReduce algorithm to approximate edit distance within a factor of 3, with sublinearly many machines and sublinear memory. Also, our algorithm runs in a logarithmic number of rounds.
We present a near-linear time algorithm that approximates the edit distance between two strings within a polylogarithmic factor. For strings of length n and every fixed ε > 0, the algorithm computes a (log n)^{O(1/ε)} approximation in n^{1+ε} time. This is an exponential improvement over the previously known approximation factor, 2^{Õ(√log n)}, with a comparable running time [Ostrovsky and Rabani, J. ACM 2007; Andoni and Onak, STOC 2009]. This result arises naturally in the study of a new asymmetric query model. In this model, the input consists of two strings x and y, and an algorithm can access y in an unrestricted manner, while being charged for querying every symbol of x. Indeed, we obtain our main result by designing an algorithm that makes a small number of queries in this model. We then provide a nearly-matching lower bound on the number of queries. Our lower bound is the first to expose hardness of edit distance stemming from the input strings being "repetitive", which means that many of their substrings are approximately identical. Consequently, our lower bound provides the first rigorous separation between edit distance and Ulam distance.
We consider a monopolist seller with n heterogeneous items, facing a single buyer. The buyer has a value for each item drawn independently according to (non-identical) distributions, and his value for a set of items is additive. The seller aims to maximize his revenue. It is known that an optimal mechanism in this setting may be quite complex, requiring randomization [19] and menus of infinite size [15]. Hart and Nisan [17] have initiated a study of two very simple pricing schemes for this setting: item pricing, in which each item is priced at its monopoly reserve; and bundle pricing, in which the entire set of items is priced and sold as one bundle. Hart and Nisan [17] have shown that neither scheme can guarantee more than a vanishingly small fraction of the optimal revenue. In sharp contrast, we show that for any distributions, the better of item and bundle pricing is a constant-factor approximation to the optimal revenue. We further discuss extensions to multiple buyers and to valuations that are correlated across items.
We prove an N^{2-o(1)} lower bound on the randomized communication complexity of finding an ε-approximate Nash equilibrium (for constant ε>0) in a two-player N×N game.
We show how to compute the edit distance between two strings of length $n$ up to a factor of $2^{\tilde{O}(\sqrt{\log n})}$ in $n^{1+o(1)}$ time. This is the first subpolynomial approximation algorithm for this problem that runs in near-linear time, improving on the state-of-the-art $n^{1/3+o(1)}$ approximation. Previously, approximation of $2^{\tilde{O}(\sqrt{\log n})}$ was known only for embedding edit distance into $\ell_1$, and it is not known if that embedding can be computed in less than quadratic time.
We study the query complexity of approximate notions of Nash equilibrium in games with a large number of players n. Our main result states that for n-player binary-action games and for constant ε, the query complexity of an ε-well-supported Nash equilibrium is exponential in n. As a consequence of this result, we get an exponential lower bound on the rate of convergence of adaptive dynamics to approximate Nash equilibria.
Edit distance is a measure of similarity of two strings based on the minimum number of character insertions, deletions, and substitutions required to transform one string into the other. The edit distance can be computed exactly using a dynamic programming algorithm that runs in quadratic time. Andoni, Krauthgamer and Onak (2010) gave a nearly linear time algorithm that approximates edit distance within approximation factor poly(log n). In this paper, we provide an algorithm with running time Õ(n^{2-2/7}) that approximates the edit distance within a constant factor.
Classic similarity measures of strings are longest common subsequence and Levenshtein distance (i.e., the classic edit distance). A classic similarity measure of curves is dynamic time warping. These measures can be computed by simple O(n^2) dynamic programming algorithms, and despite much effort no algorithms with significantly better running time are known. We prove that, even restricted to binary strings or one-dimensional curves, respectively, these measures do not have strongly subquadratic time algorithms, i.e., no algorithms with running time O(n^{2-ε}) for any ε > 0, unless the Strong Exponential Time Hypothesis fails. We generalize the result to edit distance for arbitrary fixed costs of the four operations (deletion in one of the two strings, matching, substitution), by identifying trivial cases that can be solved in constant time, and proving quadratic-time hardness on binary strings for all other cost choices. This improves and generalizes the known hardness result for Levenshtein distance [Backurs, Indyk STOC'15] by the restriction to binary strings and the generalization to arbitrary costs, and adds important problems to a recent line of research showing conditional lower bounds for a growing number of quadratic time problems. As our main technical contribution, we introduce a framework for proving quadratic-time hardness of similarity measures. To apply the framework it suffices to construct a single gadget, which encapsulates all the expressive power necessary to emulate a reduction from satisfiability. Finally, we prove quadratic-time hardness for longest palindromic subsequence and longest tandem subsequence via reductions from longest common subsequence, showing that conditional lower bounds based on the Strong Exponential Time Hypothesis also apply to string problems that are not necessarily similarity measures.
We prove that, assuming the exponential time hypothesis, finding an $\epsilon$-approximately optimal symmetric signaling scheme in a two-player zero-sum game requires quasi-polynomial time. This is tight by [CCDEHT'15] and resolves an open question of [Dughmi'14]. We also prove that finding a multiplicative approximation is NP-hard.
If a class of games is known to have a Nash equilibrium with probability values that are either zero or Ω(1) -- and thus with support of bounded size -- then obviously this equilibrium can be found exhaustively in polynomial time. Somewhat surprisingly, we show that there is a PTAS for the class of games whose equilibria are guaranteed to have small -- O(1/n) -- values, and therefore large -- Ω(n) -- supports. We also point out that there is a PTAS for games with sparse payoff matrices, a family for which the exact problem is known to be PPAD-complete [Chen, Deng, Teng 2006]. Both algorithms are of a special kind that we call oblivious: the algorithm just samples a fixed distribution on pairs of mixed strategies, and the game is only used to determine whether the sampled strategies comprise an ε-Nash equilibrium; the answer is "yes" with inverse polynomial probability (in the second case, the algorithm is actually deterministic). These results bring about the question: is there an oblivious PTAS for finding a Nash equilibrium in general games? We answer this question in the negative; our lower bound comes close to the quasi-polynomial upper bound of [Lipton, Markakis, Mehta 2003]. Another recent PTAS for anonymous games [Daskalakis, Papadimitriou 2007 and 2008, Daskalakis 2008] is also oblivious in a weaker sense appropriate for this class of games (it samples from a fixed distribution on unordered collections of mixed strategies), but its running time is exponential in 1/ε. We prove that any oblivious PTAS for anonymous games with two strategies and three player types must have 1/ε^α in the exponent of the running time for some α ≥ 1/3, rendering the algorithm in [Daskalakis 2008] (which works with any bounded number of player types) essentially optimal within oblivious algorithms. In contrast, we devise a poly(n) · (1/ε)^{O(log^2(1/ε))} non-oblivious PTAS for anonymous games with two strategies and any bounded number of player types. The key idea of our algorithm is to search not over unordered sets of mixed strategies, but over a carefully crafted set of collections of the first O(log 1/ε) moments of the distribution of the number of players playing strategy 1 at equilibrium. The algorithm works because of a probabilistic result of more general interest that we prove: the total variation distance between two sums of independent indicator random variables decreases exponentially with the number of moments of the two sums that are equal, independent of the number of indicators.
We study the query complexity of approximate notions of Nash equilibrium in games with a large number of players n and a constant number of actions m. Our main result states that even for constant ε, the query complexity of an ε-well-supported Nash equilibrium is exponential in n.
We provide simple and approximately revenue-optimal mechanisms in the multi-item multi-bidder settings. We unify and improve all previous results, as well as generalize the results to broader cases. In particular, we prove that the better of the following two simple, deterministic and Dominant Strategy Incentive Compatible mechanisms, a sequential posted price mechanism or an anonymous sequential posted price mechanism with entry fee, achieves a constant fraction of the optimal revenue among all randomized, Bayesian Incentive Compatible mechanisms, when buyers' valuations are XOS over independent items. If the buyers' valuations are subadditive over independent items, the approximation factor degrades to O(log m), where m is the number of items. We obtain our results by first extending the Cai-Devanur-Weinberg duality framework to derive an effective benchmark of the optimal revenue for subadditive bidders, and then analyzing this upper bound with new techniques.
We consider a monopolist seller facing a single buyer with additive valuations over n heterogeneous, independent items. It is known that in this important setting optimal mechanisms may require randomization [12], use menus of infinite size [9], and may be computationally intractable [8]. This has sparked recent interest in finding simple mechanisms that obtain reasonable approximations to the optimal revenue [10, 15, 3]. In this work we attempt to find the optimal simple mechanism.
We study the revenue maximization problem of a seller with n heterogeneous items for sale to a single buyer whose valuation function for sets of items is unknown and drawn from some distribution D. We show that if D is a distribution over subadditive valuations with independent items, then the better of pricing each item separately or pricing only the grand bundle achieves a constant-factor approximation to the revenue of the optimal mechanism. This includes buyers who are k-demand, additive up to a matroid constraint, or additive up to constraints of any downwards-closed set system (and whose values for the individual items are sampled independently), as well as buyers who are fractionally subadditive with item multipliers drawn independently. Our proof makes use of the core-tail decomposition framework developed in prior work showing similar results for the significantly simpler class of additive buyers [Li and Yao 2013; Babaioff et al. 2014].
Reducing approximate Longest Common Subsequence to approximate Edit Distance. Aviad Rubinstein and Zhao Song. Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 1591-1600. https://doi.org/10.1137/1.9781611975994.98. Given a pair of n-character strings, the problems of computing their Longest Common Subsequence and Edit Distance have been extensively studied for decades. For exact algorithms, LCS and Edit Distance (with character insertions and deletions) are equivalent; the state of the art running time is (almost) quadratic in n, and this is tight under plausible fine-grained complexity assumptions. But for approximation algorithms the picture is different: there is a long line of works with improved approximation factors for Edit Distance, but for LCS (with binary strings) only a trivial 1/2-approximation was known. In this work we give a reduction from approximate LCS to approximate Edit Distance, yielding the first efficient (1/2 + ϵ)-approximation algorithm for LCS for some constant ϵ > 0.
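For binary strings, the "trivial 1/2-approximation" mentioned above is just the best single-character common subsequence: any common subsequence consists of matched 0s plus matched 1s, and each of those two counts is at most min(#c in x, #c in y) for the corresponding character c, so the larger of the two is at least half the LCS. A two-line illustrative sketch:

    # Trivial 1/2-approximation for LCS of binary strings: best single-character
    # common subsequence. LCS(x, y) <= 2 * half_approx_lcs(x, y).
    def half_approx_lcs(x, y):
        return max(min(x.count(c), y.count(c)) for c in "01")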
We study lower bounds on the query complexity of determining correlated equilibrium. In particular, we consider a query model in which an n-player game is specified via a black box that returns players' utilities at pure action profiles. In this model, we establish that in order to compute a correlated equilibrium, any deterministic algorithm must query the black box an exponential (in n) number of times.
Algorithmic pricing is the computational problem that sellers (e.g., in supermarkets) face when trying to set prices for their items to maximize their profit in the presence of a known demand. Guruswami et al. (SODA, 2005) proposed this problem and gave logarithmic approximations (in the number of consumers) for the unit-demand and single-parameter cases where there is a specific set of consumers and their valuations for bundles are known precisely. Subsequently several versions of the problem have been shown to have poly-logarithmic inapproximability. This problem has direct ties to the important open question of better understanding the Bayesian optimal mechanism in multi-parameter agent settings; however, for this purpose approximation factors logarithmic in the number of agents are inadequate. It is therefore of vital interest to consider special cases where constant approximations are possible. We consider the unit-demand variant of this pricing problem. Here a consumer has a valuation for each different item and their value for a set of items is simply the maximum value they have for any item in the set. Instead of considering a set of consumers with precisely known preferences, like the prior algorithmic pricing literature, we assume that the preferences of the consumers are drawn from a distribution. This is the standard assumption in economics; furthermore, the setting of a specific set of customers with specific preferences, which is employed in all of the prior work in algorithmic pricing, is a special case of this general Bayesian pricing problem, where there is a discrete Bayesian distribution for preferences specified by picking one consumer uniformly from the given set of consumers. Notice that the distribution over the valuations for the individual items that this generates is obviously correlated. Our work complements these existing works by considering the case where the consumer's valuations for the different items are independent random variables. Our main result is a constant approximation algorithm for this problem that makes use of an interesting connection between this problem and the concept of virtual valuations from the single-parameter Bayesian optimal mechanism design literature.
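For reference, the virtual valuation alluded to at the end of the abstract is Myerson's $\phi(v) = v - \frac{1-F(v)}{f(v)}$ for a buyer with value distribution $F$ and density $f$; the monopoly reserve price is the $r$ solving $\phi(r) = 0$. For example, if $v$ is uniform on $[0,1]$ then $\phi(v) = 2v - 1$ and the reserve is $r = 1/2$. (This is the standard single-parameter definition, recalled here only for context.)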
We consider the problem of maximizing revenue for a monopolist offering multiple items to multiple heterogeneous buyers. We develop a simple mechanism that obtains a constant factor approximation under the assumption that the buyers' values are additive subject to a matroid feasibility constraint and independent across items. Importantly, different buyers in our setting can have different constraints on the sets of items they desire. Our mechanism is a sequential variant of two-part tariffs. Prior to our work, simple approximation mechanisms for such multi-buyer problems were known only for the special cases of all unit-demand or all additive value buyers.
We prove that finding an epsilon-Nash equilibrium in a succinctly representable game with many players is PPAD-hard for constant epsilon. Our proof uses succinct games, i.e. games whose payoff function is represented by a circuit. Our techniques build on a recent query complexity lower bound by Babichenko.
We show that the problem of finding an ε-approximate Nash equilibrium in an anonymous game with seven pure strategies is complete in PPAD, when the approximation parameter ε is exponentially small in the number of players.
We characterize the communication complexity of truthful mechanisms. Our departure point is the well known taxation principle. The taxation principle asserts that every truthful mechanism can be interpreted as follows: every player is presented with a menu that consists of a price for each bundle (the prices depend only on the valuations of the other players). Each player is allocated a bundle that maximizes his profit according to this menu. We define the taxation complexity of a truthful mechanism to be the logarithm of the maximum number of menus that may be presented to a player. Our main finding is that in general the taxation complexity essentially equals the communication complexity. The proof consists of two main steps. First, we prove that for rich enough domains the taxation complexity is at most the communication complexity. We then show that the taxation complexity is much smaller than the communication complexity only in "pathological" cases and provide a formal description of these extreme cases. Next, we study mechanisms that access the valuations via value queries only. In this setting we establish that the menu complexity - a notion that was already studied in several different contexts - characterizes the number of value queries that the mechanism makes in exactly the same way that the taxation complexity characterizes the communication complexity. Our approach yields several applications, including strengthening the solution concept with low communication overhead, fast computation of prices, and hardness of approximation by computationally efficient truthful mechanisms.
We show that, assuming the (deterministic) Exponential Time Hypothesis, distinguishing between a graph with an induced k-clique and a graph in which all k-subgraphs have density at most 1 - ε requires n^{Ω(log n)} time. Our result essentially matches the quasi-polynomial algorithms of Feige and Seltser [FS97] and Barman [Bar15] for this problem, and is the first one to rule out an additive PTAS for Densest k-Subgraph. We further strengthen this result by showing that our lower bound continues to hold when, in the soundness case, even subgraphs smaller by a near-polynomial factor (k' = k · 2^{-Ω(log n)}) are assumed to be at most (1 - ε)-dense. Our reduction is inspired by recent applications of the birthday technique [AIM14, BKW15]. Our analysis relies on information theoretical machinery and is similar in spirit to analyzing a parallel repetition of two-prover games in which the provers may choose to answer some challenges multiple times, while completely ignoring other challenges.
We prove that finding an $\epsilon$-approximate Nash equilibrium is $\mathsf{PPAD}$-complete for constant $\epsilon$ and a particularly simple class of games: polymatrix, degree 3 graphical games, in which each player has only two actions. As corollaries, we also prove similar inapproximability results for Bayesian Nash equilibrium in a two-player incomplete information game with a constant number of actions, for relative $\epsilon$-well supported Nash equilibrium in a two-player game, for market equilibrium in a nonmonotone market, for the generalized circuit problem defined by Chen, Deng, and Teng [J. ACM, 56 (2009)], and for approximate competitive equilibrium from equal incomes with indivisible goods.
We show that the edit distance between two strings of length n can be computed via a randomized algorithm within a factor of f(ε) in n^{1+ε} time as long as the edit distance is at least n^{1-δ} for some δ(ε) > 0.
The rank of a bimatrix game (A, B) is defined as rank(A + B). Computing a Nash equilibrium (NE) of a rank-0, i.e., zero-sum game is equivalent to linear programming (von Neumann'28, Dantzig'51). In 2005, Kannan and Theobald gave an FPTAS for constant rank games, and asked if there exists a polynomial time algorithm to compute an exact NE. Adsul et al. (2011) answered this question affirmatively for rank-1 games, leaving rank-2 and beyond unresolved.
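The rank-0 (zero-sum) case mentioned above reduces to the following linear program for the row player's mixed strategy $x$ against payoff matrix $A$: maximize $v$ subject to $\sum_i x_i A_{ij} \ge v$ for every column $j$, $\sum_i x_i = 1$, and $x \ge 0$. The column player's problem is the dual LP, and strong LP duality recovers the minimax theorem. (This is the classical von Neumann/Dantzig formulation, stated here only as background.)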
In the context of sparse principal component detection, we bring evidence towards the existence of a statistical price to pay for computational efficiency. We measure the performance of a test by the smallest signal strength that it can detect and we propose a computationally efficient method based on semidefinite programming. We also prove that the statistical performance of this test cannot be strictly improved by any computationally efficient method. Our results can be viewed as complexity theoretic lower bounds conditionally on the assumptions that some instances of the planted clique problem cannot be solved in randomized polynomial time.
We study a communication variant of local search. There is some fixed, commonly known graph G. Alice holds fA and Bob holds fB, both are functions that specify a value for each vertex. The goal is to find a local maximum of fA+fB with respect to G, i.e., a vertex v for which (fA+fB)(v) ≥ (fA+fB)(u) for each neighbor u of v.
We characterize optimal mechanisms for the multiple-good monopoly problem and provide a framework to find them. We show that a mechanism is optimal if and only if a measure µ derived from the buyer's type distribution satisfies certain stochastic dominance conditions. This measure expresses the marginal change in the seller's revenue under marginal changes in the rent paid to subsets of buyer types. As a corollary, we characterize the optimality of grand-bundling mechanisms, strengthening several results in the literature, where only sufficient optimality conditions have been derived. As an application, we show that the optimal mechanism for n independent uniform items each supported on [c, c + 1] is a grand-bundling mechanism, as long as c is sufficiently large, extending Pavlov's result for 2 items [Pav11]. At the same time, our characterization also implies that, for all c and for all sufficiently large n, the optimal mechanism for n independent uniform items supported on [c, c + 1] is not a grand bundling mechanism.
Longest common subsequence (LCS) is a classic and central problem in combinatorial optimization. While LCS admits a quadratic time solution, recent evidence suggests that solving the problem may be impossible in truly subquadratic time. A special case of LCS wherein each character appears at most once in every string is equivalent to the longest increasing subsequence problem (LIS) which can be solved in quasilinear time. In this work, we present novel algorithms for approximating LCS in truly subquadratic time and LIS in truly sublinear time. Our approximation factors depend on the ratio of the optimal solution size over the input size. We denote this ratio by λ and obtain the following results for LCS and LIS without any prior knowledge of λ. • A truly subquadratic time algorithm for LCS with approximation factor Ω(λ^3). • A truly sublinear time algorithm for LIS with approximation factor Ω(λ^3). Triangle inequality was recently used by Boroujeni et al. [1] and Chakraborty et al. [2] to present new approximation algorithms for edit distance. Our techniques for LCS extend the notion of triangle inequality to non-metric settings.
We introduce the smoothed analysis of algorithms, which continuously interpolates between the worst-case and average-case analyses of algorithms. In smoothed analysis, we measure the maximum over inputs of the expected performance of an algorithm under small random perturbations of that input. We measure this performance in terms of both the input size and the magnitude of the perturbations. We show that the simplex algorithm has smoothed complexity polynomial in the input size and the standard deviation of Gaussian perturbations.
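One common way to formalize the definition above (a simplified rendering, up to normalization conventions, not a quotation from the paper): for an algorithm with running time $T$, the smoothed complexity at input size $n$ and perturbation magnitude $\sigma$ is $\max_{|x| = n} \mathbb{E}_g\big[T(x + \sigma g)\big]$, where $g$ is a vector of independent Gaussians. The worst case is recovered as $\sigma \to 0$, and the average case as the perturbation dominates the input.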
The Competition Complexity of an auction measures how much competition is needed for the revenue of a simple auction to surpass the optimal revenue. A classic result from auction theory by Bulow and Klemperer [11] states that the Competition Complexity of VCG, in the case of n i.i.d. buyers and a single item, is 1. In other words, it is better to invest in recruiting one extra buyer and run a second price auction than to invest in learning exactly the buyers' underlying distribution and run the revenue-maximizing auction tailored to this distribution. In this paper we study the Competition Complexity of dynamic auctions. Consider the following problem: a monopolist is auctioning off m items in m consecutive stages to n interested buyers. A buyer realizes her value for item k in the beginning of stage k. How many additional buyers are necessary and sufficient for a second price auction at each stage to extract revenue at least that of the optimal dynamic auction? We prove that the Competition Complexity of dynamic auctions is at most 3n, and at least linear in n, even when the buyers' values are correlated across stages, under a monotone hazard rate assumption on the stage (marginal) distributions. This assumption can be relaxed if one settles for independent stages. We also prove results on the number of additional buyers necessary for VCG at every stage to be an α-approximation of the optimal revenue; we term this number the α-approximate Competition Complexity. For example, under the same mild assumptions on the stage distributions we prove that one extra buyer suffices for a -approximation. As a corollary we provide the first results on prior-independent dynamic auctions. These are, to the best of our knowledge, the first nontrivial positive guarantees for simple ex-post IR dynamic auctions for correlated stages. A key step towards proving bounds on the Competition Complexity is getting a good benchmark/upper bound to the optimal revenue. To this end, we extend the recent duality framework of [14] to dynamic settings. As an aside to our approach we obtain a revenue non-monotonicity lemma for dynamic auctions, which may be of independent interest.
We show that in the document exchange problem, where Alice holds x ∈ {0, 1}^n and Bob holds y ∈ {0, 1}^n, Alice can send Bob a message of size O(K(log^2 K + log n)) bits such that Bob can recover x using the message and his input y if the edit distance between x and y is no more than K, and output "error" otherwise. Both the encoding and decoding can be done in time Õ(n + poly(K)). This result significantly improves on the previous communication bounds under polynomial encoding/decoding time. We also show that in the referee model, where Alice and Bob hold x and y respectively, they can compute sketches of x and y of sizes poly(K log n) bits (the encoding), and send to the referee, who can then compute the edit distance between x and y together with all the edit operations if the edit distance is no more than K, and output "error" otherwise (the decoding). To the best of our knowledge, this is the first result for sketching edit distance using poly(K log n) bits. Moreover, the encoding phase of our sketching algorithm can be performed by scanning the input string in one pass. Thus our sketching algorithm also implies the first streaming algorithm for computing edit distance and all the edits exactly using poly(K log n) bits of space.
We provide a unified view of many recent developments in Bayesian mechanism design, including the black-box reductions of Cai et al., simple auctions for additive buyers, and posted-price mechanisms for unit-demand buyers. Additionally, we show that viewing these three previously disjoint lines of work through the same lens leads to new developments as well. First, we provide a duality framework for Bayesian mechanism design, which naturally accommodates multiple agents and arbitrary objectives/feasibility constraints. Using this, we prove that either a posted-price mechanism or the VCG auction with per-bidder entry fees achieves a constant factor of the optimal Bayesian IC revenue whenever buyers are unit-demand or additive, unifying previous breakthroughs of Chawla et al. and Yao, and improving both approximation ratios (from 33.75 to 24 and 69 to 8). Finally, we show that this view also leads to improved structural characterizations in the Cai et al. framework.
It is widely believed that computing payments needed to induce truthful bidding is somehow harder than simply computing the allocation. We show that the opposite is true: creating a randomized truthful mechanism is essentially as easy as a single call to a monotone allocation rule. Our main result is a general procedure to take a monotone allocation rule for a single-parameter domain and transform it (via a black-box reduction) into a randomized mechanism that is truthful in expectation and individually rational for every realization. The mechanism implements the same outcome as the original allocation rule with probability arbitrarily close to 1, and requires evaluating that allocation rule only once. We also provide an extension of this result to multiparameter domains and cycle-monotone allocation rules, under mild star-convexity and nonnegativity hypotheses on the type space and allocation rule, respectively. Because our reduction is simple, versatile, and general, it has many applications to mechanism design problems in which re-evaluating the allocation rule is either burdensome or informationally impossible. Applying our result to the multiarmed bandit problem, we obtain truthful randomized mechanisms whose regret matches the information-theoretic lower bound up to logarithmic factors, even though prior work showed this is impossible for truthful deterministic mechanisms. We also present applications to offline mechanism design, showing that randomization can circumvent a communication complexity lower bound for deterministic payments computation, and that it can also be used to create truthful shortest path auctions that approximate the welfare of the VCG allocation arbitrarily well, while having the same running time complexity as Dijkstra's algorithm.
Myerson's classic result provides a full description of how a seller can maximize revenue when selling a single item. We address the question of revenue maximization in the simplest possible multi-item setting: two items and a single buyer who has independently distributed values for the items, and an additive valuation. In general, the revenue achievable from selling two independent items may be strictly higher than the sum of the revenues obtainable by selling each of them separately. In fact, the structure of optimal (i.e., revenue-maximizing) mechanisms for two items even in this simple setting is not understood.
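A standard toy example (not from the paper) of the phenomenon described above: two i.i.d. items, each worth 1 or 2 with probability 1/2. Selling each item separately yields at most 1 per item (price 1 always sells; price 2 sells with probability 1/2), so 2 in total. Pricing the grand bundle at 3 sells whenever the total value is at least 3, which happens with probability 3/4, for revenue 9/4 > 2.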
We study classic cake-cutting problems, but in discrete models rather than using infinite-precision real values, specifically, focusing on their communication complexity. Using general discrete simulations of classical infinite-precision protocols (Robertson-Webb and moving-knife), we roughly partition the various fair-allocation problems into 3 classes: "easy" (constant number of rounds of logarithmically many bits), "medium" (poly-logarithmic total communication), and "hard". Our main technical result concerns two of the "medium" problems (perfect allocation for 2 players and equitable allocation for any number of players) which we prove are not in the "easy" class. Our main open problem is to separate the "hard" from the "medium" classes.
We study the randomized query complexity of approximate Nash equilibria (ANE) in large games. We prove that, for some constant $\epsilon>0$, any randomized oracle algorithm that computes an $\epsilon$-ANE in a binary-action, $n$-player game must make $2^{\Omega(n/\log n)}$ payoff queries. For the stronger solution concept of well-supported Nash equilibria (WSNE), Babichenko previously gave an exponential $2^{\Omega(n)}$ lower bound for the randomized query complexity of $\epsilon$-WSNE, for some constant $\epsilon>0$; the same lower bound was shown to hold for $\epsilon$-ANE, but only when $\epsilon=O(1/n)$. Our result answers an open problem posed by Hart and Nisan and Babichenko and is very close to the trivial upper bound of $2^n$. Our proof relies on a generic reduction from the problem of finding an $\epsilon$-WSNE to the problem of finding an $\epsilon/(4\alpha)$-ANE, in large games with $\alpha$ actions, which might be of independent interest.
In this paper, we propose the amphibious influence maximization (AIM) model that combines traditional marketing via content providers and viral marketing to consumers in social networks in a single framework. In AIM, a set of content providers and consumers form a bipartite network while consumers also form their social network, and influence propagates from the content providers to consumers and among consumers in the social network following the independent cascade model. An advertiser needs to select a subset of seed content providers and a subset of seed consumers, such that the influence from the seed providers passing through the seed consumers could reach a large number of consumers in the social network in expectation.
We introduce the notion of non-monotone utilities, which covers a wide variety of utility functions in economic theory. We show that it is PPAD-hard to compute an approximate Arrow-Debreu market equilibrium in markets with linear and non-monotone utilities. Building on this result, we settle the long-standing open problem regarding the computation of an approximate Arrow-Debreu market equilibrium in markets with CES utilities, by proving that it is PPAD-complete when the Constant Elasticity of Substitution parameter, ρ, is any constant less than -1.