All published works (146)

We investigate prophet inequalities with competitive ratios approaching $1$, seeking to generalize $k$-uniform matroids. We first show that large girth does not suffice: for all $k$, there exists a matroid of girth $\geq k$ and a prophet inequality instance on that matroid whose optimal competitive ratio is $\frac{1}{2}$. Next, we show $k$-fold matroid unions do suffice: we provide a prophet inequality with competitive ratio $1-O(\sqrt{\frac{\log k}{k}})$ for any $k$-fold matroid union. Our prophet inequality follows from an online contention resolution scheme. The key technical ingredient in our online contention resolution scheme is a novel bicriterion concentration inequality for arbitrary monotone $1$-Lipschitz functions over independent items which may be of independent interest. Applied to our particular setting, our bicriterion concentration inequality yields "Chernoff-strength" concentration for a $1$-Lipschitz function that is not (approximately) self-bounding.
We study the communication complexity of truthful combinatorial auctions, and in particular the case where valuations are either subadditive or single-minded, which we denote with $\mathsf{SubAdd}\cup\mathsf{SingleM}$. We show that for three bidders with valuations in $\mathsf{SubAdd}\cup\mathsf{SingleM}$, any deterministic truthful mechanism that achieves at least a $0.366$-approximation requires $\exp(m)$ communication. In contrast, a natural extension of [Fei09] yields a non-truthful $\mathrm{poly}(m)$-communication protocol that achieves a $\frac{1}{2}$-approximation, demonstrating a gap between the power of truthful mechanisms and non-truthful protocols for this problem. Our approach follows the taxation complexity framework laid out in [Dob16b], but applies this framework in a setting not encompassed by the techniques used in past work. In particular, the only successful prior application of this framework uses a reduction to simultaneous protocols which only applies for two bidders [AKSW20], whereas our three-player lower bounds are stronger than what can possibly arise from a two-player construction (since a trivial truthful auction guarantees a $\frac{1}{2}$-approximation for two players).
Cryptographic Self-Selection is a paradigm employed by modern Proof-of-Stake consensus protocols to select a block-proposing "leader." Algorand [Chen and Micali, 2019] proposes a canonical protocol, and Ferreira et al. [2022] establish bounds on $f(\alpha, \beta)$, the maximum fraction of rounds a strategic player can lead as a function of their stake $\alpha$ and a network connectivity parameter $\beta$. While both their lower and upper bounds are non-trivial, there is a substantial gap between them (for example, they establish $f(10\%, 1) \in [10.08\%, 21.12\%]$), leaving open the question of how significant of a concern these manipulations are. We develop computational methods to provably nail $f(\alpha, \beta)$ for any desired $(\alpha, \beta)$ up to arbitrary precision, and implement our method on a wide range of parameters (for example, we confirm $f(10\%, 1) \in [10.08\%, 10.15\%]$). Methodologically, estimating $f(\alpha, \beta)$ can be phrased as estimating to high precision the value of a Markov Decision Process whose states are countably-long lists of real numbers. Our methodological contributions involve (a) reformulating the question instead as computing to high precision the expected value of a distribution that is a fixed point of a non-linear sampling operator, and (b) provably bounding the error induced by various truncations and sampling estimations of this distribution (which appears intractable to solve in closed form). One technical challenge, for example, is that natural sampling-based estimates of the mean of our target distribution are not unbiased estimators, and therefore our methods necessarily go beyond claiming sufficiently-many samples to be close to the mean.
Transaction Fee Mechanism Design studies auctions run by untrusted miners for transaction inclusion in a blockchain. Under previously-considered desiderata, an auction is considered 'good' if, informally-speaking, each party (i.e., the miner, the users, and coalitions of both miners and users) has no incentive to deviate from the fixed and pre-determined protocol. In other words, prior works posit that a 'good' auction should be 'simple for users', 'simple for miners', and 'resistant to collusion'. In this paper, we propose a novel desideratum for Transaction Fee Mechanisms. We say that a TFM is off-chain influence proof when the miner cannot achieve additional revenue by running a separate auction off-chain. While the previously-highlighted EIP-1559 is the gold standard according to prior desiderata (satisfying simplicity for users and miners along with collusion resistance in the 'typical' case where supply is functionally unlimited), we show that it does not satisfy off-chain influence proofness. Intuitively, this holds because a Bayesian revenue-maximizing miner can strictly increase profits by persuasively threatening to censor any bids that do not transfer a tip directly to the miner off-chain. On the other hand, we reconsider the Cryptographic (multi-party computation assisted) Second Price Auction mechanism (Shi et al., 2023), which is technically not 'simple for miners' according to previous desiderata (since miners may wish to set a reserve by fabricating bids). We argue that the space of TFMs can be naturally expanded to solicit an input from the miner, for example by asking them to set the reserve price of the auction. We show that in this model, the Cryptographic Second Price Auction satisfies simplicity for users and miners, and off-chain influence proofness, since it allows a Bayesian miner to maximize their revenue by posting an optimal reserve price. Finally, we prove a strong impossibility result: no mechanism satisfies all previously-considered properties along with off-chain influence proofness, even with unlimited supply, and even after soliciting miner input.
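To make the object under discussion concrete, here is a minimal sketch of an EIP-1559-style allocation rule in a simplified one-shot model: bids at or above a posted base fee are included up to capacity, the base fee is burned, and the excess is paid to the miner as a tip. The function name, bids, and parameters are illustrative assumptions; real EIP-1559 lets users cap their tip separately and adjusts the base fee block by block.

```python
# Sketch of an EIP-1559-style transaction fee mechanism (simplified one-shot
# model): bids >= base_fee are eligible; the base fee is burned and the excess
# goes to the miner as a tip. All names and values here are illustrative.

def eip1559_allocate(bids, base_fee, capacity):
    """Return (included bids, total burned, miner revenue from tips)."""
    eligible = sorted((b for b in bids if b >= base_fee), reverse=True)
    included = eligible[:capacity]
    burned = base_fee * len(included)
    miner_revenue = sum(b - base_fee for b in included)
    return included, burned, miner_revenue

included, burned, tips = eip1559_allocate([5.0, 3.2, 1.9, 2.5], base_fee=2.0, capacity=3)
print(included, burned, tips)  # [5.0, 3.2, 2.5] 6.0 4.7
```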
We consider truthful combinatorial auctions with items $M = [m]$ for sale to $n$ bidders, where each bidder $i$ has a private monotone valuation function $v_i: 2^M \to \mathbb{R}_+$. Among truthful mechanisms, maximal-in-range (MIR) mechanisms (sometimes called VCG-based) achieve the best-known approximation guarantees among all poly-communication deterministic truthful mechanisms in all previously-studied settings. Our work settles the communication complexity necessary to achieve any approximation guarantee via an MIR mechanism. Specifically: Let $\mathsf{MIR}_{\mathsf{SubMod}}(m, k)$ denote the best approximation guarantee achievable by an MIR mechanism using $2^k$ communication between bidders with submodular valuations over $m$ items. Then for all $k = \Omega(\log m)$, $\mathsf{MIR}_{\mathsf{SubMod}}(m, k) = \Omega(\sqrt{m}/(k\log(m/k)))$. When we set $k = \Theta(\log m)$, this improves the previous best lower bound for polynomial communication maximal-in-range mechanisms from $\Omega(m^{1/3}/\log^{2/3} m)$ to $\Omega(\sqrt{m}/\log m)$. Additionally, $\mathsf{MIR}_{\mathsf{SubMod}}(m, k) = O(\sqrt{m}/k)$. Moreover, our mechanism can be implemented with $2^k$ simultaneous value queries and computation, and is optimal with respect to the value query and computational/succinct representation models. The mechanism also works for bidders with subadditive valuations. When $k = \Theta(\log m)$, this improves the previous best approximation guarantee for polynomial communication maximal-in-range mechanisms from $O(\sqrt{m})$ to $O(\sqrt{m}/\log m)$. Let also $\mathsf{MIR}_{\mathsf{Gen}}(m, k)$ denote the best approximation guarantee achievable by an MIR mechanism using $2^k$ communication between bidders with general valuations over $m$ items. Then for all $k = \Omega(\log m)$, $\mathsf{MIR}_{\mathsf{Gen}}(m, k) = \Omega(m/k)$. When $k = \Theta(\log m)$, this improves the previous best lower bound for polynomial communication maximal-in-range mechanisms from $\Omega(m/\log^2 m)$ to $\Omega(m/\log m)$. Additionally, $\mathsf{MIR}_{\mathsf{Gen}}(m, k) = O(m/k)$. Moreover, our mechanism can be implemented with $2^k$ simultaneous value queries and computation, and is optimal with respect to the value query and computational/succinct representation models. When $k = \Theta(\log m)$, this improves the previous best approximation guarantee for polynomial communication maximal-in-range mechanisms from $O(m/\sqrt{\log m})$ to $O(m/\log m)$.
The competition complexity of an auction setting is the number of additional bidders needed such that the simple mechanism of selling items separately (with additional bidders) achieves greater revenue than the complex (randomized, prior-dependent, Bayesian-truthful) optimal mechanism without the additional bidders. Our main result settles the competition complexity of $n$ bidders with additive values over $m < n$ independent items at $\Theta(\sqrt{nm})$. The $O(\sqrt{nm})$ upper bound is due to [BW19], and our main result improves the prior lower bound of $\Omega(\ln n)$ to $\Omega(\sqrt{nm})$. Our main result follows from an explicit construction of a Bayesian IC auction for $n$ bidders with additive values over $m<n$ independent items drawn from the Equal Revenue curve truncated at $\sqrt{nm}$ ($\mathcal{ER}_{\le \sqrt{nm}}$), which achieves revenue that exceeds $\text{SRev}_{n+\sqrt{nm}}(\mathcal{ER}_{\le \sqrt{nm}}^m)$. Along the way, we show that the competition complexity of $n$ bidders with additive values over $m$ independent items is exactly equal to the minimum $c$ such that $\text{SRev}_{n+c}(\mathcal{ER}_{\le p}^m) \geq \text{Rev}_n(\mathcal{ER}_{\le p}^m)$ for all $p$ (that is, some truncated Equal Revenue curve witnesses the worst-case competition complexity). Interestingly, we also show that the untruncated Equal Revenue curve does not witness the worst-case competition complexity when $n > m$: $\text{Rev}_n(\mathcal{ER}^m) = nm+O_m(\ln n) \leq \text{SRev}_{n+O_m(\ln n)}(\mathcal{ER}^m)$, and therefore our result can only follow by considering all possible truncations.
Many real-life contractual relations differ completely from the clean, static model at the heart of principal-agent theory. Typically, they involve repeated strategic interactions of the principal and agent, taking place under uncertainty and over time. While appealing in theory, players seldom use complex dynamic strategies in practice, often preferring to circumvent complexity and approach uncertainty through learning. We initiate the study of repeated contracts with a learning agent, focusing on agents who achieve no-regret outcomes. Optimizing against a no-regret agent is a known open problem in general games; we achieve an optimal solution to this problem for a canonical contract setting, in which the agent's choice among multiple actions leads to success/failure. The solution has a surprisingly simple structure: for some $\alpha > 0$, initially offer the agent a linear contract with scalar $\alpha$, then switch to offering a linear contract with scalar $0$. This switch causes the agent to "free-fall" through their action space and during this time provides the principal with non-zero reward at zero cost. Despite apparent exploitation of the agent, this dynamic contract can leave *both* players better off compared to the best static contract. Our results generalize beyond success/failure, to arbitrary non-linear contracts which the principal rescales dynamically. Finally, we quantify the dependence of our results on knowledge of the time horizon, and are the first to address this consideration in the study of strategizing against learning agents.
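As a toy illustration of the "free-fall" dynamic, the sketch below simulates the two-phase contract against a multiplicative-weights learner standing in for a generic no-regret agent. The action set, learning rate, horizon, and switch time are illustrative assumptions, not the paper's optimized values; the point is only that after the switch to scalar $0$, the agent keeps taking costly actions for a while, yielding the principal reward at zero cost.

```python
import math, random

# Toy simulation of the two-phase ("free-fall") dynamic contract against a
# multiplicative-weights agent. Actions (cost, success probability), horizon,
# learning rate and switch time T1 are illustrative, not values from the paper.

random.seed(0)
actions = [(0.0, 0.1), (0.2, 0.5), (0.4, 0.9)]  # (cost, success probability)
T, T1, eta, alpha = 2000, 1000, 0.05, 0.6
weights = [1.0] * len(actions)

principal_total = 0.0
for t in range(T):
    share = alpha if t < T1 else 0.0   # linear contract: agent keeps `share` of the reward
    # sample the agent's action proportionally to its current weights
    r = random.random() * sum(weights)
    i = 0
    while r > weights[i]:
        r -= weights[i]
        i += 1
    cost, p = actions[i]
    if random.random() < p:            # success: reward 1, split per the contract
        principal_total += 1.0 - share
    # full-information MW update on each action's expected utility under `share`
    for j, (c, q) in enumerate(actions):
        weights[j] *= math.exp(eta * (share * q - c))

print(f"principal's average per-round payoff: {principal_total / T:.3f}")
```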
Consider a round-robin tournament on $n$ teams, where a winner must be (possibly randomly) selected as a function of the results from the ${n \choose 2}$ pairwise matches. A tournament rule is said to be $k$-SNM-$\alpha$ if no set of $k$ teams can ever manipulate the ${k \choose 2}$ pairwise matches between them to improve the joint probability that one of these $k$ teams wins by more than $\alpha$. Prior work identifies multiple simple tournament rules that are 2-SNM-1/3 (Randomized Single Elimination Bracket [SSW17], Randomized King of the Hill [SWZZ20], Randomized Death Match [DW21]), which is optimal for $k = 2$ among all Condorcet-consistent rules (that is, rules that select an undefeated team with probability 1). Our main result establishes that Randomized Death Match is 3-SNM-(31/60), which is tight (for Randomized Death Match). This is the first tight analysis of any Condorcet-consistent tournament rule for at least three manipulating teams. Our proof approach is novel in this domain: we explicitly find the most-manipulable tournament, and directly show that no other tournament can be more manipulable. In addition to our main result, we establish that Randomized Death Match disincentivizes Sybil attacks (where a team enters multiple copies of themselves into the tournament, and arbitrarily manipulates the outcomes of matches between their copies). Specifically, for any tournament, and any team $u$ that is not a Condorcet winner, the probability that $u$ or one of its Sybils wins in Randomized Death Match approaches $0$ as the number of Sybils approaches $\infty$.
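Randomized Death Match, as studied in this line of work, repeatedly plays a match between two uniformly random remaining teams and eliminates the loser. A minimal sketch, with an illustrative tournament matrix:

```python
import random

# Sketch of Randomized Death Match: repeatedly draw two remaining teams
# uniformly at random, play their match, eliminate the loser, and return the
# last team standing. beats[i][j] is True if team i beats team j.

def randomized_death_match(beats, rng=random):
    remaining = list(range(len(beats)))
    while len(remaining) > 1:
        i, j = rng.sample(remaining, 2)
        remaining.remove(j if beats[i][j] else i)
    return remaining[0]

# Illustrative example: team 0 beats everyone (a Condorcet winner), so the
# rule, being Condorcet-consistent, always selects team 0.
beats = [
    [False, True,  True ],
    [False, False, True ],
    [False, False, False],
]
print(randomized_death_match(beats))  # 0
```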
For a set $M$ of $m$ elements, we define a decreasing chain of classes of normalized monotone-increasing valuation functions from $2^M$ to $\mathbb{R}_{\geq 0}$, parameterized by an integer $q \in [2,m]$. For a given $q$, we refer to the class as $q$-partitioning. A valuation function is subadditive if and only if it is $2$-partitioning, and fractionally subadditive if and only if it is $m$-partitioning. Thus, our chain establishes an interpolation between subadditive and fractionally subadditive valuations. We show that this interpolation is smooth, interpretable, and non-trivial. We interpolate prior results that separate subadditive and fractionally subadditive for all $q \in \{2,\ldots, m\}$. Two highlights are the following: (i) an $\Omega\left(\frac{\log \log q}{\log \log m}\right)$-competitive posted price mechanism for $q$-partitioning valuations. Note that this matches asymptotically the state-of-the-art for both subadditive ($q=2$) [DKL20] and fractionally subadditive ($q=m$) [FGL15]. (ii) Two upper-tail concentration inequalities on $1$-Lipschitz, $q$-partitioning valuations over independent items. One extends the state-of-the-art for $q=m$ to $q<m$, the other improves the state-of-the-art for $q=2$ for $q > 2$. Our concentration inequalities imply several corollaries that interpolate between subadditive and fractionally subadditive, for example: $\mathbb{E}[v(S)]\le (1 + 1/\log q)\text{Median}[v(S)] + O(\log q)$. To prove this, we develop a new isoperimetric inequality using Talagrand's method of control by $q$ points, which may be of independent interest. We also discuss other probabilistic inequalities and game-theoretic applications of $q$-partitioning valuations, and connections to subadditive MPH-$k$ valuations [EFNTW19].
We consider the problem of repeatedly auctioning a single item to multiple i.i.d. buyers who each use a no-regret learning algorithm to bid over time. In particular, we study the seller's optimal revenue, if they know that the buyers are no-regret learners (but only that their behavior satisfies some no-regret property -- they do not know the precise algorithm/heuristic used). Our main result designs an auction that extracts revenue equal to the full expected welfare whenever the buyers are "mean-based" (a property satisfied by standard no-regret learning algorithms such as Multiplicative Weights, Follow-the-Perturbed-Leader, etc.). This extends a main result of [BMSW18] which held only for a single buyer. Our other results consider the case when buyers are mean-based but never overbid. On this front, [BMSW18] provides a simple LP formulation for the revenue-maximizing auction for a single buyer. We identify several formal barriers to extending this approach to multiple buyers.
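For intuition, a "mean-based" learner such as Multiplicative Weights places almost all of its probability on bids whose cumulative historical utility is near-maximal. Below is a minimal sketch of such a bidder on a discrete bid grid in a repeated first-price auction; the buyer's value, bid grid, opponent model, and learning rate are illustrative assumptions, not constructs from the paper.

```python
import math, random

# Sketch of a "mean-based" no-regret bidder: Multiplicative Weights over a
# discrete grid of bids in a repeated first-price auction. Value, grid,
# opponent bid distribution, and learning rate are illustrative.

random.seed(1)
value = 1.0
bids = [k / 10 for k in range(11)]       # bid grid {0.0, 0.1, ..., 1.0}
weights = [1.0] * len(bids)
eta = 0.1

for t in range(5000):
    opponent = random.uniform(0, 1)      # stand-in for the other bidders' max bid
    # utility of each bid: win (and pay your bid) iff you outbid the opponent
    utils = [(value - b) if b > opponent else 0.0 for b in bids]
    for i, u in enumerate(utils):
        weights[i] *= math.exp(eta * u)  # full-information MW update

best = max(range(len(bids)), key=lambda i: weights[i])
print(f"MW concentrates on bid {bids[best]:.1f}")  # ~0.5 vs a uniform opponent
```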
Seminal work of Eyal and Sirer (2014) establishes that a strategic Bitcoin miner may strictly profit by deviating from the intended Bitcoin protocol, using a strategy now termed *selfish mining*. More specifically, any miner with $>1/3$ of the total hashrate can earn bitcoin at a faster rate by selfish mining than by following the intended protocol (depending on network conditions, a lower fraction of hashrate may also suffice). One convincing critique of selfish mining in practice is that the presence of a selfish miner is *statistically detectable*: the pattern of orphaned blocks created by the presence of a selfish miner cannot be explained by natural network delays. Therefore, if an attacker chooses to selfish mine, users can detect this, and this may (significantly) negatively impact the value of BTC. So while the attacker may get slightly more bitcoin by selfish mining, these bitcoin may be worth significantly less USD. We develop a selfish mining variant that is provably *statistically undetectable*: the pattern of orphaned blocks is statistically identical to a world with only honest miners but higher network delay. Specifically, we consider a stylized model where honest miners with network delay produce orphaned blocks at each height independently with probability $\beta'$. We propose a selfish mining strategy that instead produces orphaned blocks at each height independently with probability $\beta > \beta'$. We further show that our strategy is strictly profitable for attackers with $38.2\% \ll 50\%$ of the total hashrate (and this holds for all natural orphan rates $\beta'$).
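To illustrate the stylized model (not the attack strategy itself): an observer watching the chain sees, at each height, only whether an orphan occurred. In this model both honest-mining-with-delay and the undetectable strategy produce i.i.d. orphan indicators, differing only in their rate, so the observed pattern is consistent with honest mining under a higher network delay. The rates and horizon below are illustrative.

```python
import random

# Toy illustration of the stylized model above: at each block height an
# orphan occurs independently -- with probability beta_prime under honest
# mining with delay, and with probability beta > beta_prime under the
# undetectable strategy. Both traces are i.i.d. Bernoulli, so the only
# statistic an observer can extract is the rate. Values are illustrative.

random.seed(0)
beta_prime, beta, heights = 0.02, 0.05, 100_000

honest = [random.random() < beta_prime for _ in range(heights)]
attack = [random.random() < beta for _ in range(heights)]

print(f"honest orphan rate: {sum(honest) / heights:.4f}")
print(f"attack orphan rate: {sum(attack) / heights:.4f}")
```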
We provide a simple $(1-O(\frac{1}{\sqrt{k}}))$-selectable Online Contention Resolution Scheme for $k$-uniform matroids against a fixed-order adversary. If $A_i$ and $G_i$ denote the set of selected elements and the set of realized active elements among the first $i$ (respectively), our algorithm selects with probability $1-\frac{1}{\sqrt{k}}$ any active element $i$ such that $|A_{i-1}| + 1 \leq (1-\frac{1}{\sqrt{k}})\cdot \mathbb{E}[|G_i|]+\sqrt{k}$. This implies a $(1-O(\frac{1}{\sqrt{k}}))$ prophet inequality against fixed-order adversaries for $k$-uniform matroids that is considerably simpler than previous algorithms [Ala14, AKW14, JMZ22]. We also prove that no OCRS can be $(1-\Omega(\sqrt{\frac{\log k}{k}}))$-selectable for $k$-uniform matroids against an almighty adversary. This guarantee is matched by the (known) simple greedy algorithm that accepts every active element with probability $1-\Theta(\sqrt{\frac{\log k}{k}})$ [HKS07].
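The selection rule above is explicit enough to transcribe directly. A sketch, under the assumption that the scheme is given each element's marginal probability $x_i$ of being active (so $\mathbb{E}[|G_i|]$ is a running sum of the $x_i$); the instance at the bottom is illustrative.

```python
import random

# Sketch of the OCRS for k-uniform matroids described above: select an active
# element i with probability 1 - 1/sqrt(k) whenever
# |A_{i-1}| + 1 <= (1 - 1/sqrt(k)) * E[|G_i|] + sqrt(k).
# We assume element i is active independently with a known probability x_i.

def ocrs_k_uniform(x, active, k, rng=random):
    root_k = k ** 0.5
    selected = []          # A_{i-1}
    expected_active = 0.0  # E[|G_i|] = x_1 + ... + x_i
    for i, (x_i, is_active) in enumerate(zip(x, active)):
        expected_active += x_i
        if (is_active
                and len(selected) + 1 <= (1 - 1 / root_k) * expected_active + root_k
                and len(selected) < k  # redundant safeguard when sum(x) <= k
                and rng.random() < 1 - 1 / root_k):
            selected.append(i)
    return selected

k = 16
x = [0.25] * 64            # sum(x) = k, i.e., on the k-uniform matroid polytope
active = [random.random() < x_i for x_i in x]
print(ocrs_k_uniform(x, active, k))
```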
Despite having the same basic prophet inequality setup and model of loss aversion, conclusions in our multi-dimensional model differ considerably from those in the one-dimensional model of Kleinberg et al. For example, Kleinberg et al. give a tight closed-form on the competitive ratio that an online decision-maker can achieve as a function of $\lambda$, for any $\lambda \geq 0$. In our multi-dimensional model, there is a sharp phase transition: if $k$ denotes the number of dimensions, then when $\lambda \cdot (k-1) \geq 1$, no non-trivial competitive ratio is possible. On the other hand, when $\lambda \cdot (k-1) < 1$, we give a tight bound on the achievable competitive ratio (similar to Kleinberg et al.). As another example, Kleinberg et al. uncover an exponential improvement in their competitive ratio for the random-order vs. worst-case prophet inequality problem. In our model with $k\geq 2$ dimensions, the gap is at most a constant factor. We uncover several additional key differences in the multi- and single-dimensional models.
We introduce locality: a new property of multi-bidder auctions that formally separates the simplicity of optimal single-dimensional multi-bidder auctions from the complexity of optimal multi-dimensional multi-bidder auctions. Specifically, consider the revenue-optimal, Bayesian Incentive Compatible auction for buyers with valuations drawn from $\vec{D} := \times_i D_i$, where each distribution has support-size $n$. This auction takes as input a valuation profile $\vec{v}$ and produces as output an allocation of the items and prices to charge, $\mathrm{Opt}_{\vec{D}}(\vec{v})$. When each $D_i$ is single-dimensional, this mapping is locally-implementable: defining each input $v_i$ requires $\Theta(\log n)$ bits, and $\mathrm{Opt}_{\vec{D}}(\vec{v})$ can be fully determined using just $\Theta(\log n)$ bits from each $D_i$. This follows immediately from Myerson's virtual value theory [36].
Cryptographic Self-Selection is a subroutine used to select a leader for modern proof-of-stake consensus protocols. In cryptographic self-selection, each round $r$ has a seed $Q_r$. In round $r$, each account owner is asked to digitally sign $Q_r$, hash their digital signature to produce a credential, and then broadcast this credential to the entire network. A publicly-known function scores each credential in a manner so that the distribution of the lowest scoring credential is identical to the distribution of stake owned by each account. The user who broadcasts the lowest-scoring credential is the leader for round $r$, and their credential becomes the seed $Q_{r+1}$. Such protocols leave open the possibility of manipulation: a user who owns multiple accounts that each produce low-scoring credentials in round $r$ can selectively choose which ones to broadcast in order to influence the seed for round $r+1$. Indeed, the user can pre-compute their credentials for round $r+1$ for each potential seed, and broadcast only the credential (among those with low enough score to be leader) that produces the most favorable seed.
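A minimal sketch of these credential mechanics, with HMAC-SHA256 standing in for each account's digital signature (a deployed protocol would use a verifiable scheme such as a VRF so credentials are publicly checkable, and would weight scores by stake):

```python
import hashlib, hmac

# Sketch of cryptographic self-selection. HMAC-SHA256 stands in for each
# account's (unique, verifiable) digital signature; scores here ignore stake
# weighting for simplicity. Account names and keys are illustrative.

def credential(secret_key: bytes, seed: bytes) -> bytes:
    sig = hmac.new(secret_key, seed, hashlib.sha256).digest()  # "sign" the seed
    return hashlib.sha256(sig).digest()                        # hash the signature

def score(cred: bytes) -> float:
    return int.from_bytes(cred, "big") / 2 ** 256              # uniform in [0, 1)

seed = b"Q_r"
accounts = {name: name.encode() for name in ["alice", "bob", "carol"]}  # toy keys
creds = {name: credential(key, seed) for name, key in accounts.items()}
leader = min(creds, key=lambda name: score(creds[name]))
next_seed = creds[leader]  # the winning credential becomes the seed Q_{r+1}
print(leader, score(creds[leader]))
```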
We argue that the concentrated production and ownership of Bitcoin mining hardware arise naturally from the economic incentives of Bitcoin mining. We model Bitcoin mining as a two-stage competition; miners compete in prices to sell hardware while competing in quantities for mining rewards. We characterize equilibria in our model and show that small asymmetries in operational costs result in highly concentrated ownership of mining equipment. We further show that production of mining equipment will be dominated by the miner with the most efficient hardware, who will sell hardware to competitors while possibly also using it to mine.
We consider a revenue-maximizing seller with a single item for sale to multiple buyers with i.i.d. valuations. Akbarpour and Li (2020) show that the only optimal, credible, strategyproof auction is the ascending price auction with reserves which has unbounded communication complexity. Recent work of Ferreira and Weinberg (2020) circumvents their impossibility result assuming the existence of cryptographically secure commitment schemes, and designs a two-round credible, strategyproof, optimal auction. However, their auction is only credible when buyers' valuations are MHR or $\alpha$-strongly regular: they show their auction might not be credible even when there is a single buyer drawn from a non-MHR distribution. In this work, under the same cryptographic assumptions, we identify a new single-item auction that is credible, strategyproof, revenue optimal, and terminates in constant rounds in expectation for all distributions with finite monopoly price.
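The cryptographic ingredient assumed by this line of work is a commitment scheme. A standard hash-based commit/reveal construction is sketched below; the auction logic built on top of it is not shown, and SHA-256 with a random salt is a common instantiation rather than the paper's specification.

```python
import hashlib, os

# Standard hash-based commitment scheme (the primitive these credible-auction
# constructions assume). Commit: publish H(salt || message). Reveal: publish
# (salt, message); anyone can recompute the hash and check it.

def commit(message: bytes) -> tuple[bytes, bytes]:
    salt = os.urandom(32)                                   # random salt: hiding
    return hashlib.sha256(salt + message).digest(), salt    # (commitment, opening)

def verify(commitment: bytes, salt: bytes, message: bytes) -> bool:
    return hashlib.sha256(salt + message).digest() == commitment  # binding

c, salt = commit(b"bid: 7.5")
print(verify(c, salt, b"bid: 7.5"))  # True
print(verify(c, salt, b"bid: 9.0"))  # False: cannot open to a different bid
```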
We consider a revenue-maximizing seller with $k$ heterogeneous items for sale to a single additive buyer, whose values are drawn from a known, possibly correlated prior $\mathcal{D}$. It is known that there exist priors $\mathcal{D}$ such that simple mechanisms -- those with bounded menu complexity -- extract an arbitrarily small fraction of the optimal revenue. This paper considers the opposite direction: given a correlated distribution $\mathcal{D}$ witnessing an infinite separation between simple and optimal mechanisms, what can be said about $\mathcal{D}$? Previous work provides a framework for constructing such $\mathcal{D}$: it takes as input a sequence of $k$-dimensional vectors satisfying some geometric property, and produces a $\mathcal{D}$ witnessing an infinite gap. Our first main result establishes that this framework is without loss: every $\mathcal{D}$ witnessing an infinite separation could have resulted from this framework. Even earlier work provided a more streamlined framework. Our second main result establishes that this restrictive framework is not tight. That is, we provide an instance $\mathcal{D}$ witnessing an infinite gap, but which provably could not have resulted from the restrictive framework. As a corollary, we discover a new kind of mechanism which can witness these infinite separations on instances where the previous "aligned" mechanisms do not.
We consider prophet inequalities subject to feasibility constraints that are the intersection of $q$ matroids. The best-known algorithms achieve a $\Theta(q)$-approximation, even when restricted to instances that are the intersection of $q$ partition matroids, and with i.i.d. Bernoulli random variables. The previous best-known lower bound is $\Theta(\sqrt{q})$ due to a simple construction of [Kleinberg-Weinberg STOC 2012] (which uses i.i.d. Bernoulli random variables, and writes the construction as the intersection of partition matroids). We establish an improved lower bound of $q^{1/2+\Omega(1/\log \log q)}$ by writing the construction of [Kleinberg-Weinberg STOC 2012] as the intersection of asymptotically fewer partition matroids. We accomplish this via an improved upper bound on the product dimension of a graph with $p^p$ disjoint cliques of size $p$, using recent techniques developed in [Alon-Alweiss European Journal of Combinatorics 2020].
We consider the problem of query-efficient global max-cut on a weighted undirected graph in the value oracle model examined by [RSW18]. This model arises as a natural special case of submodular function maximization: on query $S \subseteq V$, the oracle returns the total weight of the cut between $S$ and $V \backslash S$. For most constants $c \in (0,1]$, we nail down the query complexity of achieving a $c$-approximation, for both deterministic and randomized algorithms (up to logarithmic factors). Analogously to general submodular function maximization in the same model, we observe a phase transition at $c = 1/2$: we design a deterministic algorithm for global $c$-approximate max-cut in $O(\log n)$ queries for any $c < 1/2$, and show that any randomized algorithm requires $\tilde{\Omega}(n)$ queries to find a $c$-approximate max-cut for any $c > 1/2$. Additionally, we show that any deterministic algorithm requires $\Omega(n^2)$ queries to find an exact max-cut (enough to learn the entire graph), and develop a $\tilde{O}(n)$-query randomized $c$-approximation for any $c < 1$. Our approach provides two technical contributions that may be of independent interest. One is a query-efficient sparsifier for undirected weighted graphs (prior work of [RSW18] holds only for unweighted graphs). Another is an extension of the cut dimension to rule out approximation (prior work of [GPRW20] introducing the cut dimension only rules out exact solutions).
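The $c < 1/2$ regime is anchored by a simple observation: a uniformly random subset cuts each edge with probability $1/2$, so its expected cut value is half the total edge weight, which is at least half the max cut. The sketch below implements this trivial randomized baseline in the value-oracle model (the oracle is simulated from an explicit weighted graph); it is not the paper's deterministic $O(\log n)$-query algorithm.

```python
import random

# Trivial randomized baseline behind the c < 1/2 regime: a uniformly random
# subset S cuts each edge with probability 1/2, so E[cut(S)] = total_weight/2
# >= maxcut/2. Repeating queries and keeping the best concentrates above
# (1/2 - eps) * maxcut. The graph and query budget below are illustrative.

def make_cut_oracle(weighted_edges):
    def oracle(S):  # value oracle: total weight of edges cut by S
        inside = set(S)
        return sum(w for u, v, w in weighted_edges if (u in inside) != (v in inside))
    return oracle

random.seed(0)
n = 20
edges = [(u, v, random.random()) for u in range(n) for v in range(u + 1, n)]
oracle = make_cut_oracle(edges)

best = max(oracle([v for v in range(n) if random.random() < 0.5]) for _ in range(50))
total = sum(w for _, _, w in edges)
print(f"best sampled cut: {best:.2f}, half of total weight: {total / 2:.2f}")
```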
Babaioff et al. [BIK2007] introduced the matroid secretary problem in 2007, a natural extension of the classic single-choice secretary problem to matroids, and conjectured that a constant-competitive online algorithm exists. The conjecture still remains open despite substantial partial progress, including constant-competitive algorithms for numerous special cases of matroids, and an $O(\log \log \text{rank})$-competitive algorithm in the general case. Many of these algorithms follow principled frameworks. The limits of these frameworks are previously unstudied, and prior work establishes only that a handful of particular algorithms cannot resolve the matroid secretary conjecture. We initiate the study of impossibility results for frameworks to resolve this conjecture. We establish impossibility results for a natural class of greedy algorithms and for randomized partition algorithms, both of which contain known algorithms that resolve special cases.
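For reference, the classic single-choice secretary rule that the matroid version generalizes: observe roughly a $1/e$ fraction of the (adversarial-value, random-order) stream, then accept the first element beating everything seen so far. A minimal sketch with a simulation of its roughly $1/e$ success probability:

```python
import math, random

# The classic 1/e single-choice secretary rule: observe the first n/e values,
# then accept the first value exceeding everything seen so far (take the last
# element if none does).

def secretary(values):
    n = len(values)
    cutoff = int(n / math.e)
    threshold = max(values[:cutoff], default=float("-inf"))
    for v in values[cutoff:-1]:
        if v > threshold:
            return v
    return values[-1]

random.seed(0)
trials, wins = 20_000, 0
for _ in range(trials):
    vals = random.sample(range(1000), 50)  # distinct values in random order
    if secretary(vals) == max(vals):
        wins += 1
print(f"picked the best in {wins / trials:.3f} of trials (about 1/e = 0.368)")
```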
All proper scoring rules incentivize an expert to predict \emph{accurately} (report their true estimate), but not all proper scoring rules equally incentivize \emph{precision}. Rather than treating the expert's belief as exogenously given, we consider a model where a rational expert can endogenously refine their belief by repeatedly paying a fixed cost, and is incentivized to do so by a proper scoring rule. Specifically, our expert aims to predict the probability that a biased coin flipped tomorrow will land heads, and can flip the coin any number of times today at a cost of $c$ per flip. Our first main result defines an \emph{incentivization index} for proper scoring rules, and proves that this index measures the expected error of the expert's estimate (where the number of flips today is chosen adaptively to maximize the predictor's expected payoff). Our second main result finds the unique scoring rule which optimizes the incentivization index over all proper scoring rules. We also consider extensions to minimizing the $\ell^{th}$ moment of error, and again provide an incentivization index and optimal proper scoring rule. In some cases, the resulting scoring rule is differentiable, but not infinitely differentiable. In these cases, we further prove that the optimum can be uniformly approximated by polynomial scoring rules. Finally, we compare common scoring rules via our measure, and include simulations confirming the relevance of our measure even in domains outside where it provably applies.
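A small simulation of the model, under illustrative parameters: the expert pays $c$ per flip, reports the empirical frequency, and is paid by the quadratic (Brier) rule, which here is a stand-in rather than the paper's optimal rule. Scanning the number of flips exposes the precision-versus-cost trade-off that the incentivization index quantifies (the paper's expert chooses the number of flips adaptively).

```python
import random

# Simulation of the coin-flipping model above: the expert pays c per flip,
# reports the empirical frequency q, and is paid by the quadratic (Brier)
# rule S(q, X) = 1 - (q - X)^2 on tomorrow's flip X. The Brier rule is a
# stand-in, not the paper's optimal rule; c and p are illustrative.

random.seed(0)
c, p, trials = 0.001, 0.7, 20_000  # cost per flip, true bias, Monte Carlo size

def expected_net_payoff(num_flips):
    total = 0.0
    for _ in range(trials):
        heads = sum(random.random() < p for _ in range(num_flips))
        q = heads / num_flips              # empirical estimate of the bias
        outcome = random.random() < p      # tomorrow's flip
        total += 1 - (q - outcome) ** 2 - c * num_flips
    return total / trials

for n in [1, 5, 10, 20, 50, 100]:
    print(f"{n:3d} flips: expected net payoff {expected_net_payoff(n):.4f}")
```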
Proof-of-Stake blockchains based on a longest-chain consensus protocol are an attractive energy-friendly alternative to the Proof-of-Work paradigm. However, formal barriers to "getting the incentives right" were recently discovered, driven by the desire to use the blockchain itself as a source of pseudorandomness. We consider instead a longest-chain Proof-of-Stake protocol with perfect, trusted, external randomness (e.g. a randomness beacon). We produce two main results. First, we show that a strategic miner can strictly outperform an honest miner with just 32.8% of the total stake. Note that a miner of this size cannot outperform an honest miner in the Proof-of-Work model. This establishes that even with access to a perfect randomness beacon, incentives in Proof-of-Work and Proof-of-Stake longest-chain protocols are fundamentally different. Second, we prove that a strategic miner cannot outperform an honest miner with 30.8% of the total stake. This means that, while not quite as secure as the Proof-of-Work regime, desirable incentive properties of Proof-of-Work longest-chain protocols can be approximately recovered via Proof-of-Stake with a perfect randomness beacon. The space of possible strategies in a Proof-of-Stake mining game is significantly richer than in a Proof-of-Work game. Our main technical contribution is a characterization of potentially optimal strategies for a strategic miner, and in particular a proof that the corresponding infinite-state MDP admits an optimal strategy that is positive recurrent.
We consider the problem of implementing a fixed social choice function between multiple players (which takes as input a type $t_i$ from each player $i$ and outputs an outcome $f(t_1,\ldots, t_n)$), in which each player must be incentivized to follow the protocol. In particular, we study the communication requirements of a protocol which: (a) implements $f$, (b) implements $f$ and computes payments that make it ex-post incentive compatible (EPIC) to follow the protocol, and (c) implements $f$ and computes payments in a way that makes it dominant-strategy incentive compatible (DSIC) to follow the protocol.
Designing an incentive compatible auction that maximizes expected revenue is a central problem in Auction Design. Theoretical approaches to the problem have hit some limits in the past decades and analytical solutions are known for only a few simple settings. Computational approaches to the problem through the use of LPs have their own set of limitations. Building on the success of deep learning, a new approach was recently proposed by Duetting et al. (2019) in which the auction is modeled by a feed-forward neural network and the design problem is framed as a learning problem. The neural architectures used in that work are general purpose and do not take advantage of any of the symmetries the problem could present, such as permutation equivariance. In this work, we consider auction design problems that have permutation-equivariant symmetry and construct a neural architecture that is capable of perfectly recovering the permutation-equivariant optimal mechanism, which we show is not possible with the previous architecture. We demonstrate that permutation-equivariant architectures are not only capable of recovering previous results, they also have better generalization properties.
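The core building block can be illustrated with a DeepSets-style layer: a shared per-row transform plus a term depending on permutation-invariant pooling, so permuting the bidders permutes the outputs identically. This is a minimal numpy sketch of the symmetry only; the paper's architecture is richer (equivariant over both bidders and items).

```python
import numpy as np

# Minimal sketch of a permutation-equivariant layer (DeepSets-style): a shared
# per-row transform plus a pooled (permutation-invariant) term. Permuting the
# input rows (bidders) permutes the output rows identically. Dimensions and
# weights are illustrative; the paper's full architecture is not shown.

rng = np.random.default_rng(0)
n_bidders, d_in, d_out = 4, 3, 5
W1 = rng.normal(size=(d_in, d_out))
W2 = rng.normal(size=(d_in, d_out))

def equivariant_layer(X):
    pooled = X.mean(axis=0, keepdims=True)  # invariant under row permutations
    return np.tanh(X @ W1 + pooled @ W2)    # shared weights give equivariance

X = rng.normal(size=(n_bidders, d_in))
perm = rng.permutation(n_bidders)
out1 = equivariant_layer(X)[perm]           # permute the outputs
out2 = equivariant_layer(X[perm])           # permute the inputs
print(np.allclose(out1, out2))              # True: the layer is equivariant
```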
Designing an incentive compatible auction that maximizes expected revenue is a central problem in Auction Design. While theoretical approaches to the problem have hit some limits, a recent research direction initiated by Duetting et al. (2019) consists in building neural network architectures to find optimal auctions. We propose two conceptual deviations from their approach which result in enhanced performance. First, we use recent results in theoretical auction design to introduce a time-independent Lagrangian. This not only circumvents the need for an expensive hyper-parameter search (as in prior work), but also provides a single metric to compare the performance of two auctions (absent from prior work). Second, the optimization procedure in previous work uses an inner maximization loop to compute optimal misreports. We amortize this process through the introduction of an additional neural network. We demonstrate the effectiveness of our approach by learning competitive or strictly improved auctions compared to prior work. Both results together further imply a novel formulation of Auction Design as a two-player game with stationary utility functions.
We consider the manipulability of tournament rules which map the results of $\binom{n}{2}$ pairwise matches and select a winner. Prior work designs simple tournament rules such that no pair of teams can manipulate the outcome of their match to improve their probability of winning by more than $1/3$, and this is the best possible among any Condorcet-consistent tournament rule (which selects an undefeated team whenever one exists) [Schneider et al., 2017, Schvartzman et al., 2020]. These lower bounds require the manipulators to know precisely the outcome of all future matches. We take a beyond worst-case view and instead consider tournaments which are "close to uniform": the outcome of all matches are independent, and no team is believed to win any match with probability exceeding $1/2+\varepsilon$. We show that Randomized Single Elimination Bracket [Schneider et al., 2017] and a new tournament rule we term Randomized Death Match have the property that no pair of teams can manipulate the outcome of their match to improve their probability of winning by more than $\varepsilon/3 + 2\varepsilon^2/3$, for all $\varepsilon$, and this is the best possible among any Condorcet-consistent tournament rule. Our main technical contribution is a recursive framework to analyze the manipulability of certain forms of tournament rules. In addition to our main results, this view helps streamline previous analysis of Randomized Single Elimination Bracket, and may be of independent interest.
We study the problem of repeatedly auctioning off an item to one of $k$ bidders where: a) bidders have a per-round individual rationality constraint, b) bidders may leave the mechanism at any point, and c) the bidders' valuations are adversarially chosen (the prior-free setting). Without these constraints, the auctioneer can run a second-price auction to "sell the business" and receive the second highest total value for the entire stream of items. We show that under these constraints, the auctioneer can attain a constant fraction of the "sell the business" benchmark, but no more than $2/e$ of this benchmark. In the course of doing so, we design mechanisms for a single bidder problem of independent interest: how should you repeatedly sell an item to a (per-round IR) buyer with adversarial valuations if you know their total value over all rounds is $V$ but not how their value changes over time? We demonstrate a mechanism that achieves revenue $V/e$ and show that this is tight.
Recent literature on approximately optimal revenue maximization has shown that in settings where agent valuations for items are complement-free, the better of selling the items separately and bundling them together guarantees a constant fraction of the optimal revenue. However, most real-world settings involve some degree of complementarity among items. The role that complementarity plays in the trade-off of simplicity versus optimality has been an obvious missing piece of the puzzle. In "A Simple and Approximately Optimal Mechanism for a Buyer with Complements," the authors show that the same simple selling mechanism—the better of selling separately and as a grand bundle—guarantees a $\Theta(1/d)$ fraction of the optimal revenue, where $d$ is a measure of the degree of complementarity. One key modeling contribution is a tractable notion of "degree of complementarity" that admits meaningful results and insights—they demonstrate that previous definitions fall short in this regard.
Consider the problem of implementing a revenue-optimal, Bayesian Incentive Compatible auction when buyers' values are drawn from distributions $\times_i D_i$ on a particular instance $\vec{v}$. Optimal single-dimensional mechanisms are local: in order to allocate the item correctly on a particular valuation profile $\vec{v}$, only $\tilde{O}(1)$ bits are needed from each player (specifically, their Myerson virtual value [Mye81]), rather than the entire distribution. We show that optimal multi-dimensional mechanisms are not local: in order to allocate the item correctly on a particular valuation profile $\vec{v}$, one still needs to know (essentially) the entire distribution. Specifically, if the distributions have support-size $n$, then $\Omega(n)$ bits are necessary from each bidder. We show that this phenomenon already occurs with just two bidders, even when one bidder is single-dimensional, and even when the other bidder is barely multi-dimensional. More specifically, the multi-dimensional bidder is inter-dimensional from the FedEx setting with just two days [FGKK16].
We provide the first separation in the approximation guarantee achievable by truthful and non-truthful combinatorial auctions with polynomial communication. Specifically, we prove that any truthful mechanism guaranteeing a $(\frac{3}{4}-\frac{1}{240}+\varepsilon)$-approximation for two buyers with XOS valuations over $m$ items requires $\exp(\Omega(\varepsilon^2 \cdot m))$ communication, whereas a non-truthful algorithm by Dobzinski and Schapira [SODA 2006] and Feige [2009] is already known to achieve a $\frac{3}{4}$-approximation in $\mathrm{poly}(m)$ communication. We obtain our separation by proving that any simultaneous protocol (not necessarily truthful) which guarantees a $(\frac{3}{4}-\frac{1}{240}+\varepsilon)$-approximation requires communication $\exp(\Omega(\varepsilon^2 \cdot m))$. The taxation complexity framework of Dobzinski [FOCS 2016] extends this lower bound to all truthful mechanisms (including interactive truthful mechanisms).
We consider a revenue-maximizing single seller with $m$ items for sale to a single buyer whose value $v(\cdot)$ for the items is drawn from a known distribution $D$ of support $k$. A series of works by Cai et al. establishes that when each $v(\cdot)$ in the support of $D$ is additive or unit-demand (or $c$-demand), the revenue-optimal auction can be found in $\operatorname{poly}(m,k)$ time. We show that going barely beyond this, even to matroid-based valuations (a proper subset of Gross Substitutes), results in strong hardness of approximation. Specifically, even on instances with $m$ items and $k \leq m$ valuations in the support of $D$, it is not possible to achieve a $1/m^{1-\varepsilon}$-approximation for any $\varepsilon>0$ to the revenue-optimal mechanism for matroid-based valuations in (randomized) poly-time unless NP $\subseteq$ RP (note that a $1/k$-approximation is trivial). Cai et al.'s main technical contribution is a black-box reduction from revenue maximization for valuations in class $\mathcal{V}$ to optimizing the difference between two values in class $\mathcal{V}$. Our main technical contribution is a black-box reduction in the other direction (for a wide class of valuation classes), establishing that their reduction is essentially tight.
We consider a revenue-maximizing single seller with $m$ items for sale to a single buyer whose value $v(\cdot)$ for the items is drawn from a known distribution $D$ of support $k$. A series of works by Cai et al. establishes that when each $v(\cdot)$ in the support of $D$ is additive or unit-demand (or $c$-demand), the revenue-optimal auction can be found in $\operatorname{poly}(m,k)$ time.
We consider the sale of a single item to multiple buyers by a revenue-maximizing seller. Recent work of Akbarpour and Li formalizes \emph{credibility} as an auction desideratum, and proves that the only optimal, credible, strategyproof auction is the ascending price auction with reserves (Akbarpour and Li, 2019). In contrast, when buyers' valuations are MHR, we show that the mild additional assumption of a cryptographically secure commitment scheme suffices for a simple \emph{two-round} auction which is optimal, strategyproof, and credible (even when the number of bidders is only known by the auctioneer). We extend our analysis to the case when buyer valuations are $\alpha$-strongly regular for any $\alpha > 0$, up to arbitrary $\varepsilon$ in credibility. Interestingly, we also prove that this construction cannot be extended to regular distributions, nor can the $\varepsilon$ be removed with multiple bidders.
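The commitment-scheme assumption can be illustrated with a toy commit-reveal flow. The sketch below is our own simplification, not the paper's construction: SHA-256 with a random salt stands in for the cryptographically secure commitment scheme, and the payment rule is a plain second-price auction with a reserve.

```python
# Toy two-round flow: bidders first publish hash commitments to their
# bids, then open them; a second-price rule with reserve is applied.
import hashlib
import os

def commit(bid: int):
    """Round 1: publish a binding, hiding commitment to the bid."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + bid.to_bytes(8, "big")).digest()
    return digest, salt  # digest is public; salt stays private until round 2

def verify(digest: bytes, salt: bytes, bid: int) -> bool:
    """Round 2: anyone can check an opened bid against its commitment."""
    return hashlib.sha256(salt + bid.to_bytes(8, "big")).digest() == digest

def second_price_with_reserve(bids, reserve):
    """Highest bid meeting the reserve wins; pays max(reserve, second bid)."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner = order[0]
    if bids[winner] < reserve:
        return None, 0
    runner_up = bids[order[1]] if len(bids) > 1 else 0
    return winner, max(reserve, runner_up)

if __name__ == "__main__":
    bids = [7, 12, 9]
    commitments = [commit(b) for b in bids]
    assert all(verify(d, s, b) for (d, s), b in zip(commitments, bids))
    print(second_price_with_reserve(bids, reserve=8))  # (1, 9)
```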
We consider optimal (revenue maximizing) mechanism design in the interdimensional setting, where one dimension is the 'value' of the buyer, and the other is a 'type' that captures some auxiliary information. A prototypical example of this is the FedEx Problem, for which Fiat et al. [2016] characterize the optimal mechanism for a single agent. Another example of this is when the type encodes the buyer's budget [DW17]. The question we address is how far such characterizations can go. In particular, we consider the setting of single-minded agents. A seller has heterogeneous items. A buyer has a valuation $v$ for a specific subset of items $S$, and obtains value $v$ if and only if he gets all the items in $S$ (and potentially some others too).
We consider a monopolist seller with n heterogeneous items, facing a single buyer. The buyer has a value for each item drawn independently according to (non-identical) distributions, and her value for a set of items is additive. The seller aims to maximize his revenue. We suggest using the a priori better of two simple pricing methods: selling the items separately, each at its optimal price, and bundling together, in which the entire set of items is sold as one bundle at its optimal price. We show that for any distribution, this mechanism achieves a constant-factor approximation to the optimal revenue. Beyond its simplicity, this is the first computationally tractable mechanism to obtain a constant-factor approximation for this multi-parameter problem. We additionally discuss extensions to multiple buyers and to valuations that are correlated across items.
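On small discrete instances, the "better of selling separately or bundling" rule can be brute-forced. The following sketch is ours, with a made-up two-item example; it computes the optimal separate item prices and the optimal grand-bundle price for an additive buyer with independent discrete item values.

```python
# Illustrative brute force of max(SREV, BREV) on a tiny discrete instance.
from itertools import product

def best_single_price(values_probs):
    """Optimal take-it-or-leave-it price for one good with discrete support."""
    best = 0.0
    for p, _ in values_probs:
        sell_prob = sum(q for v, q in values_probs if v >= p)
        best = max(best, p * sell_prob)
    return best

def srev(items):
    """Revenue from pricing each item separately at its optimal price."""
    return sum(best_single_price(item) for item in items)

def brev(items):
    """Revenue from selling the grand bundle at its optimal price."""
    bundle = {}  # distribution of the sum of independent item values
    for combo in product(*items):
        total = sum(v for v, _ in combo)
        prob = 1.0
        for _, q in combo:
            prob *= q
        bundle[total] = bundle.get(total, 0.0) + prob
    return best_single_price(list(bundle.items()))

if __name__ == "__main__":
    # Two i.i.d. items, each worth 1 or 2 with equal probability.
    items = [[(1, 0.5), (2, 0.5)], [(1, 0.5), (2, 0.5)]]
    print(srev(items), brev(items), max(srev(items), brev(items)))
```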
Designing an incentive compatible auction that maximizes expected revenue is a central problem in Auction Design. While theoretical approaches to the problem have hit some limits, a recent research direction initiated by Duetting et al. (2019) consists in building neural network architectures to find optimal auctions. We propose two conceptual deviations from their approach which result in enhanced performance. First, we use recent results in theoretical auction design (Rubinstein and Weinberg, 2018) to introduce a time-independent Lagrangian. This not only circumvents the need for an expensive hyper-parameter search (as in prior work), but also provides a principled metric to compare the performance of two auctions (absent from prior work). Second, the optimization procedure in previous work uses an inner maximization loop to compute optimal misreports. We amortize this process through the introduction of an additional neural network. We demonstrate the effectiveness of our approach by learning competitive or strictly improved auctions compared to prior work. Both results together further imply a novel formulation of Auction Design as a two-player game with stationary utility functions.
We prove the first separation in the approximation guarantee achievable by truthful and non-truthful combinatorial auctions with polynomial communication. Specifically, we prove that any truthful auction guaranteeing a $(\frac{3}{4}-\frac{1}{240}+\varepsilon)$-approximation for two buyers with XOS valuations over $m$ items requires $\exp(\Omega(\varepsilon^2 \cdot m))$ communication, whereas a non-truthful auction by Feige [J. Comput. 2009] is already known to achieve a $\frac{3}{4}$-approximation in $\mathrm{poly}(m)$ communication.
We study information aggregation in networks where agents make binary decisions (labeled incorrect or correct). Agents initially form independent private beliefs about the better decision, which is correct with probability $1/2+\delta$. The dynamics we consider are asynchronous (each round, a single agent updates their announced decision) and non-Bayesian (agents simply copy the majority announcements among their neighbors, tie-breaking in favor of their private signal). Our main result proves that when the network is a tree formed according to the preferential attachment model \cite{BarabasiA99}, with high probability, the process stabilizes in a correct majority within $O(n \log n/ \log\log n)$ rounds. We extend our results to other tree structures, including balanced $M$-ary trees for any $M$.
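The dynamics themselves are simple to simulate. The following sketch is ours (a small balanced binary tree rather than a preferential attachment tree, and arbitrary parameter choices), illustrating the asynchronous, non-Bayesian update rule described in the abstract.

```python
# Simulation sketch: asynchronous majority dynamics on a tree. Each round
# one agent re-announces the majority opinion among its neighbors,
# breaking ties toward its private signal, which is independently
# correct (= 1) with probability 1/2 + delta.
import random

def majority_dynamics(adj, delta=0.1, rounds=10_000, seed=0):
    rng = random.Random(seed)
    n = len(adj)
    signal = [1 if rng.random() < 0.5 + delta else 0 for _ in range(n)]
    announce = signal[:]  # initially, each agent announces its own signal
    for _ in range(rounds):
        i = rng.randrange(n)  # asynchronous: one agent updates per round
        ones = sum(announce[j] for j in adj[i])
        if 2 * ones > len(adj[i]):
            announce[i] = 1
        elif 2 * ones < len(adj[i]):
            announce[i] = 0
        else:
            announce[i] = signal[i]  # tie: fall back on the private signal
    return sum(announce) / n  # fraction announcing the correct decision

if __name__ == "__main__":
    # A balanced binary tree on 15 nodes, as an adjacency list.
    adj = {i: [] for i in range(15)}
    for i in range(1, 15):
        parent = (i - 1) // 2
        adj[i].append(parent)
        adj[parent].append(i)
    print(majority_dynamics(adj))
```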
We consider revenue-optimal mechanism design in the interdimensional setting, where one dimension is the 'value' of the buyer, and one is a 'type' that captures some auxiliary information. One setting is the FedEx Problem, for which FGKK [2016] characterize the optimal mechanism for a single agent. We ask: how far can such characterizations go? In particular, we consider single-minded agents. A seller has heterogeneous items. A buyer has a value v for a specific subset of items S, and obtains value v iff he gets (at least) all the items in S. We show: 1. Deterministic mechanisms are optimal for distributions that satisfy the declining marginal revenue (DMR) property; we give an explicit construction of the optimal mechanism. 2. Without DMR, the result depends on the structure of the directed acyclic graph (DAG) representing the partial order among types. When the DAG has out-degree at most 1, we characterize the optimal mechanism a la FedEx. 3. Without DMR, when the DAG has some node with out-degree at least 2, we show that in this case the menu complexity is unbounded: for any M, there exist distributions over (v,S) pairs such that the menu complexity of the optimal mechanism is at least M. 4. For the case of 3 types, we show that for all distributions there exists an optimal mechanism of finite menu complexity. This is in contrast to 2 additive heterogeneous items, for which the menu complexity could be uncountable [MV07; DDT15]. In addition, we prove that optimal mechanisms for Multi-Unit Pricing (without DMR) can have unbounded menu complexity. We also propose an extension where the menu complexity of optimal mechanisms can be countable but not uncountable. Together these results establish that optimal mechanisms in interdimensional settings are both much richer than single-dimensional settings, yet also vastly more structured than multi-dimensional settings.

Commonly Cited References

Algorithmic pricing is the computational problem that sellers (e.g., in supermarkets) face when trying to set prices for their items to maximize their profit in the presence of a known demand. Guruswami et al. (SODA, 2005) proposed this problem and gave logarithmic approximations (in the number of consumers) for the unit-demand and single-parameter cases where there is a specific set of consumers and their valuations for bundles are known precisely. Subsequently several versions of the problem have been shown to have poly-logarithmic inapproximability. This problem has direct ties to the important open question of better understanding the Bayesian optimal mechanism in multi-parameter agent settings; however, for this purpose approximation factors logarithmic in the number of agents are inadequate. It is therefore of vital interest to consider special cases where constant approximations are possible. We consider the unit-demand variant of this pricing problem. Here a consumer has a valuation for each different item and their value for a set of items is simply the maximum value they have for any item in the set. Instead of considering a set of consumers with precisely known preferences, like the prior algorithmic pricing literature, we assume that the preferences of the consumers are drawn from a distribution. This is the standard assumption in economics; furthermore, the setting of a specific set of customers with specific preferences, which is employed in all of the prior work in algorithmic pricing, is a special case of this general Bayesian pricing problem, where there is a discrete Bayesian distribution for preferences specified by picking one consumer uniformly from the given set of consumers. Notice that the distribution over the valuations for the individual items that this generates is obviously correlated. Our work complements these existing works by considering the case where the consumer's valuations for the different items are independent random variables. Our main result is a constant approximation algorithm for this problem that makes use of an interesting connection between this problem and the concept of virtual valuations from the single-parameter Bayesian optimal mechanism design literature.
We study the revenue maximization problem of a seller with n heterogeneous items for sale to a single buyer whose valuation function for sets of items is unknown and drawn from some distribution D. We show that if D is a distribution over subadditive valuations with independent items, then the better of pricing each item separately or pricing only the grand bundle achieves a constant-factor approximation to the revenue of the optimal mechanism. This includes buyers who are k-demand, additive up to a matroid constraint, or additive up to constraints of any downwards-closed set system (and whose values for the individual items are sampled independently), as well as buyers who are fractionally subadditive with item multipliers drawn independently. Our proof makes use of the core-tail decomposition framework developed in prior work showing similar results for the significantly simpler class of additive buyers [Li and Yao 2013; Babaioff et al. 2014].
We consider a monopolist seller with n heterogeneous items, facing a single buyer. The buyer has a value for each item drawn independently according to (non-identical) distributions, and his value for a set of items is additive. The seller aims to maximize his revenue. It is known that an optimal mechanism in this setting may be quite complex, requiring randomization [19] and menus of infinite size [15]. Hart and Nisan [17] have initiated a study of two very simple pricing schemes for this setting: item pricing, in which each item is priced at its monopoly reserve; and bundle pricing, in which the entire set of items is priced and sold as one bundle. Hart and Nisan [17] have shown that neither scheme can guarantee more than a vanishingly small fraction of the optimal revenue. In sharp contrast, we show that for any distributions, the better of item and bundle pricing is a constant-factor approximation to the optimal revenue. We further discuss extensions to multiple buyers and to valuations that are correlated across items.
We provide a unified view of many recent developments in Bayesian mechanism design, including the black-box reductions of Cai et al., simple auctions for additive buyers, and posted-price mechanisms for unit-demand buyers. Additionally, we show that viewing these three previously disjoint lines of work through the same lens leads to new developments as well. First, we provide a duality framework for Bayesian mechanism design, which naturally accommodates multiple agents and arbitrary objectives/feasibility constraints. Using this, we prove that either a posted-price mechanism or the VCG auction with per-bidder entry fees achieves a constant-factor of the optimal Bayesian IC revenue whenever buyers are unit-demand or additive, unifying previous breakthroughs of Chawla et al. and Yao, and improving both approximation ratios (from 33.75 to 24 and 69 to 8). Finally, we show that this view also leads to improved structural characterizations in the Cai et al. framework.
We provide simple and approximately revenue-optimal mechanisms in the multi-item multi-bidder settings. We unify and improve all previous results, as well as generalize the results to broader cases. In particular, we prove that the better of the following two simple, deterministic and Dominant Strategy Incentive Compatible mechanisms, a sequential posted price mechanism or an anonymous sequential posted price mechanism with entry fee, achieves a constant fraction of the optimal revenue among all randomized, Bayesian Incentive Compatible mechanisms, when buyers' valuations are XOS over independent items. If the buyers' valuations are subadditive over independent items, the approximation factor degrades to $O(\log m)$, where $m$ is the number of items. We obtain our results by first extending the Cai-Devanur-Weinberg duality framework to derive an effective benchmark of the optimal revenue for subadditive bidders, and then analyzing this upper bound with new techniques.
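For intuition, the mechanics of a sequential posted price mechanism are easy to state. The sketch below is ours and is heavily simplified: unit-demand buyers and arbitrary placeholder prices, whereas the paper derives its prices from a duality benchmark and handles XOS/subadditive valuations and entry fees.

```python
# Illustrative sequential posted prices: buyers arrive in order and take
# a remaining item maximizing their utility, if that utility is >= 0.
def sequential_posted_price(buyers, prices):
    """buyers: list of dicts item -> value (unit-demand for simplicity);
    prices: dict item -> posted price."""
    remaining = set(prices)
    revenue = 0.0
    allocation = {}
    for bidder, valuation in enumerate(buyers):
        best = max(remaining,
                   key=lambda j: valuation.get(j, 0) - prices[j],
                   default=None)
        if best is not None and valuation.get(best, 0) >= prices[best]:
            allocation[bidder] = best
            revenue += prices[best]
            remaining.remove(best)
    return allocation, revenue

if __name__ == "__main__":
    buyers = [{"a": 3, "b": 1}, {"a": 2, "b": 2}]
    prices = {"a": 2, "b": 1}
    print(sequential_posted_price(buyers, prices))  # ({0: 'a', 1: 'b'}, 3.0)
```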
We characterize optimal mechanisms for the multiple-good monopoly problem and provide a framework to find them. We show that a mechanism is optimal if and only if a measure µ derived from the buyer's type distribution satisfies certain stochastic dominance conditions. This measure expresses the marginal change in the seller's revenue under marginal changes in the rent paid to subsets of buyer types. As a corollary, we characterize the optimality of grand-bundling mechanisms, strengthening several results in the literature, where only sufficient optimality conditions have been derived. As an application, we show that the optimal mechanism for n independent uniform items each supported on [c, c + 1] is a grand-bundling mechanism, as long as c is sufficiently large, extending Pavlov's result for 2 items [Pav11]. At the same time, our characterization also implies that, for all c and for all sufficiently large n, the optimal mechanism for n independent uniform items supported on [c, c + 1] is not a grand bundling mechanism.
We provide a computationally efficient black-box reduction from mechanism design to algorithm design in very general settings. Specifically, we give an approximation-preserving reduction from truthfully maximizing any objective under arbitrary feasibility constraints with arbitrary bidder types to (not necessarily truthfully) maximizing the same objective plus virtual welfare (under the same feasibility constraints). Our reduction is based on a fundamentally new approach: we describe a mechanism's behavior indirectly only in terms of the expected value it awards bidders for certain behavior, and never directly access the allocation rule at all. Applying our new approach to revenue, we exhibit settings where our reduction holds both ways. That is, we also provide an approximation-sensitive reduction from (non-truthfully) maximizing virtual welfare to (truthfully) maximizing revenue, and therefore the two problems are computationally equivalent. With this equivalence in hand, we show that both problems are NP-hard to approximate within any polynomial factor, even for a single monotone submodular bidder. We further demonstrate the applicability of our reduction by providing a truthful mechanism maximizing fractional max-min fairness.
Consider a gambler who observes a sequence of independent, non-negative random numbers and is allowed to stop the sequence at any time, claiming a reward equal to the most recent observation. The famous prophet inequality of Krengel, Sucheston, and Garling asserts that a gambler who knows the distribution of each random variable can achieve at least half as much reward, in expectation, as a "prophet" who knows the sampled values of each random variable and can choose the largest one. We generalize this result to the setting in which the gambler and the prophet are allowed to make more than one selection, subject to a matroid constraint. We show that the gambler can still achieve at least half as much reward as the prophet; this result is the best possible, since it is known that the ratio cannot be improved even in the original prophet inequality, which corresponds to the special case of rank-one matroids. Generalizing the result still further, we show that under an intersection of $p$ matroid constraints, the prophet's reward exceeds the gambler's by a factor of at most $O(p)$, and this factor is also tight.
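The rank-one (single-choice) case is easy to see empirically. The sketch below is ours: it uses the classic threshold $\tau = \mathbb{E}[\max]/2$, which is one standard way to achieve the factor-$1/2$ guarantee in the single-choice setting (the matroid generalization in the paper is substantially more involved); the distributions are arbitrary examples.

```python
# Monte Carlo sketch of the single-choice prophet inequality with the
# threshold rule tau = E[max] / 2: accept the first value >= tau.
import random

def simulate(dists, trials=100_000, seed=1):
    rng = random.Random(seed)
    samples = [[d(rng) for d in dists] for _ in range(trials)]
    prophet = sum(max(s) for s in samples) / trials
    threshold = prophet / 2  # estimate of E[max] / 2
    gambler = sum(next((x for x in s if x >= threshold), 0.0)
                  for s in samples) / trials
    return prophet, gambler

if __name__ == "__main__":
    # Three independent uniform values on [0, 1], [0, 2], [0, 3].
    dists = [lambda r: r.uniform(0, 1),
             lambda r: r.uniform(0, 2),
             lambda r: r.uniform(0, 3)]
    prophet, gambler = simulate(dists)
    print(prophet, gambler, gambler / prophet)  # ratio should be >= 0.5
```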
The principal problem in algorithmic mechanism design is in merging the incentive constraints imposed by selfish behavior with the algorithmic constraints imposed by computational intractability. This field is motivated by the observation that the preeminent approach for designing incentive compatible mechanisms, namely that of Vickrey, Clarke, and Groves, and the central approach for circumventing computational obstacles, that of approximation algorithms, are fundamentally incompatible: natural applications of the VCG approach to an approximation algorithm fail to yield an incentive compatible mechanism. We consider relaxing the desideratum of (ex post) incentive compatibility (IC) to Bayesian incentive compatibility (BIC), where truthtelling is a Bayes-Nash equilibrium (the standard notion of incentive compatibility in economics). For welfare maximization in single-parameter agent settings, we give a general black-box reduction that turns any approximation algorithm into a Bayesian incentive compatible mechanism with essentially the same approximation factor.
For Bayesian combinatorial auctions, we present a general framework for approximately reducing the mechanism design problem for multiple buyers to single buyer sub-problems. Our framework can be applied to any setting which roughly satisfies the following assumptions: (i) buyers' types must be distributed independently (not necessarily identically), (ii) the objective function must be linearly separable over the buyers, and (iii) except for the supply constraints, there should be no other inter-buyer constraints. Our framework is general in the sense that it makes no explicit assumption about buyers' valuations, type distributions, and single buyer constraints (e.g., budget, incentive compatibility, etc). We present two generic multi buyer mechanisms which use single buyer mechanisms as black boxes; if an $\alpha$-approximate single buyer mechanism can be constructed for each buyer, and if no buyer requires more than $\frac{1}{k}$ of all units of each item, then our generic multi buyer mechanisms are a $\gamma_k\alpha$-approximation of the optimal multi buyer mechanism, where $\gamma_k$ is a constant which is at least $1-\frac{1}{\sqrt{k+3}}$. Observe that $\gamma_k$ is at least 1/2 (for $k=1$) and approaches 1 as $k \to \infty$. As a byproduct of our construction, we present a generalization of prophet inequalities. Furthermore, as applications of our framework, we present multi buyer mechanisms with improved approximation factor for several settings from the literature.
We investigate the power of randomness in the context of a fundamental Bayesian optimal mechanism design problem - a single seller aims to maximize expected revenue by allocating multiple kinds of resources to "unit-demand" agents with preferences drawn from a known distribution. When the agents' preferences are single-dimensional, Myerson's seminal work [14] shows that randomness offers no benefit - the optimal mechanism is always deterministic. In the multi-dimensional case, where each agent's preferences are given by different values for each of the available services, Briest et al. [6] recently showed that the gap between the expected revenue obtained by an optimal randomized mechanism and an optimal deterministic mechanism can be unbounded even when a single agent is offered only 4 services. However, this large gap is attained through unnatural instances where values of the agent for different services are correlated in a specific way. We show that when the agent's values involve no correlation or a specific kind of positive correlation, the benefit of randomness is only a small constant factor (4 and 8 respectively). Our model of positively correlated values (that we call the common base value model) is a natural model for unit-demand agents and items that are substitutes. Our results extend to multiple agent settings as well.
We consider the problem of maximizing revenue for a monopolist offering multiple items to multiple heterogeneous buyers. We develop a simple mechanism that obtains a constant factor approximation under the assumption that the buyers' values are additive subject to a matroid feasibility constraint and independent across items. Importantly, different buyers in our setting can have different constraints on the sets of items they desire. Our mechanism is a sequential variant of two-part tariffs. Prior to our work, simple approximation mechanisms for such multi-buyer problems were known only for the special cases of all unit-demand or all additive value buyers.
Myerson's classic result provides a full description of how a seller can maximize revenue when selling a single item. We address the question of revenue maximization in the simplest possible multi-item setting: two items and a single buyer who has independently distributed values for the items, and an additive valuation. In general, the revenue achievable from selling two independent items may be strictly higher than the sum of the revenues obtainable by selling each of them separately. In fact, the structure of optimal (i.e., revenue-maximizing) mechanisms for two items even in this simple setting is not understood.
We provide a reduction from revenue maximization to welfare maximization in multidimensional Bayesian auctions with arbitrary (possibly combinatorial) feasibility constraints and independent bidders with arbitrary (possibly combinatorial) demand constraints, appropriately extending Myerson's single-dimensional result [21] to this setting. We also show that every feasible Bayesian auction - including in particular the revenue-optimal one - can be implemented as a distribution over virtual VCG allocation rules. A virtual VCG allocation rule has the following simple form: Every bidder's type $t_i$ is transformed into a virtual type $f_i(t_i)$, via a bidder-specific function. Then, the allocation maximizing virtual welfare is chosen. Using this characterization, we show how to find and run the revenue-optimal auction given only black-box access to an implementation of the VCG allocation rule. We generalize this result to arbitrarily correlated bidders, introducing the notion of a second-order VCG allocation rule. Our results are computationally efficient for all multidimensional settings where the bidders are additive, or can be efficiently mapped to be additive, albeit the feasibility and demand constraints may still remain arbitrary combinatorial. In this case, our mechanisms run in time polynomial in the number of items and the total number of bidder types, but not type profiles. This is polynomial in the number of items, the number of bidders, and the cardinality of the support of each bidder's value distribution. For generic correlated distributions, this is the natural description complexity of the problem. The runtime can be further improved to polynomial in only the number of items and the number of bidders in item-symmetric settings by making use of techniques from [15].
In this paper, we present the first approximation algorithms for the problem of designing revenue optimal Bayesian incentive compatible auctions when there are multiple (heterogeneous) items and when bidders have arbitrary demand and budget constraints (and additive valuations). Our mechanisms are surprisingly simple: We show that a sequential all-pay mechanism is a 4-approximation to the revenue of the optimal ex-interim truthful mechanism with a discrete type space for each bidder, where her valuations for different items can be correlated. We also show that a sequential posted price mechanism is an $O(1)$-approximation to the revenue of the optimal ex-post truthful mechanism when the type space of each bidder is a product distribution that satisfies the standard hazard rate condition. We further show a logarithmic approximation when the hazard rate condition is removed, and complete the picture by showing that achieving a sub-logarithmic approximation, even for regular distributions and one bidder, requires pricing bundles of items. Our results are based on formulating novel LP relaxations for these problems, and developing generic rounding schemes from first principles.
One of the fundamental questions of Algorithmic Mechanism Design is whether there exists an inherent clash between truthfulness and computational tractability: in particular, whether polynomial-time truthful mechanisms for combinatorial auctions are provably weaker in terms of approximation ratio than non-truthful ones. This question was very recently answered for universally truthful mechanisms for combinatorial auctions [4], and even for truthful-in-expectation mechanisms [12]. However, both of these results are based on information-theoretic arguments for valuations given by a value oracle, and leave open the possibility of polynomial-time truthful mechanisms for succinctly described classes of valuations.
We consider a monopolist that is selling n items to a single additive buyer, where the buyer's values for the items are drawn according to independent distributions $F_1, F_2, \ldots, F_n$ that possibly have unbounded support. It is well known that - unlike in the single item case - the revenue-optimal auction (a pricing scheme) may be complex, sometimes requiring a continuum of menu entries. It is also known that simple auctions with a finite bounded number of menu entries can extract a constant fraction of the optimal revenue. Nonetheless, the question of the possibility of extracting an arbitrarily high fraction of the optimal revenue via a finite menu size remained open.
We show that every universally truthful randomized mechanism for combinatorial auctions with submodular valuations that provides an approximation ratio of $m^{1/2-\varepsilon}$ must use exponentially many value queries, where $m$ is the number of items. In contrast, ignoring incentives there exist constant ratio approximation algorithms for this problem. Our approach is based on a novel direct hardness technique that completely skips the notoriously hard step of characterizing truthful mechanisms. The characterization step was the main obstacle for proving impossibility results in algorithmic mechanism design so far. We demonstrate two additional applications of our new technique: (1) an impossibility result for universally-truthful polynomial time flexible combinatorial public projects and (2) an impossibility result for truthful-in-expectation mechanisms for exact combinatorial public projects. The latter is the first result that bounds the power of polynomial-time truthful in expectation mechanisms in any setting.
Algorithmic mechanism design (AMD) studies the delicate interplay between computational efficiency, truthfulness, and optimality. We focus on AMD's paradigmatic problem: combinatorial auctions. We present a new generalization of the VC dimension to multivalued collections of functions, which encompasses the classical VC dimension, Natarajan dimension, and Steele dimension. We present a corresponding generalization of the Sauer-Shelah Lemma and harness this VC machinery to establish inapproximability results for deterministic truthful mechanisms. Our results essentially unify all inapproximability results for deterministic truthful mechanisms for combinatorial auctions to date and establish new separation gaps between truthful and non-truthful algorithms.
We consider the problem of designing a revenue-maximizing auction for a single item, when the values of the bidders are drawn from a correlated distribution. We observe that there exists an algorithm that finds the optimal randomized mechanism that runs in time polynomial in the size of the support. We leverage this result to show that in the oracle model introduced by Ronen and Saberi [FOCS'02], there exists a polynomial time truthful in expectation mechanism that provides a $(1.5+\varepsilon)$-approximation to the revenue achievable by an optimal truthful-in-expectation mechanism, and a polynomial time deterministic truthful mechanism that guarantees a 5/3-approximation to the revenue achievable by an optimal deterministic truthful mechanism.
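The "LP over the support" idea is concrete enough to sketch. The code below is our own simplified single-buyer analogue (one additive buyer, two items, explicit discrete and possibly correlated types), not the paper's multi-bidder construction, which additionally tracks interim allocations; it shows the shape of the IC/IR linear program whose size is polynomial in the support.

```python
# LP sketch: revenue-optimal randomized menu for one additive buyer over
# two items with an explicit discrete type distribution. Variables per
# type t: allocation probabilities x1[t], x2[t] and a payment p[t].
import numpy as np
from scipy.optimize import linprog

def optimal_menu(types):
    """types: list of ((v1, v2), prob) pairs."""
    k = len(types)
    nvar = 3 * k  # layout: x1[0..k-1], x2[0..k-1], p[0..k-1]
    c = np.zeros(nvar)
    for t, (_, f) in enumerate(types):
        c[2 * k + t] = -f  # linprog minimizes, so negate expected payment
    A, b = [], []
    for i, ((v1, v2), _) in enumerate(types):
        for j in range(k):
            if i == j:
                continue
            row = np.zeros(nvar)  # IC: truthful utility >= misreport utility
            row[i], row[k + i], row[2 * k + i] = -v1, -v2, 1.0
            row[j], row[k + j], row[2 * k + j] = v1, v2, -1.0
            A.append(row); b.append(0.0)
        row = np.zeros(nvar)  # IR: truthful utility >= 0
        row[i], row[k + i], row[2 * k + i] = -v1, -v2, 1.0
        A.append(row); b.append(0.0)
    bounds = [(0, 1)] * (2 * k) + [(None, None)] * k
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=bounds)
    return -res.fun, res.x

if __name__ == "__main__":
    types = [((1, 1), 0.5), ((2, 3), 0.5)]  # made-up correlated support
    revenue, _ = optimal_menu(types)
    print(revenue)
```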
We study an abstract optimal auction problem for selecting a subset of self-interested agents to whom to provide a service. A feasibility constraint governs which subsets can be simultaneously served; however, the mechanism may additionally choose to bundle unconstrained attributes such as payments or add-ons with the service. An agent's preference over service and attributes is given by her private type and may be multi-dimensional and non-linear. A single-agent problem is to optimize a menu to offer an agent subject to constraints on the probabilities with which each of the agent's types is served. We give computationally tractable reductions from multi-agent auction problems to these single-agent problems. Our discussion focuses on maximizing revenue, but our results can be applied to other objectives (e.g., welfare).
We provide a Polynomial Time Approximation Scheme for the multi-dimensional unit-demand pricing problem, when the buyer's values are independent (but not necessarily identically distributed). For all $\varepsilon > 0$, we obtain a $(1+\varepsilon)$-factor approximation to the optimal revenue in time polynomial, when the values are sampled from Monotone Hazard Rate (MHR) distributions, quasi-polynomial, when sampled from regular distributions, and polynomial in $n^{\mathrm{poly}(\log r)}$ when sampled from general distributions supported on a set $[u_{\min}, r u_{\min}]$. We also provide an additive PTAS for all bounded distributions. Our algorithms are based on novel extreme value theorems for MHR and regular distributions, and apply probabilistic techniques to understand the statistical properties of revenue distributions, as well as to reduce the size of the search space of the algorithm. As a byproduct of our techniques, we establish structural properties of optimal solutions. We show that, for all $\varepsilon > 0$, $g(1/\varepsilon)$ distinct prices suffice to obtain a $(1+\varepsilon)$-factor approximation to the optimal revenue for MHR distributions, where $g(1/\varepsilon)$ is a quasi-linear function of $1/\varepsilon$ that does not depend on the number of items. Similarly, for all $\varepsilon > 0$ and $n > 0$, $g(1/\varepsilon \cdot \log n)$ distinct prices suffice for regular distributions, where $n$ is the number of items and $g(\cdot)$ is a polynomial function. Finally, in the i.i.d. MHR case, we show that, as long as the number of items is a sufficiently large function of $1/\varepsilon$, a single price suffices to achieve a $(1+\varepsilon)$-factor approximation. Our results represent significant progress to the single-bidder case of the multidimensional optimal mechanism design problem, following Myerson's celebrated work on optimal mechanism design [Myerson 1981].
For Bayesian combinatorial auctions, we present a general framework for approximately reducing the mechanism design problem for multiple buyers to the mechanism design problem for each individual buyer. Our framework can be applied to any setting which roughly satisfies the following assumptions: (i) the buyers' types must be distributed independently (not necessarily identically), (ii) the objective function must be linearly separable over the set of buyers, and (iii) the supply constraints must be the only constraints involving more than one buyer. Our framework is general in the sense that it makes no explicit assumption about any of the following: (i) the buyers' valuations (e.g., submodular, additive, etc.), (ii) the distribution of types for each buyer, and (iii) the other constraints involving individual buyers (e.g., budget constraints, etc.). We present two generic $n$-buyer mechanisms that use 1-buyer mechanisms as black boxes. Assuming that we have an $\alpha$-approximate 1-buyer mechanism for each buyer and assuming that no buyer ever needs more than $1/k$ of all copies of each item for some integer $k \geq 1$, then our generic $n$-buyer mechanisms are a $\gamma_k \cdot \alpha$-approximation of the optimal $n$-buyer mechanism, in which $\gamma_k$ is a constant which is at least $1 - 1/\sqrt{k+3}$. Observe that $\gamma_k$ is at least $1/2$ (for $k = 1$) and approaches 1 as $k$ increases. As a byproduct of our construction, we improve a generalization of prophet inequalities. Furthermore, as applications of our main theorem, we improve several results from the literature.
It was recently shown in [12] that revenue optimization can be computationally efficiently reduced to welfare optimization in all multi-dimensional Bayesian auction problems with arbitrary (possibly combinatorial) feasibility constraints and independent additive bidders with arbitrary (possibly combinatorial) demand constraints. This reduction provides a poly-time solution to the optimal mechanism design problem in all auction settings where welfare optimization can be solved efficiently, but it is fragile to approximation and cannot provide solutions to settings where welfare maximization can only be tractably approximated. In this paper, we extend the reduction to accommodate approximation algorithms, providing an approximation preserving reduction from (truthful) revenue maximization to (not necessarily truthful) welfare maximization. The mechanisms output by our reduction choose allocations via black-box calls to welfare approximation on randomly selected inputs, thereby generalizing also our earlier structural results on optimal multi-dimensional mechanisms to approximately optimal mechanisms. Unlike [12], our results here are obtained through novel uses of the Ellipsoid algorithm and other optimization techniques over non-convex regions.
Myerson's seminal work provides a computationally efficient revenue-optimal auction for selling one item to multiple bidders [18]. Generalizing this work to selling multiple items at once has been a central question in economics and algorithmic game theory, but its complexity has remained poorly understood. We answer this question by showing that a revenue-optimal auction in multi-item settings cannot be found and implemented computationally efficiently, unless ZPP $\supseteq$ P$^{\#P}$. This is true even for a single additive bidder whose values for the items are independently distributed on two rational numbers with rational probabilities. Our result is very general: we show that it is hard to compute any encoding of an optimal auction of any format (direct or indirect, truthful or non-truthful) that can be implemented in expected polynomial time. In particular, under well-believed complexity-theoretic assumptions, revenue-optimization in very simple multi-item settings can only be tractably approximated. We note that our hardness result applies to randomized mechanisms in a very simple setting, and is not an artifact of introducing combinatorial structure to the problem by allowing correlation among item values, introducing combinatorial valuations, or requiring the mechanism to be deterministic (whose structure is readily combinatorial). Our proof is enabled by a flow-interpretation of the solutions of an exponential-size linear program for revenue maximization with an additional supermodularity constraint.
We provide a reduction from revenue maximization to welfare maximization in multi-dimensional Bayesian auctions with arbitrary (possibly combinatorial) feasibility constraints and independent bidders with arbitrary (possibly combinatorial) demand constraints, appropriately extending Myerson's result to this setting. We also show that every feasible Bayesian auction can be implemented as a distribution over virtual VCG allocation rules. A virtual VCG allocation rule has the following simple form: Every bidder's type $t_i$ is transformed into a virtual type $f_i(t_i)$, via a bidder-specific function. Then, the allocation maximizing virtual welfare is chosen. Using this characterization, we show how to find and run the revenue-optimal auction given only black box access to an implementation of the VCG allocation rule. We generalize this result to arbitrarily correlated bidders, introducing the notion of a second-order VCG allocation rule. We obtain our reduction from revenue to welfare optimization via two algorithmic results on reduced forms in settings with arbitrary feasibility and demand constraints. First, we provide a separation oracle for determining feasibility of a reduced form. Second, we provide a geometric algorithm to decompose any feasible reduced form into a distribution over virtual VCG allocation rules. In addition, we show how to execute both algorithms given only black box access to an implementation of the VCG allocation rule. Our results are computationally efficient for all multi-dimensional settings where the bidders are additive. In this case, our mechanisms run in time polynomial in the total number of bidder types, but not type profiles. For generic correlated distributions, this is the natural description complexity of the problem. The runtime can be further improved to poly(#items, #bidders) in item-symmetric settings by making use of recent techniques.
We consider a revenue optimizing seller selling a single item to a buyer, on whose private value the seller has a noisy signal. We show that, when the signal is kept private, arbitrarily more revenue could potentially be extracted than if the signal is leaked or revealed. We then show that, if the seller is not allowed to make payments to the buyer and if the value distribution conditioning on each signal is regular, the gap between the two is bounded by a multiplicative factor of 3. We give examples showing that both conditions are necessary for a constant bound on the gap to hold. We connect this scenario to multi-bidder single-item auctions where bidders' values are correlated. Similarly to the setting above, we show that the revenue of a Bayesian incentive compatible, ex post individually rational auction can be arbitrarily larger than that of a dominant strategy incentive compatible auction, whereas the two are no more than a factor of 5 apart if the auctioneer never pays the bidders and if the distribution is jointly regular. The upper bounds in both settings degrade gracefully when the distribution is a mixture of a small number of regular distributions.
We provide a computationally efficient black-box reduction from mechanism design to algorithm design in very general settings. Specifically, we give an approximation-preserving reduction from truthfully maximizing \emph{any} objective under \emph{arbitrary} feasibility constraints with \emph{arbitrary} bidder types to (not necessarily truthfully) maximizing the same objective plus virtual welfare (under the same feasibility constraints). Our reduction is based on a fundamentally new approach: we describe a mechanism's behavior indirectly only in terms of the expected value it awards bidders for certain behavior, and never directly access the allocation rule at all. Applying our new approach to revenue, we exhibit settings where our reduction holds \emph{both ways}. That is, we also provide an approximation-sensitive reduction from (non-truthfully) maximizing virtual welfare to (truthfully) maximizing revenue, and therefore the two problems are computationally equivalent. With this equivalence in hand, we show that both problems are NP-hard to approximate within any polynomial factor, even for a single monotone submodular bidder. We further demonstrate the applicability of our reduction by providing a truthful mechanism maximizing fractional max-min fairness. This is the first instance of a truthful mechanism that optimizes a non-linear objective.
Optimal mechanisms have been provided in quite general multi-item settings [Cai et al. 2012b], as long as each bidder's type distribution is given explicitly by listing every type in the support along with its associated probability. In the implicit setting, e.g. when the bidders have additive valuations with independent and/or continuous values for the items, these results do not apply, and it was recently shown that exact revenue optimization is intractable, even when there is only one bidder [Daskalakis et al. 2013]. Even for item distributions with special structure, optimal mechanisms have been surprisingly rare [Manelli and Vincent 2006] and the problem is challenging even in the two-item case [Hart and Nisan 2012]. In this paper, we provide a framework for designing optimal mechanisms using optimal transport theory and duality theory. We instantiate our framework to obtain conditions under which only pricing the grand bundle is optimal in multi-item settings (complementing the work of [Manelli and Vincent 2006]), as well as to characterize optimal two-item mechanisms. We use our results to derive closed-form descriptions of the optimal mechanism in several two-item settings, exhibiting also a setting where a continuum of lotteries is necessary for revenue optimization but a closed-form representation of the mechanism can still be found efficiently using our framework.
We show that the multiplicative weight update method provides a simple recipe for designing and analyzing optimal Bayesian Incentive Compatible (BIC) auctions, and reduces the time complexity of the problem to pseudo-polynomial in parameters that depend on a single agent instead of depending on the size of the joint type space. We use this framework to design computationally efficient optimal auctions that satisfy ex-post Individual Rationality in the presence of constraints such as (hard, private) budgets and envy-freeness. We also design optimal auctions when buyers and a seller's utility functions are non-linear. Scenarios with such functions include (a) auctions with "quitting rights", (b) cost to borrow money beyond budget, (c) a seller's and buyers' risk aversion. Finally, we show how our framework also yields optimal auctions for a variety of auction settings considered in Alaei et al. and Cai et al., albeit with pseudo-polynomial running times.
In the design and analysis of revenue-maximizing auctions, auction performance is typically measured with respect to a prior distribution over inputs. The most obvious source for such a distribution is past data. The goal of this paper is to understand how much data is necessary and sufficient to guarantee near-optimal expected revenue.
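The simplest instance of the question is easy to set up. The sketch below is ours: for a single buyer, it picks the posted price maximizing empirical revenue on samples (it suffices to check prices at the sample values); how many samples are needed for this to be near-optimal is exactly the sample-complexity question the paper studies.

```python
# Empirical revenue maximization from samples: the empirical revenue of
# price p on samples v_1..v_N is p * |{i : v_i >= p}| / N.
def empirical_reserve(samples):
    best_price, best_rev = 0.0, 0.0
    n = len(samples)
    for p in sorted(set(samples)):
        rev = p * sum(1 for v in samples if v >= p) / n
        if rev > best_rev:
            best_price, best_rev = p, rev
    return best_price, best_rev

if __name__ == "__main__":
    import random
    rng = random.Random(0)
    samples = [rng.uniform(0, 1) for _ in range(10_000)]
    # For U[0,1] the true optimal posted price is 1/2 with revenue 1/4.
    print(empirical_reserve(samples))
```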
We study a central problem in Algorithmic Mechanism Design: constructing truthful mechanisms for welfare maximization in combinatorial auctions with submodular bidders. Dobzinski, Nisan, and Schapira provided the first mechanism that guarantees a non-trivial approximation ratio of $O(\log^2 m)$ [STOC'06], where $m$ is the number of items. This was subsequently improved to $O(\log m \log \log m)$ [Dobzinski, APPROX'07] and then to $O(\log m)$ [Krysta and Vocking, ICALP'12]. In this paper we develop the first mechanism that breaks the logarithmic barrier. Specifically, the mechanism provides an approximation ratio of $O(\sqrt{\log m})$. Similarly to previous constructions, our mechanism uses polynomially many value and demand queries, and in fact provides the same approximation ratio for the larger class of XOS (a.k.a. fractionally subadditive) valuations. We also develop a computationally efficient implementation of the mechanism for combinatorial auctions with budget additive bidders. Although in general computing a demand query is NP-hard for budget additive valuations, we observe that the specific form of demand queries that our mechanism uses can be efficiently computed when bidders are budget additive.
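For readers unfamiliar with the query model, the two oracle types are easy to state. The sketch below is ours and uses additive valuations, the one class where both queries are trivial (for general submodular or budget-additive valuations, answering a demand query is hard, as the abstract above notes).

```python
# The two standard oracles in combinatorial auctions, instantiated for
# an additive valuation v(S) = sum of per-item values.
def value_query_additive(values, bundle):
    """Value query: report v(S) for a given bundle S."""
    return sum(values[j] for j in bundle)

def demand_query_additive(values, prices):
    """Demand query: a utility-maximizing bundle at item prices p.
    For additive v, simply take every item priced below its value."""
    return {j for j in values if values[j] > prices[j]}

if __name__ == "__main__":
    values = {"a": 4, "b": 1, "c": 3}
    prices = {"a": 2, "b": 2, "c": 1}
    print(demand_query_additive(values, prices))  # {'a', 'c'}
```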
The question of the minimum menu-size for approximate (i.e., up-to-$\varepsilon$) Bayesian revenue maximization when selling two goods to an additive risk-neutral quasilinear buyer was introduced by Hart and Nisan [2013], who give an upper bound of $O(1/\varepsilon^4)$ for this problem. Using the optimal-transport duality framework of Daskalakis, Deckelbaum, and Tzamos [2013, 2015], we derive the first lower bound for this problem, of $\Omega(1/\sqrt[4]{\varepsilon})$, even when the values for the two goods are drawn i.i.d. from "nice" distributions, establishing how to reason about approximately optimal mechanisms via this duality framework. This bound implies, for any fixed number of goods, a tight bound of $\Theta(\log(1/\varepsilon))$ on the minimum deterministic communication complexity guaranteed to suffice for running some approximately revenue-maximizing mechanism, thereby completely resolving this problem. As a secondary result, we show that under standard economic assumptions on distributions, the above upper bound of Hart and Nisan [2013] can be strengthened to $O(1/\varepsilon^2)$.
We derive exact optimal solutions for the problem of optimizing revenue in single-bidder multi-item auctions for uniform i.i.d. valuations. We give optimal auctions of up to 6 items; previous results were only known for up to three items. To do so, we develop a general duality framework for the general problem of maximizing revenue in many-bidders multi-item additive Bayesian auctions with continuous probability valuation distributions. The framework extends linear programming duality and complementarity to constraints with partial derivatives. The dual system reveals the geometric nature of the problem and highlights its connection with the theory of bipartite graph matchings. The duality framework is used not only for proving optimality, but perhaps more importantly, for deriving the optimal auction; as a result, the optimal auction is defined by natural geometric constraints.
We present a polynomial-time algorithm that, given samples from the unknown valuation distribution of each bidder, learns an auction that approximately maximizes the auctioneer's revenue in a variety of single-parameter auction environments including matroid environments, position environments, and the public project environment. The valuation distributions may be arbitrary bounded distributions (in particular, they may be irregular, and may differ for the various bidders), thus resolving a problem left open by previous papers. The analysis uses basic tools, is performed in its entirety in value-space, and simplifies the analysis of previously known results for special cases. Furthermore, the analysis extends to certain single-parameter auction environments where precise revenue maximization is known to be intractable, such as knapsack environments.
We study the strategic considerations of miners participating in the Bitcoin protocol. We formulate and study the stochastic game that underlies these strategic considerations. The miners collectively build a tree of blocks, and they are paid when they create a node (mine a block) that ends up on the path of the tree that is adopted by all. Since the miners can hide newly mined nodes, they play a game with incomplete information. Here we consider two simplified forms of this game in which the miners have complete information. In the simplest game the miners release every mined block immediately, but are strategic about which blocks to mine. In the second, more complicated game, when a block is mined it is announced immediately, but it may not be released, so that other miners cannot continue mining from it. A miner not only decides which blocks to mine, but also when to release blocks to other miners. In both games, we show that when the computational power of each miner is relatively small, their best response matches the behavior expected by the Bitcoin designer. However, when the computational power of a miner is large, he deviates from the expected behavior, and other Nash equilibria arise.
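For intuition, here is a minimal sketch of the baseline dynamics in the first game under the default strategy "release immediately and extend the longest chain" (a toy model we supply, not the paper's analysis): every mined block then lands on the adopted path, so reward shares simply track computational power, and the strategic question is when deviating from this default pays.

```python
import random

def default_mining_shares(powers, rounds=100_000):
    """Toy baseline: if every miner releases immediately and extends the
    longest chain, each mined block joins the adopted path, so miner i's
    reward share converges to its computational power powers[i]."""
    wins = [0] * len(powers)
    for _ in range(rounds):
        r, acc = random.random(), 0.0
        for i, p in enumerate(powers):  # winner of this block ~ powers
            acc += p
            if r < acc:
                wins[i] += 1
                break
    return [w / rounds for w in wins]

print(default_mining_shares([0.6, 0.3, 0.1]))  # roughly [0.6, 0.3, 0.1]
```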
We consider revenue-optimal mechanism design for the case of one buyer and two items, where the buyer's valuations for the two items are independent and additive. In this setting, the optimal mechanism is unknown for general valuation distributions. We obtain two categories of structural results that shed light on the optimal mechanisms; they can be summarized in one conclusion: under certain conditions, the optimal mechanisms have simple menus.
The Competition Complexity of an auction measures how much competition is needed for the revenue of a simple auction to surpass the optimal revenue. A classic result from auction theory by Bulow and Klemperer [11] states that the Competition Complexity of VCG, in the case of n i.i.d. buyers and a single item, is 1. In other words, it is better to invest in recruiting one extra buyer and run a second price auction than to invest in learning exactly the buyers' underlying distribution and run the revenue-maximizing auction tailored to this distribution. In this paper we study the Competition Complexity of dynamic auctions. Consider the following problem: a monopolist is auctioning off m items in m consecutive stages to n interested buyers. A buyer realizes her value for item k at the beginning of stage k. How many additional buyers are necessary and sufficient for a second price auction at each stage to extract revenue at least that of the optimal dynamic auction? We prove that the Competition Complexity of dynamic auctions is at most 3n, and at least linear in n, even when the buyers' values are correlated across stages, under a monotone hazard rate assumption on the stage (marginal) distributions. This assumption can be relaxed if one settles for independent stages. We also prove results on the number of additional buyers necessary for VCG at every stage to be an α-approximation of the optimal revenue; we term this number the α-approximate Competition Complexity. For example, under the same mild assumptions on the stage distributions, we prove that one extra buyer suffices for a -approximation. As a corollary we provide the first results on prior-independent dynamic auctions. These are, to the best of our knowledge, the first nontrivial positive guarantees for simple ex-post IR dynamic auctions with correlated stages. A key step towards proving bounds on the Competition Complexity is obtaining a good benchmark (upper bound) on the optimal revenue. To this end, we extend the recent duality framework of [14] to dynamic settings. As an aside to our approach, we obtain a revenue non-monotonicity lemma for dynamic auctions, which may be of independent interest.
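For intuition on the classical single-item statement, here is a standard textbook illustration we add (U[0,1] values assumed for concreteness): Myerson's optimal auction with one U[0,1] bidder posts price 1/2 and earns 1/4, while a second-price auction with one extra bidder earns the expected second-highest of two uniforms, 1/3.

```python
import random

def second_price_revenue(n_bidders, trials=200_000):
    """Expected revenue of a second-price auction with i.i.d. U[0,1] bidders."""
    total = 0.0
    for _ in range(trials):
        bids = sorted(random.random() for _ in range(n_bidders))
        total += bids[-2]  # the second-highest bid is the price paid
    return total / trials

# Myerson's optimal auction with one U[0,1] bidder posts price 1/2, earning 1/4.
# Recruiting one extra bidder and running second price earns ~1/3 >= 1/4,
# matching Bulow-Klemperer with n = 1.
print(second_price_revenue(2))
```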
Traditionally, the Bayesian optimal auction design problem has been considered either when the bidder values are i.i.d., or when each bidder is individually identifiable via her value distribution. The latter is a reasonable approach when the bidders can be classified into a few categories, but there are many instances where the classification of bidders is a continuum. For example, the classification of the bidders may be based on their annual income, their propensity to buy an item based on past behavior, or, in the case of ad auctions, the click-through rate of their ads. We introduce an alternate model that captures this aspect, where bidders are a priori identical but can be distinguished based (only) on some side information the auctioneer obtains at the time of the auction. We extend the sample complexity approach of Dhangwatnotai et al. and Cole and Roughgarden to this model and obtain almost matching upper and lower bounds. As an aside, we obtain a revenue monotonicity lemma which may be of independent interest. We also show how to use Empirical Risk Minimization techniques to improve the sample complexity bound of Cole and Roughgarden for the non-identical but independent value distribution case.
The intuition that profit is optimized by maximizing marginal revenue is a guiding principle in microeconomics. In classical auction theory for agents with quasi-linear utility and single-dimensional preferences, BR89 show that the optimal auction of M81 is in fact optimizing marginal revenue. In particular, Myerson's virtual values are exactly the derivative of an appropriate revenue curve. This paper considers mechanism design in environments where the agents have multi-dimensional and non-linear preferences. Understanding good auctions for these environments is considered to be the main challenge in Bayesian optimal mechanism design. In these environments maximizing marginal revenue may not be optimal, and furthermore, there is sometimes no direct way to implement the marginal revenue maximization mechanism. Our contributions are threefold: we characterize the settings for which marginal revenue maximization is optimal (by identifying an important condition that we call revenue linearity), we give simple procedures for implementing marginal revenue maximization in general, and we show that marginal revenue maximization is approximately optimal. Our approximation factor smoothly degrades in a term that quantifies how far the environment is from an ideal one (i.e., one where marginal revenue maximization is optimal). Because the marginal revenue mechanism is optimal for well-studied single-dimensional agents, our generalization immediately extends many approximation results for single-dimensional agents to more general preferences. Finally, one of the biggest open questions in Bayesian algorithmic mechanism design is developing methodologies that are not brute-force in the size of the agent type space (usually exponential in the dimension for multi-dimensional agents). Our methods identify a subproblem that, e.g., for unit-demand agents with values drawn from product distributions, enables approximation mechanisms that are polynomial in the dimension.
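For reference, the virtual value alluded to above is the standard one (textbook background we add here, not a restatement of the paper's results):

```latex
% Myerson's virtual value for a value v with cdf F and density f:
\[
  \varphi(v) \;=\; v - \frac{1 - F(v)}{f(v)}.
\]
% Worked example: for v ~ U[0,1], \varphi(v) = 2v - 1, so the optimal reserve
% solves \varphi(v) = 0 at v = 1/2. In quantile space the revenue curve is
% R(q) = q(1 - q), and R'(q) = 1 - 2q = \varphi(1 - q): the virtual value is
% exactly the derivative of the revenue curve, as BR89 observe.
```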
We consider the following communication problem: Alice and Bob each have valuation functions v1(·) and v2(·) over subsets of m items, and their goal is to partition the items into S and S̄ in a way that maximizes the welfare v1(S) + v2(S̄). We study both the allocation problem, which asks for a welfare-maximizing partition, and the decision problem, which asks whether or not there exists a partition guaranteeing certain welfare, for binary XOS valuations. For interactive protocols with poly(m) communication, a tight 3/4-approximation is known for both [29, 23]. For interactive protocols, the allocation problem is provably harder than the decision problem: any solution to the allocation problem implies a solution to the decision problem with one additional round and log m additional bits of communication via a trivial reduction. Surprisingly, the allocation problem is provably easier for simultaneous protocols. Specifically, we show: • There exists a simultaneous, randomized protocol with polynomial communication that selects a partition whose expected welfare is at least 3/4 of the optimum. This matches the guarantee of the best interactive, randomized protocol with polynomial communication. • For all ε > 0, any simultaneous, randomized protocol that decides whether the welfare of the optimal partition is ≥ 1 or ≤ 3/4 − 1/108 + ε correctly with probability > 1/2 + 1/poly(m) requires exponential communication. This provides a separation between the attainable approximation guarantees via interactive (3/4) versus simultaneous (≤ 3/4 − 1/108) protocols with polynomial communication. In other words, the trivial reduction from decision to allocation problems provably requires the extra round of communication. We further discuss the implications of our results for the design of truthful combinatorial auctions in general, and extensions to general XOS valuations. In particular, our protocol for the allocation problem implies a new style of truthful mechanisms.
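Since the abstract uses "binary XOS" without defining it, a small sketch may help (the clause representation below is our illustration): an XOS valuation is a maximum over additive clauses, and a binary XOS valuation restricts all clause weights to {0, 1}.

```python
def xos_value(clauses, bundle):
    """Evaluate an XOS valuation: the max, over additive clauses, of the
    clause's total weight on the bundle. `clauses` is a list of
    {item: weight} dicts; *binary* XOS restricts all weights to {0, 1}."""
    return max(sum(w for item, w in clause.items() if item in bundle)
               for clause in clauses)

# Binary XOS over items {a, b, c} with two clauses:
clauses = [{"a": 1, "b": 1}, {"b": 1, "c": 1}]
print(xos_value(clauses, {"a", "b", "c"}))  # 2
print(xos_value(clauses, {"a", "c"}))       # 1
```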
The security of most existing cryptocurrencies is based on a concept called Proof-of-Work, in which users must solve a computationally hard cryptopuzzle to authorize transactions ("one unit of computation, one vote"). This leads to enormous expenditure on hardware and electricity in order to collect the rewards associated with transaction authorization. Proof-of-Stake is an alternative concept that instead selects users to authorize transactions in proportion to their wealth ("one coin, one vote"). Some aspects of the two paradigms are the same. For instance, obtaining voting power in Proof-of-Stake has a monetary cost just as in Proof-of-Work: a coin cannot be freely duplicated any more easily than a unit of computation. However, some aspects are fundamentally different. In particular, exactly because Proof-of-Stake is wasteless, there is no inherent resource cost to deviating (commonly referred to as the "Nothing-at-Stake" problem).
We consider a revenue-maximizing seller with m heterogeneous items and a single buyer whose valuation v for the items may exhibit both substitutes (i.e., for some S, T, v(S ∪ T) < v(S) + v(T)) and complements (i.e., for some S, T, v(S ∪ T) > v(S) + v(T)). We show that the mechanism first proposed by Babaioff et al. [2014], the better of selling the items separately and bundling them together, guarantees a Θ(d) fraction of the optimal revenue, where d is a measure of the degree of complementarity. Note that this is the first approximately optimal mechanism for a buyer whose valuation exhibits any kind of complementarity. It extends the work of Rubinstein and Weinberg [2015], which proved that the same simple mechanisms achieve a constant-factor approximation when buyer valuations are subadditive, the most general class of complement-free valuations.
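To see the two simple mechanisms in action on a toy instance (two additive i.i.d. U[0,1] values, our choice for illustration, with no complementarity), a quick Monte Carlo compares selling separately against the grand bundle; here bundling is slightly better, and the paper's guarantee concerns how the better of the two fares against the unrestricted optimum.

```python
import random

def separate_revenue(price, trials=200_000):
    """Post `price` on each of two i.i.d. U[0,1] items, sold separately."""
    total = 0.0
    for _ in range(trials):
        v1, v2 = random.random(), random.random()
        total += price * ((v1 >= price) + (v2 >= price))
    return total / trials

def bundle_revenue(price, trials=200_000):
    """Post `price` on the grand bundle; an additive buyer pays iff v1 + v2 >= price."""
    total = 0.0
    for _ in range(trials):
        if random.random() + random.random() >= price:
            total += price
    return total / trials

# Selling separately at 1/2 earns ~0.50; the grand bundle at sqrt(2/3) ~ 0.816
# earns ~0.54, so "the better of the two" here is bundling.
print(separate_revenue(0.5), bundle_revenue((2 / 3) ** 0.5))
```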