
We consider a monopolist seller with n heterogeneous items, facing a single buyer. The buyer has a value for each item drawn independently according to (non-identical) distributions, and his value for a set of items is additive. The seller aims to maximize his revenue. It is known that an optimal mechanism in this setting may be quite complex, requiring randomization [19] and menus of infinite size [15]. Hart and Nisan [17] have initiated a study of two very simple pricing schemes for this setting: item pricing, in which each item is priced at its monopoly reserve; and bundle pricing, in which the entire set of items is priced and sold as one bundle. Hart and Nisan [17] have shown that neither scheme can guarantee more than a vanishingly small fraction of the optimal revenue. In sharp contrast, we show that for any distributions, the better of item and bundle pricing is a constant-factor approximation to the optimal revenue. We further discuss extensions to multiple buyers and to valuations that are correlated across items.
We consider the problem of designing revenue-maximizing online posted-price mechanisms when the seller has limited supply. A seller has k identical items for sale and is facing n potential buyers ("agents") that are arriving sequentially. Each agent is interested in buying one item. Each agent's value for an item is an independent sample from some fixed (but unknown) distribution with support [0,1]. The seller offers a take-it-or-leave-it price to each arriving agent (possibly different for different agents), and aims to maximize his expected revenue.
We consider a multi-round auction setting motivated by pay-per-click auctions for Internet advertising. In each round the auctioneer selects an advertiser and shows her ad, which is then either clicked or not. An advertiser derives value from clicks; the value of a click is her private information. Initially, neither the auctioneer nor the advertisers have any information about the likelihood of clicks on the advertisements. The auctioneer's goal is to design a (dominant strategies) truthful mechanism that (approximately) maximizes the social welfare.
It is widely believed that computing payments needed to induce truthful bidding is somehow harder than simply computing the allocation. We show that the opposite is true for single-parameter domains: creating a randomized truthful mechanism is essentially as easy as a single call to a monotone allocation function. Our main result is a general procedure to take a monotone allocation rule and transform it (via a black-box reduction) into a randomized mechanism that is truthful in expectation and individually rational for every realization. Moreover, the mechanism implements the same outcome as the original allocation rule with probability arbitrarily close to 1, and requires evaluating that allocation rule only once.
The buying and selling of information is taking place at a scale unprecedented in the history of commerce, thanks to the formation of online marketplaces for user data. Data providing agencies sell user information to advertisers to allow them to match ads to viewers more effectively. In this paper we study the design of optimal mechanisms for a monopolistic data provider to sell information to a buyer, in a model where both parties have (possibly correlated) private signals about a state of the world, and the buyer uses information learned from the seller, along with his own signal, to choose an action (e.g., displaying an ad) whose payoff depends on the state of the world.
With the recent technological feasibility of electronic commerce over the Internet, much attention has been given to the design of electronic markets for various types of electronically-tradable goods. Such markets, however, will normally need to function in some relationship with markets for other related goods, usually those downstream or upstream in the supply chain. Thus, for example, an electronic market for rubber tires for trucks will likely need to be strongly influenced by the rubber market as well as by the truck market. In this paper we design protocols for exchange of information between a sequence of markets along a single supply chain. These protocols allow each of these markets to function separately, while the information exchanged ensures efficient global behavior across the supply chain. Each market that forms a link in the supply chain operates as a double auction, where the bids on one side of the double auction come from bidders in the corresponding segment of the industry, and the bids on the other side are synthetically generated by the protocol to express the combined information from all other links in the chain. The double auctions in each of the markets can be of several types, and we study several variants of incentive compatible double auctions, comparing them in terms of their efficiency and of the market revenue.
We consider the problem of designing revenue-maximizing online posted-price mechanisms when the seller has limited supply. A seller has k identical items for sale and is facing n potential buyers ("agents") that are arriving sequentially. Each agent is interested in buying one item. Each agent's value for an item is an independent sample from some fixed (but unknown) distribution with support [0,1]. The seller offers a take-it-or-leave-it price to each arriving agent (possibly different for different agents), and aims to maximize his expected revenue. We focus on mechanisms that do not use any information about the distribution; such mechanisms are called detail-free (or prior-independent). They are desirable because knowing the distribution is unrealistic in many practical scenarios. We study how the revenue of such mechanisms compares to the revenue of the optimal offline mechanism that knows the distribution ("offline benchmark"). We present a detail-free online posted-price mechanism whose revenue is at most O((k log n)^{2/3}) less than the offline benchmark, for every distribution that is regular. In fact, this guarantee holds without any assumptions if the benchmark is relaxed to fixed-price mechanisms. Further, we prove a matching lower bound. The performance guarantee for the same mechanism can be improved to O(√(k log n)), with a distribution-dependent constant, if the ratio k/n is sufficiently small. We show that, in the worst case over all demand distributions, this is essentially the best rate that can be obtained with a distribution-specific constant. On a technical level, we exploit the connection to multi-armed bandits (MAB). While dynamic pricing with unlimited supply can easily be seen as an MAB problem, the intuition behind MAB approaches breaks when applied to the setting with limited supply. Our high-level conceptual contribution is that even the limited supply setting can be fruitfully treated as a bandit problem.
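The unlimited-supply case maps directly onto bandits: treat each candidate price as an arm and run a standard index policy such as UCB1 on per-round revenue. A minimal sketch of that reduction, where the price grid, horizon, and buyer value distribution are all assumptions chosen for illustration (the paper's limited-supply mechanism is more involved):

```python
# Dynamic pricing with unlimited supply as a multi-armed bandit:
# each candidate price is an arm; the reward for posting price p to a
# buyer with value v is p if v >= p (a sale) and 0 otherwise.
import math, random

def ucb_pricing(prices, n_rounds, sample_value):
    counts = [0] * len(prices)     # times each price was posted
    revenue = [0.0] * len(prices)  # total revenue per price
    total = 0.0
    for t in range(1, n_rounds + 1):
        if t <= len(prices):
            arm = t - 1  # post each price once to initialize
        else:
            # UCB1 index: empirical mean plus confidence radius
            arm = max(range(len(prices)), key=lambda a: revenue[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        p = prices[arm]
        reward = p if sample_value() >= p else 0.0  # take-it-or-leave-it offer
        counts[arm] += 1
        revenue[arm] += reward
        total += reward
    return total

random.seed(0)
print(ucb_pricing([i / 10 for i in range(1, 10)], 10_000, random.random))
```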
We consider a monopolist that is selling n items to a single additive buyer, where the buyer's values for the items are drawn according to independent distributions F_1, F_2, …, F_n that possibly have unbounded support. It is well known that, unlike in the single-item case, the revenue-optimal auction (a pricing scheme) may be complex, sometimes requiring a continuum of menu entries. It is also known that simple auctions with a finite bounded number of menu entries can extract a constant fraction of the optimal revenue. Nonetheless, the question of the possibility of extracting an arbitrarily high fraction of the optimal revenue via a finite menu size remained open.
Central results in economics guarantee the existence of efficient equilibria for various classes of markets. An underlying assumption in early work is that agents are price-takers, i.e., agents honestly report their true demand in response to prices. A line of research in economics, initiated by Hurwicz (1972), is devoted to understanding how such markets perform when agents are strategic about their demands. This is captured by the Walrasian Mechanism that proceeds by collecting reported demands, finding clearing prices in the reported market via an ascending price tatonnement procedure, and returns the resulting allocation. Similar mechanisms are used, for example, in the daily opening of the New York Stock Exchange and the call market for copper and gold in London.
We consider a multiround auction setting motivated by pay-per-click auctions for Internet advertising. In each round the auctioneer selects an advertiser and shows her ad, which is then either clicked or not. An advertiser derives value from clicks; the value of a click is her private information. Initially, neither the auctioneer nor the advertisers have any information about the likelihood of clicks on the advertisements. The auctioneer's goal is to design a (dominant strategies) truthful mechanism that (approximately) maximizes the social welfare. If the advertisers bid their true private values, our problem is equivalent to the multi-armed bandit problem, and thus can be viewed as a strategic version of the latter. In particular, for both problems the quality of an algorithm can be characterized by regret, the difference in social welfare between the algorithm and the benchmark which always selects the same "best" advertisement. We investigate how the design of multi-armed bandit algorithms is affected by the restriction that the resulting mechanism must be truthful. We find that deterministic truthful mechanisms have certain strong structural properties---essentially, they must separate exploration from exploitation---and they incur much higher regret than the optimal multi-armed bandit algorithms. Moreover, we provide a truthful mechanism which (essentially) matches our lower bound on regret.
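The structural result says that deterministic truthful bandit mechanisms must separate exploration from exploitation. A toy, non-strategic illustration of that structure (payments are omitted, and the round counts and click model are assumptions; the paper's mechanism specifies both):

```python
# Explore-then-exploit structure: a fixed exploration phase estimates each
# ad's click rate; the remaining rounds exploit the empirically best ad.
import random

def explore_then_exploit(click_probs, values, horizon):
    k = len(click_probs)
    explore_rounds = max(1, int((horizon / k) ** (2 / 3)))  # per ad
    clicks = [0] * k
    welfare = 0.0
    for ad in range(k):                    # exploration: fixed schedule,
        for _ in range(explore_rounds):    # independent of observed clicks
            if random.random() < click_probs[ad]:
                clicks[ad] += 1
                welfare += values[ad]
    # exploitation: highest empirical value-per-impression wins every round
    best = max(range(k), key=lambda a: clicks[a] * values[a])
    for _ in range(horizon - k * explore_rounds):
        if random.random() < click_probs[best]:
            welfare += values[best]
    return welfare

random.seed(1)
print(explore_then_exploit([0.2, 0.5, 0.4], [1.0, 0.6, 0.9], 100_000))
```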
We consider the problem of allocating a set of indivisible items to players with private preferences in an efficient and fair way. We focus on valuations that have dichotomous marginals, in which the added value of any item to a set is either 0 or 1, and aim to design truthful allocation mechanisms (without money) that maximize welfare and are fair. For the case that players have submodular valuations with dichotomous marginals, we design such a deterministic truthful allocation mechanism. The allocation output by our mechanism is Lorenz dominating, and consequently satisfies many desired fairness properties, such as being envy-free up to any item (EFX), and maximizing the Nash Social Welfare (NSW). We then show that our mechanism with random priorities is envy-free ex-ante, while having all the above properties ex-post. Furthermore, we present several impossibility results precluding similar results for the larger class of XOS valuations.
Complements between goods -- where one good takes on added value in the presence of another -- have been a thorn in the side of algorithmic mechanism designers. On the one hand, complements are common in the standard motivating applications for combinatorial auctions, like spectrum license auctions. On the other, welfare maximization in the presence of complements is notoriously difficult, and this intractability has stymied theoretical progress in the area. For example, there are no known positive results for combinatorial auctions in which bidder valuations are multi-parameter and non-complement-free, other than the relatively weak results known for general valuations.
Competitive equilibrium from equal incomes (CEEI) is a classic solution to the problem of fair and efficient allocation of goods (Foley 1967, Varian 1974). Every agent receives an equal budget of artificial currency with which to purchase goods, and prices match demand and supply. However, a CEEI is not guaranteed to exist when the goods are indivisible even in the simple two-agent, single-item market. Yet it is easy to see that, once the two budgets are slightly perturbed (made generic), a competitive equilibrium does exist. In this paper, we aim to extend this approach beyond the single-item case and study the existence of equilibria in markets with two agents and additive preferences over multiple items. We show that, for agents with equal budgets, making the budgets generic—by adding vanishingly small random perturbations—ensures the existence of equilibrium. We further consider agents with arbitrary nonequal budgets, representing nonequal entitlements for goods. We show that competitive equilibrium guarantees a new notion of fairness among nonequal agents and that it exists in cases of interest (such as when the agents have identical preferences) if budgets are perturbed. Our results open opportunities for future research on generic equilibrium existence and fair treatment of nonequals.
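The single-item example can be verified mechanically: with equal budgets no price clears the market, while any generic perturbation restores an equilibrium. A small sketch that scans a price grid (the budgets and grid resolution are illustrative assumptions):

```python
# Two agents, one item, budgets b1 and b2: a competitive equilibrium is a
# price p at which exactly one agent demands the item (demand = supply).
# With equal budgets no such p exists; a tiny perturbation creates one.
def equilibrium_prices(b1, b2, grid=300):
    eq = []
    for i in range(grid + 1):
        p = 3.0 * i / grid  # scan prices in [0, 3]
        demand = (b1 >= p) + (b2 >= p)  # agents who can afford the item
        if demand == 1:
            eq.append(round(p, 4))
    return eq

print(equilibrium_prices(1.0, 1.0))   # [] -- no market-clearing price
print(equilibrium_prices(1.0, 1.01))  # clearing prices just above 1.0
```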
Cloud computing has reached significant maturity from a systems perspective, but currently deployed solutions rely on rather basic economics mechanisms that yield suboptimal allocation of the costly hardware resources. In this paper we present Economic Resource Allocation (ERA), a complete framework for scheduling and pricing cloud resources, aimed at increasing the efficiency of cloud resources usage by allocating resources according to economic principles. The ERA architecture carefully abstracts the underlying cloud infrastructure, enabling the development of scheduling and pricing algorithms independently of the concrete lower-level cloud infrastructure and independently of its concerns. Specifically, ERA is designed as a flexible layer that can sit on top of any cloud system and interfaces with both the cloud resource manager and with the users who reserve resources to run their jobs. The jobs are scheduled based on prices that are dynamically calculated according to the predicted demand. Additionally, ERA provides a key internal API to pluggable algorithmic modules that include scheduling, pricing and demand prediction. We provide a proof-of-concept software and demonstrate the effectiveness of the architecture by testing ERA over both public and private cloud systems -- Azure Batch of Microsoft and Hadoop/YARN. A broader intent of our work is to foster collaborations between economics and system communities. To that end, we have developed a simulation platform via which economics and system experts can test their algorithmic implementations.
We consider a monopolist seller with n heterogeneous items, facing a single buyer. The buyer has a value for each item drawn independently according to (non-identical) distributions, and her value for a set of items is additive. The seller aims to maximize his revenue. We suggest using the a priori better of two simple pricing methods: selling the items separately, each at its optimal price, and bundling together, in which the entire set of items is sold as one bundle at its optimal price. We show that for any distribution, this mechanism achieves a constant-factor approximation to the optimal revenue. Beyond its simplicity, this is the first computationally tractable mechanism to obtain a constant-factor approximation for this multi-parameter problem. We additionally discuss extensions to multiple buyers and to valuations that are correlated across items.
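Both candidate revenues are easy to estimate by simulation: price each item separately at its best fixed price, or price the grand bundle at its own best fixed price, and take the better of the two. A Monte Carlo sketch, where the value distributions and sample count are assumptions for illustration:

```python
# Estimate the two simple mechanisms for an additive buyer: SREV prices each
# item separately at its best fixed price; BREV sells the grand bundle at its
# best fixed price. The suggested mechanism is max(SREV, BREV).
import random

def best_fixed_price_revenue(samples):
    # Best take-it-or-leave-it price against the empirical distribution:
    # posting the i-th highest value as the price makes i sales in hindsight.
    s = sorted(samples, reverse=True)
    return max(s[i] * (i + 1) for i in range(len(s))) / len(s)

def srev_brev(item_samplers, n=20_000):
    draws = [[s() for s in item_samplers] for _ in range(n)]  # value profiles
    srev = sum(best_fixed_price_revenue([d[i] for d in draws])
               for i in range(len(item_samplers)))
    brev = best_fixed_price_revenue([sum(d) for d in draws])
    return srev, brev

random.seed(2)
# Two items: uniform [0,1] values and a heavy-tailed value distribution.
srev, brev = srev_brev([random.random, lambda: 1 / (1 - 0.99 * random.random())])
print(f"SREV={srev:.3f}  BREV={brev:.3f}  mechanism={max(srev, brev):.3f}")
```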
It is widely believed that computing payments needed to induce truthful bidding is somehow harder than simply computing the allocation. We show that the opposite is true: creating a randomized truthful mechanism is essentially as easy as a single call to a monotone allocation rule. Our main result is a general procedure to take a monotone allocation rule for a single-parameter domain and transform it (via a black-box reduction) into a randomized mechanism that is truthful in expectation and individually rational for every realization. The mechanism implements the same outcome as the original allocation rule with probability arbitrarily close to 1, and requires evaluating that allocation rule only once. We also provide an extension of this result to multiparameter domains and cycle-monotone allocation rules, under mild star-convexity and nonnegativity hypotheses on the type space and allocation rule, respectively. Because our reduction is simple, versatile, and general, it has many applications to mechanism design problems in which re-evaluating the allocation rule is either burdensome or informationally impossible. Applying our result to the multiarmed bandit problem, we obtain truthful randomized mechanisms whose regret matches the information-theoretic lower bound up to logarithmic factors, even though prior work showed this is impossible for truthful deterministic mechanisms. We also present applications to offline mechanism design, showing that randomization can circumvent a communication complexity lower bound for deterministic payments computation, and that it can also be used to create truthful shortest path auctions that approximate the welfare of the VCG allocation arbitrarily well, while having the same running time complexity as Dijkstra's algorithm.
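For contrast with the single-call result above, the standard route to truthful payments in a single-parameter domain charges a winner her critical value, which a black-box allocation rule only reveals through repeated calls. A sketch of that baseline via binary search (the example allocation rule and tolerance are assumptions):

```python
# Why one call is surprising: with black-box access to a monotone
# single-parameter allocation rule, the truthful (critical-value) payment
# is usually found by re-calling the rule many times, e.g. binary search.
def critical_value(alloc, bids, i, lo=0.0, hi=1.0, tol=1e-6):
    """Smallest bid at which agent i still wins, holding others fixed."""
    calls = 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        trial = bids[:i] + [mid] + bids[i + 1:]
        calls += 1
        if alloc(trial)[i]:   # monotone: winning at mid => winning above it
            hi = mid
        else:
            lo = mid
    return hi, calls

# Example monotone rule: the highest bidder wins a single item.
alloc = lambda b: [int(j == max(range(len(b)), key=lambda k: b[k]))
                   for j in range(len(b))]
print(critical_value(alloc, [0.30, 0.80, 0.55], 1))  # ~0.55 after ~20 calls
```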
We consider the problem of allocating a set of indivisible items to players with private preferences in an efficient and fair way. We focus on valuations that have dichotomous marginals, in which the added value of any item to a set is either 0 or 1, and aim to design truthful allocation mechanisms (without money) that maximize welfare and are fair. For the case that players have submodular valuations with dichotomous marginals, we design such a deterministic truthful allocation mechanism. The allocation output by our mechanism is Lorenz dominating, and consequently satisfies many desired fairness properties, such as being envy-free up to any item (EFX), and maximizing the Nash Social Welfare (NSW). We then show that our mechanism with random priorities is envy-free ex-ante, while having all the above properties ex-post. Furthermore, we present several impossibility results precluding similar results for the larger class of XOS valuations. To gauge the robustness of our positive results, we also study ε-dichotomous valuations, in which the added value of any item to a set is either non-positive, or in the range [1, 1 + ε]. We show several impossibility results in this setting, and also a positive result: for players that have additive ε-dichotomous valuations with sufficiently small ε, we design a randomized truthful mechanism with strong ex-post guarantees. For ρ = 1/(1 + ε), the allocations that it produces generate at least a ρ-fraction of the maximum welfare, and enjoy ρ-approximations for various fairness properties, such as being envy-free up to one item (EF1), and giving each player at least her maximin share.
We consider a single buyer with a combinatorial preference that would like to purchase related products and services from different vendors, where each vendor supplies exactly one product. We study the general case where subsets of products can be substitutes as well as complements, and analyze the game that is induced on the vendors, where a vendor's strategy is the price that he asks for his product. This model generalizes both Bertrand competition (where vendors are perfect substitutes) and Nash bargaining (where they are perfect complements), and captures a wide variety of scenarios that can appear in complex crowdsourcing or in automatic pricing of related products.
In this letter we briefly survey our main result from [Babaioff et al. 2014]: a simple and approximately revenue-optimal mechanism for a monopolist who wants to sell a variety of items to a single buyer with an additive valuation.
In this paper we show that payment computation essentially does not present any obstacle in designing truthful mechanisms, even for multi-parameter domains, and even when we can only call the allocation rule once. We present a general reduction that takes any allocation rule which satisfies "cyclic monotonicity" (a known necessary and sufficient condition for truthfulness) and converts it to a truthful mechanism using a single call to the allocation rule, with arbitrarily small loss to the expected social welfare.
We consider the problem of fair allocation of indivisible goods to n agents, with no transfers. When agents have equal entitlements, the well established notion of the maximin share (MMS) serves as an attractive fairness criterion, where to qualify as fair, an allocation needs to give every agent at least a substantial fraction of her MMS. In this paper we consider the case of arbitrary (unequal) entitlements. We explain shortcomings in previous attempts that extend the MMS to unequal entitlements. Our conceptual contribution is the introduction of a new notion of a share, the AnyPrice share (APS), that is appropriate for settings with arbitrary entitlements. The AnyPrice share of an agent is the value she can guarantee to herself if she is given a budget equal to her entitlement, and she buys her highest value affordable set when items are adversarially priced with a total price equal to the total entitlements. Even for the equal entitlements case, this notion is new, and satisfies APS ≥ MMS, where the inequality is sometimes strict. We also present an alternative definition for the APS as a maximization problem (a fractional version of the MMS), and provide comparisons between the APS and previous notions of fairness. Our main result concerns additive valuations and arbitrary entitlements, for which we provide a polynomial-time algorithm that gives every agent at least a 3/5-fraction of her APS. This algorithm can also be viewed as providing a strategy in a certain natural bidding game, and this strategy secures each agent that uses it at least a 3/5-fraction of her APS, regardless of the strategies used by other agents.
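The adversarial-pricing definition can be checked directly on tiny instances by enumerating price vectors on a grid and recording the best affordable bundle under each. A brute-force illustration, where the instance and grid resolution are assumptions and the grid only approximates the true minimum:

```python
# AnyPrice share (APS), brute force for a tiny additive instance: the
# adversary prices the items with total price equal to the total entitlements
# (normalized to 1); the agent, with budget equal to her entitlement, buys
# her best affordable bundle. APS is what she can guarantee regardless.
from itertools import combinations

def aps(values, entitlement, steps=100):
    n = len(values)
    bundles = [set(c) for r in range(n + 1) for c in combinations(range(n), r)]

    def grids(k, total):  # all integer price vectors on a grid summing to total
        if k == 1:
            yield (total,)
            return
        for i in range(total + 1):
            for rest in grids(k - 1, total - i):
                yield (i,) + rest

    worst = float("inf")
    for g in grids(n, steps):
        prices = [x / steps for x in g]
        best = max(sum(values[i] for i in b)
                   for b in bundles
                   if sum(prices[i] for i in b) <= entitlement + 1e-9)
        worst = min(worst, best)
    return worst

# Three items, three equally entitled agents (entitlement 1/3 each).
print(aps([0.5, 0.3, 0.2], 1 / 3))  # ~0.2: here APS coincides with MMS
```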
We study revenue maximization by deterministic mechanisms for the simplest case for which Myerson's characterization does not hold: a single seller selling two items, with independently distributed values, to a single additive buyer. We prove that optimal mechanisms are submodular and hence monotone. Furthermore, we show that in the IID case, optimal mechanisms are symmetric. Our characterizations are surprisingly non-trivial, and we show that they fail to extend in several natural ways, e.g. for correlated distributions or more than two items. In particular, this shows that the optimality of symmetric mechanisms does not follow from the symmetry of the IID distribution.
We study combinatorial auctions with bidders that exhibit the endowment effect. In most of the previous work on cognitive biases in algorithmic game theory (e.g., [Kleinberg and Oren, EC'14] and its follow-ups) the focus was on analyzing the implications and mitigating their negative consequences. In contrast, in this paper we show how in some cases cognitive biases can be harnessed to obtain better outcomes. Specifically, we study Walrasian equilibria in combinatorial markets. It is well known that Walrasian equilibria exist only in limited settings, e.g., when all valuations are gross substitutes, but fail to exist in more general settings, e.g., when the valuations are submodular. We consider combinatorial settings in which bidders exhibit the endowment effect, that is, their value for items increases with ownership. Our main result shows that when the valuations are submodular, even a mild degree of endowment effect is sufficient to guarantee the existence of Walrasian equilibria. In fact, we show that in contrast to Walrasian equilibria with standard utility-maximizing bidders -- in which the equilibrium allocation must be efficient -- when bidders exhibit the endowment effect any local optimum can be an equilibrium allocation. Our techniques reveal interesting connections between the LP relaxation of combinatorial auctions and local maxima. We also provide lower bounds on the intensity of the endowment effect that the bidders must have in order to guarantee the existence of a Walrasian equilibrium in various settings.
We consider the problem of designing mechanisms that interact with strategic agents through strategic intermediaries (or mediators), and investigate the cost to society due to the mediators' strategic behavior. Selfish agents with private information are each associated with exactly one strategic mediator, and can interact with the mechanism exclusively through that mediator. Each mediator aims to optimize the combined utility of his agents, while the mechanism aims to optimize the combined utility of all agents. We focus on the problem of facility location on a metric induced by a publicly known tree. With nonstrategic mediators, there is a dominant strategy mechanism that is optimal. We show that when both agents and mediators act strategically, there is no dominant strategy mechanism that achieves any approximation. We, thus, slightly relax the incentive constraints, and define the notion of a two-sided incentive compatible mechanism. We show that the 3-competitive deterministic mechanism suggested by Procaccia and Tennenholtz [2013] and Dekel et al. [2010] for lines extends naturally to trees, and is still 3-competitive as well as two-sided incentive compatible. This is essentially the best possible (follows from Dekel et al. [2010] and Procaccia and Tennenholtz [2013]). We then show that by allowing randomization one can construct a 2-competitive randomized mechanism that is two-sided incentive compatible, and this is also essentially tight. This result also reduces a gap left in the work of Procaccia and Tennenholtz [2013] and Lu et al. [2009] for the problem of designing strategy-proof mechanisms for weighted agents with no mediators on a line. We also investigate a generalization of the preceding setting where there are multiple levels of mediators.
In many multiagent domains a set of agents exert effort towards a joint outcome, yet the individual effort levels cannot be easily observed. A typical example for such a scenario is routing in communication networks, where the sender can only observe whether the packet reached its destination, but often has no information about the actions of the intermediate routers, which influence the final outcome. We study a setting where a principal needs to motivate a team of agents whose combination of hidden efforts stochastically determines an outcome. In a companion paper we devise and study a basic "combinatorial agency" model for this setting, where the principal is restricted to inducing a pure Nash equilibrium. Here we study various implications of this restriction. First, we show that, in contrast to the case of observable efforts, inducing a mixed-strategies equilibrium may be beneficial for the principal. Second, we present a sufficient condition for technologies for which no gain can be generated. Third, we bound the principal's gain for various families of technologies. Finally, we study the robustness of mixed equilibria to coalitional deviations and the computational hardness of the optimal mixed equilibria.
Competitive equilibrium from equal incomes (CEEI) is a classic solution to the problem of fair and efficient allocation of goods [Foley'67, Varian'74]. Every agent receives an equal budget of artificial currency with which to purchase goods, and prices match demand and supply. However, a CEEI is not guaranteed to exist when the goods are indivisible, even in the simple two-agent, single-item market. Yet, it is easy to see that once the two budgets are slightly perturbed (made generic), a competitive equilibrium does exist. In this paper we aim to extend this approach beyond the single-item case, and study the existence of equilibria in markets with two agents and additive preferences over multiple items. We show that for agents with equal budgets, making the budgets generic -- by adding vanishingly small random perturbations -- ensures the existence of an equilibrium. We further consider agents with arbitrary non-equal budgets, representing non-equal entitlements for goods. We show that competitive equilibrium guarantees a new notion of fairness among non-equal agents, and that it exists in cases of interest (like when the agents have identical preferences) if budgets are perturbed. Our results open opportunities for future research on generic equilibrium existence and fair treatment of non-equals.
Many large decentralized systems rely on information propagation to ensure their proper function. We examine a common scenario in which only participants that are aware of the information can compete for some reward, and thus informed participants have an incentive not to propagate information to others. One recent example in which such tension arises is the 2009 DARPA Network Challenge (finding red balloons). We focus on another prominent example: Bitcoin, a decentralized electronic currency system. Bitcoin represents a radical new approach to monetary systems. It has been getting a large amount of public attention over the last year, both in policy discussions and in the popular press. Its cryptographic fundamentals have largely held up even as its usage has become increasingly widespread. We find, however, that it exhibits a fundamental problem of a different nature, based on how its incentives are structured. We propose a modification to the protocol that can eliminate this problem. Bitcoin relies on a peer-to-peer network to track transactions that are performed with the currency. For this purpose, every transaction a node learns about should be transmitted to its neighbors in the network. The current implemented protocol provides an incentive to nodes to not broadcast transactions they are aware of. Our solution is to augment the protocol with a scheme that rewards information propagation. Since clones are easy to create in the Bitcoin system, an important feature of our scheme is Sybil-proofness. We show that our proposed scheme succeeds in setting the correct incentives, that it is Sybil-proof, and that it requires only a small payment overhead; all this is achieved with iterated elimination of dominated strategies. We complement this result by showing that there are no reward schemes in which information propagation and no self-cloning are dominant strategies.
Bulow-Klemperer-Style Results for Welfare Maximization in Two-Sided Markets (Moshe Babaioff, Kira Goldner, and Yannai A. Gonczarowski; SODA 2020). We consider the problem of welfare (and gains-from-trade) maximization in two-sided markets using simple mechanisms that are prior-independent. The seminal impossibility result of Myerson and Satterthwaite [1983] shows that even for bilateral trade, there is no feasible (individually rational, truthful, and budget balanced) mechanism that has welfare as high as the optimal-yet-infeasible VCG mechanism, which attains maximal welfare but runs a deficit. On the other hand, the optimal feasible mechanism needs to be carefully tailored to the Bayesian prior, and even worse, it is known to be extremely complex, eluding a precise description. In this paper we present Bulow-Klemperer-style results to circumvent these hurdles in double-auction market settings. We suggest using the Buyer Trade Reduction (BTR) mechanism, a variant of McAfee's mechanism, which is feasible and simple (in particular, it is deterministic, truthful, prior-independent, and anonymous). First, in the setting in which the values of the buyers and of the sellers are sampled independently and identically from the same distribution, we show that for any such market of any size, BTR with one additional buyer whose value is sampled from the same distribution has expected welfare at least as high as the optimal-yet-infeasible VCG mechanism in the original market. We then move to a more general setting in which the values of the buyers are sampled from one distribution, and those of the sellers from another, focusing on the case where the buyers' distribution first-order stochastically dominates the sellers' distribution. We present both upper bounds and lower bounds on the number of buyers that, when added, guarantee that BTR in the augmented market achieves welfare at least as high as the optimal in the original market. Our lower bounds extend to a large class of mechanisms, and all of our positive and negative results extend to adding sellers instead of buyers. In addition, we present positive results about the usefulness of pricing at a sample for welfare maximization (and more precisely, for gains-from-trade approximation) in two-sided markets under the above two settings, which to the best of our knowledge are the first sampling results in this context.
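McAfee's trade reduction mechanism, of which BTR is a variant, is short enough to state in code: find the efficient trade size, try a single price derived from the next value pair, and drop one trade if that price fails. A sketch of the classic mechanism (not the paper's BTR variant; tie-breaking details are assumptions):

```python
# McAfee's (1992) trade reduction double auction: deterministic, truthful,
# prior-independent, and weakly budget balanced. BTR is a variant that
# reduces only the buyer side; here we sketch the classic mechanism.
def mcafee(buyer_bids, seller_bids):
    buyers = sorted(buyer_bids, reverse=True)   # b_1 >= b_2 >= ...
    sellers = sorted(seller_bids)               # s_1 <= s_2 <= ...
    k = 0
    while k < min(len(buyers), len(sellers)) and buyers[k] >= sellers[k]:
        k += 1                                  # k = efficient number of trades
    if k == 0:
        return 0, None, None
    if k < min(len(buyers), len(sellers)):
        p = (buyers[k] + sellers[k]) / 2        # candidate price from pair k+1
        if sellers[k - 1] <= p <= buyers[k - 1]:
            return k, p, p                      # all k trades at one price
    # otherwise reduce one trade: buyers pay b_k, sellers receive s_k
    return k - 1, buyers[k - 1], sellers[k - 1]

trades, buyer_price, seller_price = mcafee([9, 7, 4, 2], [1, 3, 5, 8])
print(trades, buyer_price, seller_price)  # 2 trades at price 4.5
```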
Complements between goods - where one good takes on added value in the presence of another - have been a thorn in the side of algorithmic mechanism designers. On the one hand, complements are common in the standard motivating applications for combinatorial auctions, like spectrum license auctions. On the other, welfare maximization in the presence of complements is notoriously difficult, and this intractability has stymied theoretical progress in the area. For example, there are no known positive results for combinatorial auctions in which bidder valuations are multi-parameter and non-complement-free, other than the relatively weak results known for general valuations. To make inroads on the problem of combinatorial auction design in the presence of complements, we propose a model for valuations with complements that is parameterized by the degree of the complements. A valuation in our model is represented succinctly by a weighted hypergraph, where the size of the hyper-edges corresponds to the degree of complementarity. Our model permits a variety of computationally efficient queries, and non-trivial welfare-maximization algorithms and mechanisms. We design the following polynomial-time approximation algorithms and truthful mechanisms for welfare maximization with bidders with hypergraph valuations. (1) For bidders whose valuations correspond to subgraphs of a known graph that is planar (or more generally, excludes a fixed minor), we give a truthful and (1+epsilon)-approximate mechanism. (2) We give a polynomial-time, r-approximation algorithm for welfare maximization with hypergraph-r valuations. Our algorithm randomly rounds a compact linear programming relaxation of the problem. (3) We design a different approximation algorithm and use it to give a polynomial-time, truthful-in-expectation mechanism that has an approximation factor of O(log^r m).
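Evaluating such a valuation is immediate from the representation: a bundle's value is the total weight of the hyperedges it fully contains. A minimal sketch, where the example hypergraph is an assumption:

```python
# A hypergraph valuation: each hyperedge carries a weight, and the value of
# a bundle S is the sum of weights of hyperedges fully contained in S.
# The hyperedge size bounds the degree of complementarity (rank r).
def hypergraph_value(hyperedges, bundle):
    s = set(bundle)
    return sum(w for edge, w in hyperedges if set(edge) <= s)

# Rank-2 example: items a and b are complements (extra value together).
edges = [(("a",), 2.0), (("b",), 1.0), (("a", "b"), 3.0)]
print(hypergraph_value(edges, ["a"]))        # 2.0
print(hypergraph_value(edges, ["a", "b"]))   # 6.0 -- complementarity kicks in
```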
Mechanisms with money are commonly designed under the assumption that agents are quasi-linear, meaning they have linear disutility for spending money. We study the implications when agents with non-linear (specifically, convex) disutility for payments participate in mechanisms designed for quasi-linear agents. We first show that any mechanism that is truthful for quasi-linear buyers has a simple best response function for buyers with non-linear disutility from payments, in which each bidder simply scales down her value for each potential outcome by a fixed factor, equal to her target return on investment (ROI). We call such a strategy ROI-optimal. We prove the existence of a Nash equilibrium in which agents use ROI-optimal strategies for a general class of allocation problems. Motivated by online marketplaces, we then focus on simultaneous second-price auctions for additive bidders and show that all ROI-optimal equilibria in this setting achieve constant-factor approximations to suitable welfare and revenue benchmarks.
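The ROI-optimal best response has a one-line form: scale every value down by the target ROI factor and otherwise bid straightforwardly. A toy sketch for simultaneous second-price auctions, where the bidder names, values, and factors are all illustrative assumptions:

```python
# ROI-optimal strategy: an additive bidder scales each item value down by a
# fixed factor (her target ROI) and bids the scaled values in simultaneous
# second-price auctions, each item being sold separately.
def roi_optimal_bids(values, roi_factor):
    return {item: v / roi_factor for item, v in values.items()}

def second_price_outcome(all_bids):
    # all_bids: {bidder: {item: bid}}; winner of each item pays second price.
    results = {}
    for item in next(iter(all_bids.values())):
        ranked = sorted(all_bids, key=lambda b: all_bids[b][item], reverse=True)
        winner, runner_up = ranked[0], ranked[1]
        results[item] = (winner, all_bids[runner_up][item])
    return results

bids = {
    "alice": roi_optimal_bids({"x": 6.0, "y": 2.0}, roi_factor=2.0),
    "bob":   roi_optimal_bids({"x": 4.0, "y": 5.0}, roi_factor=1.0),
}
print(second_price_outcome(bids))  # {'x': ('bob', 3.0), 'y': ('bob', 1.0)}
```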
In various markets where sellers compete in price, price oscillations are observed rather than convergence to equilibrium. Such fluctuations have been empirically observed in the retail market for gasoline, in airline pricing and in the online sale of consumer goods. Motivated by this, we study a model of price competition in which equilibria rarely exist. We seek to analyze the welfare, despite the nonexistence of equilibria, and present welfare guarantees as a function of the market power of the sellers. We first study best response dynamics in markets with sellers that provide a homogeneous good, and show that except for a modest number of initial rounds, the welfare is guaranteed to be high. We consider two variations: in the first the sellers have full information about the buyer's valuation. Here we show that if there are n items available across all sellers and n_max is the maximum number of items controlled by any given seller, then the ratio of the optimal welfare to the achieved welfare will be at most log(n/(n - n_max + 1)) + 1. As the market power of the largest seller diminishes, the welfare becomes closer to optimal. In the second variation we consider an extended model in which sellers have uncertainty about the buyer's valuation. Here we similarly show that the welfare improves as the market power of the larger seller decreases, yet with a worse ratio of n/(n - n_max + 1). Our welfare bounds in both cases are essentially tight. The exponential gap in welfare between the two variations quantifies the value of accurately learning the buyer's valuation in such settings.
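A quick numeric read of the two bounds makes the exponential gap concrete: with full information the guarantee degrades only logarithmically in the largest seller's share, while under uncertainty it degrades linearly. The market sizes below are chosen purely for illustration:

```python
# Welfare-ratio bounds as the largest seller's market power varies:
# full information:  log(n / (n - n_max + 1)) + 1
# uncertain values:  n / (n - n_max + 1)
import math

n = 100
for n_max in (1, 10, 50, 90):
    informed = math.log(n / (n - n_max + 1)) + 1
    uncertain = n / (n - n_max + 1)
    print(f"n_max={n_max:3d}  informed<={informed:.2f}  uncertain<={uncertain:.2f}")
```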
We consider a multi-round auction setting motivated by pay-per-click auctions for Internet advertising. In each round the auctioneer selects an advertiser and shows her ad, which is then either clicked or not. An advertiser derives value from clicks; the value of a click is her private information. Initially, neither the auctioneer nor the advertisers have any information about the likelihood of clicks on the advertisements. The auctioneer's goal is to design a (dominant strategies) truthful mechanism that (approximately) maximizes the social welfare. If the advertisers bid their true private values, our problem is equivalent to the multi-armed bandit problem, and thus can be viewed as a strategic version of the latter. In particular, for both problems the quality of an algorithm can be characterized by regret, the difference in social welfare between the algorithm and the benchmark which always selects the same best advertisement. We investigate how the design of multi-armed bandit algorithms is affected by the restriction that the resulting mechanism must be truthful. We find that truthful mechanisms have certain strong structural properties -- essentially, they must separate exploration from exploitation -- and they incur much higher regret than the optimal multi-armed bandit algorithms. Moreover, we provide a truthful mechanism which (essentially) matches our lower bound on regret.
We consider a single buyer with a combinatorial preference that would like to purchase related products and services from different vendors, where each vendor supplies exactly one product. We study the general case where subsets of products can be substitutes as well as complements, and analyze the game that is induced on the vendors, where a vendor's strategy is the price that he asks for his product. This model generalizes both Bertrand competition (where vendors are perfect substitutes) and Nash bargaining (where they are perfect complements), and captures a wide variety of scenarios that can appear in complex crowdsourcing or in automatic pricing of related products. We study the equilibria of such games and show that a pure efficient equilibrium always exists. In the case of submodular buyer preferences we fully characterize the set of pure Nash equilibria, essentially showing uniqueness. For the even more restricted substitutes buyer preferences we also prove uniqueness over mixed equilibria. Finally we begin the exploration of natural generalizations of our setting, such as when services have costs, when there are multiple buyers or uncertainty about the buyer's valuation, and when a single vendor supplies multiple products.
We consider fair allocation of a set M of indivisible goods to n equally-entitled agents, with no monetary transfers. Every agent i has a valuation function v_i from some given class of valuation functions. A share s is a function that maps a pair (v_i, n) to a non-negative value, with the interpretation that if an allocation of M to n agents fails to give agent i a bundle of value at least equal to s(v_i, n), this serves as evidence that the allocation is not fair towards i. For such an interpretation to make sense, we would like the share to be feasible, meaning that for any valuations in the class, there is an allocation that gives every agent at least her share. The maximin share (MMS) was a natural candidate for a feasible share for additive valuations. However, Kurokawa, Procaccia and Wang [2018] show that it is not feasible.
We consider a price competition between two sellers of perfect-complement goods. Each seller posts a price for the good it sells, but the demand is determined according to the sum of prices. This is a classic model by Cournot (1838), who showed that in this setting a monopoly that sells both goods is better for the society than two competing sellers. We show that non-trivial pure Nash equilibria always exist in this game. We also quantify Cournot's observation with respect to both the optimal welfare and the monopoly revenue. We then prove a series of mostly negative results regarding the convergence of best response dynamics to equilibria in such games.
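Best-response dynamics in this game are easy to simulate: each seller in turn re-prices to maximize revenue given the other's price and the demand induced on the sum. A sketch under an assumed linear demand curve (the demand model, grid, and starting prices are all illustrative):

```python
# Best-response dynamics for two sellers of perfect complements: demand
# depends only on the sum of the two posted prices. With linear demand
# D(p) = max(0, 1 - p), seller i's revenue is p_i * D(p_i + p_j).
def demand(total_price):
    return max(0.0, 1.0 - total_price)

def best_response(other_price, grid=1000):
    # Revenue-maximizing price against a fixed competitor price.
    return max((i / grid for i in range(grid + 1)),
               key=lambda p: p * demand(p + other_price))

p1, p2 = 0.9, 0.05
for _ in range(12):            # alternate best responses
    p1 = best_response(p2)
    p2 = best_response(p1)
print(f"p1={p1:.3f} p2={p2:.3f} total={p1 + p2:.3f}")
# Converges toward total price 2/3, above the monopoly total of 1/2:
# Cournot's observation that one monopolist beats two complement sellers.
```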
We consider the problem of designing mechanisms that interact with strategic agents through strategic intermediaries (or mediators), and investigate the cost to society due to the mediators' strategic behavior. Selfish agents with private information are each associated with exactly one strategic mediator, and can interact with the mechanism exclusively through that mediator. Each mediator aims to optimize the combined utility of his agents, while the mechanism aims to optimize the combined utility of all agents. We focus on the problem of facility location on a metric induced by a publicly known tree. With non-strategic mediators, there is a dominant strategy mechanism that is optimal. We show that when both agents and mediators act strategically, there is no dominant strategy mechanism that achieves any approximation. We, thus, slightly relax the incentive constraints, and define the notion of a two-sided incentive compatible mechanism. We show that the 3-competitive deterministic mechanism suggested by Procaccia and Tennenholtz (2009) and Dekel et al. (2010) for lines extends naturally to trees, and is still 3-competitive as well as two-sided incentive compatible. This is essentially the best possible. We then show that by allowing randomization one can construct a 2-competitive randomized mechanism that is two-sided incentive compatible, and this is also essentially tight. This result also closes a gap left in the work of Procaccia and Tennenholtz (2009) and Lu et al. (2009) for the simpler problem of designing strategy-proof mechanisms for weighted agents with no mediators on a line, while extending to the more general model of trees. We also investigate a further generalization of the above setting where there are multiple levels of mediators.
The seminal impossibility result of Myerson and Satterthwaite (1983) states that for bilateral trade, there is no mechanism that is individually rational (IR), incentive compatible (IC), weakly budget balanced, and efficient. This has led follow-up work on two-sided trade settings to weaken the efficiency requirement and consider approximately efficient simple mechanisms, while still demanding the other properties. The current state-of-the-art of such mechanisms for two-sided markets can be categorized as giving one (but not both) of the following two types of approximation guarantees on the gains from trade: a constant ex-ante guarantee, measured with respect to the second-best efficiency benchmark, or an asymptotically optimal ex-post guarantee, measured with respect to the first-best efficiency benchmark. Here the second-best efficiency benchmark refers to the highest gains from trade attainable by any IR, IC and weakly budget balanced mechanism, while the first-best efficiency benchmark refers to the maximum gains from trade (attainable by the VCG mechanism, which is not weakly budget balanced). In this paper, we construct simple mechanisms for double-auction and matching markets that simultaneously achieve both types of guarantees: these are ex-post IR, Bayesian IC, and ex-post weakly budget balanced mechanisms that 1) ex-ante guarantee a constant fraction of the gains from trade of the second-best, and 2) ex-post guarantee a realization-dependent fraction of the gains from trade of the first-best, such that this realization-dependent fraction converges to 1 (full efficiency) as the market grows large.
We study competitive equilibrium in the canonical Fisher market model, but with indivisible goods. In this model, every agent has a budget of artificial currency with which to purchase bundles of goods. Equilibrium prices match demand and supply; at such prices, all agents simultaneously get their favorite within-budget bundle, and the market clears. Unfortunately, a competitive equilibrium may not exist when the goods are indivisible, even in extremely simple markets such as two agents with exactly the same budget and a single item. Yet in this example, once the budgets are slightly perturbed (i.e., made generic), a competitive equilibrium is guaranteed to exist. In this paper we explore the extent to which generic budgets can guarantee equilibrium existence (and thus related fairness guarantees) in markets with multiple items. We complement our results in [Babaioff et al., 2019] for additive preferences by exploring the case of general monotone preferences, establishing positive results for small numbers of items and mapping the limits of our approach. We then consider cardinal preferences, define a hierarchy of such preference classes and establish relations among them, and for some classes prove equilibrium existence under generic budgets.
We consider a network of sellers, each selling a single product, where the graph structure represents pair-wise complementarities between products. We study how the network structure affects revenue and social welfare of equilibria of the pricing game between the sellers. We prove positive and negative results, both of Price of Anarchy and of Price of Stability type, for special families of graphs (paths, cycles) as well as more general ones (trees, graphs). We describe best-reply dynamics that converge to non-trivial equilibrium in several families of graphs, and we use these dynamics to prove the existence of approximately-efficient equilibria.
Central results in economics guarantee the existence of efficient equilibria for various classes of markets. An underlying assumption in early work is that agents are price-takers, i.e., agents honestly report their true demand in response to prices. A line of research in economics, initiated by Hurwicz (1972), is devoted to understanding how such markets perform when agents are strategic about their demands. This is captured by the \emph{Walrasian Mechanism} that proceeds by collecting reported demands, finding clearing prices in the \emph{reported} market via an ascending price t\^{a}tonnement procedure, and returns the resulting allocation. Similar mechanisms are used, for example, in the daily opening of the New York Stock Exchange and the call market for copper and gold in London. In practice, it is commonly observed that agents in such markets reduce their demand, leading to behaviors resembling bargaining and to inefficient outcomes. We ask how inefficient the equilibria can be. Our main result is that the welfare of every pure Nash equilibrium of the Walrasian mechanism is at least one quarter of the optimal welfare, when players have gross substitute valuations and do not overbid. Previous analyses of the Walrasian mechanism have resorted to large market assumptions to show convergence to efficiency in the limit. Our result shows that approximate efficiency is guaranteed regardless of the size of the market.
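As an illustration of the procedure described above, here is a minimal sketch of an ascending tâtonnement over reported demands. It assumes unit supply, additive reported valuations, and a fixed price increment; these modeling choices, and all function names, are ours for illustration only, not the paper's specification.

```python
# Minimal ascending tatonnement sketch: raise prices on over-demanded goods
# until the reported market clears. Assumes unit supply of each good and
# additive reported valuations (a special case of gross substitutes).

def reported_demand(valuation, prices):
    """Items with strictly positive reported surplus (additive valuations)."""
    return {j for j, v in valuation.items() if v - prices[j] > 0}

def tatonnement(valuations, eps=0.01, max_rounds=100_000):
    prices = {j: 0.0 for v in valuations for j in v}
    demands = []
    for _ in range(max_rounds):
        demands = [reported_demand(v, prices) for v in valuations]
        over_demanded = {j for j in prices if sum(j in d for d in demands) > 1}
        if not over_demanded:
            break
        for j in over_demanded:
            prices[j] += eps  # raise prices only where demand exceeds supply
    return prices, demands

# An agent who under-reports demand can stop the ascent early; this is the
# kind of strategic behavior whose welfare loss the result bounds.
print(tatonnement([{"a": 1.0, "b": 0.4}, {"a": 0.8}]))
```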
We consider a robust version of the revenue maximization problem, where a single seller wishes to sell n items to a single unit-demand buyer. In this robust version, the seller knows the buyer's marginal value distribution for each item separately, but not the joint distribution, and prices the items to maximize revenue in the worst case over all compatible correlation structures. We devise a computationally efficient (polynomial in the support size of the marginals) algorithm that computes the worst-case joint distribution for any choice of item prices. And yet, in sharp contrast to the additive buyer case [Carroll, 2017], we show that it is NP-hard to approximate the optimal choice of prices to within any factor better than $n^{1/2-\epsilon}$. For the special case of marginal distributions that satisfy the monotone hazard rate property, we show how to guarantee a constant fraction of the optimal worst-case revenue using item pricing; this pricing equates revenue across all possible correlations and can be computed efficiently.
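To make the objective concrete, the following sketch evaluates the revenue of a given price vector against a given joint distribution for a unit-demand buyer, who buys a single item maximizing her surplus if that surplus is nonnegative. This is a brute-force illustration of the quantity being minimized over correlation structures, not the paper's efficient worst-case algorithm.

```python
# Revenue of item prices for a unit-demand buyer under an explicitly given
# joint distribution: a list of (probability, value-vector) pairs.

def revenue(prices, joint):
    total = 0.0
    for prob, values in joint:
        surplus, price_paid = max(
            ((v - p, p) for v, p in zip(values, prices)), key=lambda t: t[0]
        )
        if surplus >= 0:  # buyer purchases her surplus-maximizing item
            total += prob * price_paid
    return total

joint = [(0.5, (1.0, 0.2)), (0.5, (0.3, 0.9))]  # toy correlated values
print(revenue((0.8, 0.8), joint))  # worst-case pricing minimizes over such joints
```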
We consider the problem of fair allocation of indivisible goods to n agents with no transfers. When agents have equal entitlements, the well-established notion of the maximin share (MMS) serves as an attractive fairness criterion for which, to qualify as fair, an allocation needs to give every agent at least a substantial fraction of the agent’s MMS. In this paper, we consider the case of arbitrary (unequal) entitlements. We explain shortcomings in previous attempts that extend the MMS to unequal entitlements. Our conceptual contribution is the introduction of a new notion of a share, the AnyPrice share (APS), that is appropriate for settings with arbitrary entitlements. Even for the equal entitlements case, this notion is new and satisfies $APS \ge MMS$, where the inequality is sometimes strict. We present two equivalent definitions for the APS (one as a minimization problem, the other as a maximization problem) and provide comparisons between the APS and previous notions of fairness. Our main result concerns additive valuations and arbitrary entitlements, for which we provide a polynomial-time algorithm that gives every agent at least a $\frac{3}{5}$-fraction of the agent’s APS. This algorithm can also be viewed as providing strategies in a certain natural bidding game, and these strategies secure each agent at least a $\frac{3}{5}$-fraction of the agent’s APS. Funding: T. Ezra’s research is partially supported by the European Research Council Advanced [Grant 788893] AMDROMA “Algorithmic and Mechanism Design Research in Online Markets” and MIUR PRIN project ALGADIMAR “Algorithms, Games, and Digital Markets.” U. Feige’s research is supported in part by the Israel Science Foundation [Grant 1122/22].
We consider the problem of allocating a set of indivisible items to players with private preferences in an efficient and fair way. We focus on valuations that have dichotomous marginals, in which the added value of any item to a set is either 0 or 1, and aim to design truthful allocation mechanisms (without money) that maximize welfare and are fair. For the case that players have submodular valuations with dichotomous marginals, we design such a deterministic truthful allocation mechanism. The allocation output by our mechanism is Lorenz dominating, and consequently satisfies many desired fairness properties, such as being envy-free up to any item (EFX), and maximizing the Nash Social Welfare (NSW). We then show that our mechanism with random priorities is envy-free ex-ante, while having all the above properties ex-post. Furthermore, we present several impossibility results precluding similar results for the larger class of XOS valuations. To gauge the robustness of our positive results, we also study $\epsilon$-dichotomous valuations, in which the added value of any item to a set is either non-positive, or in the range $[1, 1 + \epsilon]$. We show several impossibility results in this setting, and also a positive result: for players that have additive $\epsilon$-dichotomous valuations with sufficiently small $\epsilon$, we design a randomized truthful mechanism with strong ex-post guarantees. For $\rho = \frac{1}{1 + \epsilon}$, the allocations that it produces generate at least a $\rho$-fraction of the maximum welfare, and enjoy $\rho$-approximations for various fairness properties, such as being envy-free up to one item (EF1), and giving each player at least her maximin share.
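As an illustration of one fairness property named above, the following brute-force sketch checks whether an allocation is EFX, given value-oracle access. It illustrates the definition only, and is not the paper's mechanism; the toy valuations are ours.

```python
# Brute-force check of "envy-free up to any item" (EFX): agent i does not
# envy agent j after removing *any* single item from j's bundle.

def is_efx(allocation, value):
    """allocation: list of bundles (sets); value(i, bundle) -> number."""
    for i, bundle_i in enumerate(allocation):
        for j, bundle_j in enumerate(allocation):
            if i == j:
                continue
            for g in bundle_j:
                if value(i, bundle_i) < value(i, bundle_j - {g}):
                    return False
    return True

# Toy additive valuations with dichotomous (0/1) marginals.
vals = [{"a": 1, "b": 1, "c": 0}, {"a": 0, "b": 1, "c": 1}]
value = lambda i, S: sum(vals[i][g] for g in S)
print(is_efx([{"a", "b"}, {"c"}], value))
```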
We consider the problem of allocating heterogeneous and indivisible goods among strategic agents, with preferences over subsets of goods, when there is no medium of exchange. This model captures the well studied problem of fair allocation of indivisible goods. Serial-quota mechanisms are allocation mechanisms where there is a predefined order over agents, and each agent in her turn picks a predefined number of goods from the remaining goods. These mechanisms are clearly strategy-proof, non-bossy, and neutral. Are there other mechanisms with these properties? We show that for important classes of strict ordinal preferences (such as lexicographic preferences, and the class of all strict preferences), these are the only mechanisms with these properties. Importantly, unlike previous work, we can prove the claim even for mechanisms that are not Pareto-efficient. Moreover, we generalize these results to preferences that are cardinal, including any valuation class that contains additive valuations. We then derive strong negative implications of this result on truthful mechanisms for fair allocation of indivisible goods to agents with additive valuations.
We consider fair allocation of indivisible goods to n equally entitled agents. Every agent i has a valuation function $v_i$ from some given class of valuation functions. A share s is a function that maps a pair $(v_i, n)$ to a nonnegative value. A share is feasible if for every allocation instance, there is an allocation that gives every agent i a bundle that is acceptable with respect to $v_i$, one of value at least her share value $s(v_i, n)$. We introduce the following concepts. A share is self-maximizing if reporting the true valuation maximizes the minimum true value of a bundle that is acceptable with respect to the report. A share s $\rho$-dominates another share $s'$ if $s(v_i, n) \ge \rho \cdot s'(v_i, n)$ for every valuation function. We initiate a systematic study of feasible and self-maximizing shares and a systematic study of the $\rho$-domination relation between shares, presenting both positive and negative results. Funding: The research of M. Babaioff is supported in part by a Golda Meir Fellowship. The research of U. Feige is supported in part by the Israel Science Foundation [Grant 1122/22].
We consider a principal seller with $m$ heterogeneous products to sell to an additive buyer over independent items. The principal can offer an arbitrary menu of product bundles, but faces competition from smaller and more agile single-item sellers. The single-item sellers choose their prices after the principal commits to a menu, potentially under-cutting the principal's offerings. We explore to what extent the principal can leverage the ability to bundle products together to extract revenue. Any choice of menu by the principal induces an oligopoly pricing game between the single-item sellers, which may have multiple equilibria. When there is only a single item this model reduces to Bertrand competition, for which the principal's revenue is $0$ at any equilibrium, so we assume that no single item's value is too dominant. We establish an upper bound on the principal's optimal revenue at every equilibrium: the expected welfare after truncating each item's value to its revenue-maximizing price. Under a technical condition on the value distributions -- that the monopolist's revenue is sufficiently sensitive to price -- we show that the principal seller can simply price the grand-bundle and ensure (in any equilibrium) a constant approximation to this bound (and hence to the optimal revenue). We also show that for some value distributions violating our conditions, grand-bundle pricing does not yield a constant approximation to the optimal revenue in any equilibrium.
We consider the problem of fair allocation of indivisible items to agents that have arbitrary entitlements to the items. Every agent $i$ has a valuation function $v_i$ and an entitlement $b_i$, where entitlements sum up to~1. Which allocation should one choose in situations in which agents fail to agree on one acceptable fairness notion? We study this problem in the case in which each agent focuses on the value she gets, and fairness notions are restricted to be {\em share based}. A {\em share} $s$ is a function that maps every $(v_i,b_i)$ to a value $s(v_i,b_i)$, representing the minimal value $i$ should get, and $s$ is {\em feasible} if it is always possible to give every agent $i$ value of at least $s(v_i,b_i)$. Our main result is that for additive valuations over goods there is an allocation that gives every agent at least half her share value, regardless of which feasible share-based fairness notion the agent wishes to use. Moreover, the ratio of half is best possible. More generally, we provide tight characterizations of what can be achieved, both ex-post (as single allocations) and ex-ante (as expected values of distributions of allocations), both for goods and for chores. We also show that for chores one can achieve the ex-ante and ex-post guarantees simultaneously (a ``best of both worlds'' result), whereas for goods one cannot.
We study the problem of designing a two-sided market (double auction) to maximize the gains from trade (social welfare) under the constraints of (dominant-strategy) incentive compatibility and budget-balance. Our goal is to do so for an unknown distribution from which we are given a polynomial number of samples. Our first result is a general impossibility result for the case of correlated distributions of values, even between just one seller and two buyers, in contrast to the case of one seller and one buyer (bilateral trade) where this is possible. Our second result is an efficient learning algorithm for one seller and two buyers in the case of independent distributions, which is based on a novel algorithm for computing optimal mechanisms for finitely supported and explicitly given independent distributions. Both results rely heavily on characterizations of (dominant-strategy) incentive compatible mechanisms that are strongly budget-balanced.
We explore the performance of polynomial-time incentive-compatible mechanisms in single-crossing domains. Single-crossing domains were extensively studied in the economics literature. Roughly speaking, a domain is single crossing if monotonicity characterizes incentive compatibility. That is, single-crossing domains are the standard mathematical formulation of domains that are informally known as ``single parameter''. In all major single-crossing domains studied so far (e.g., welfare maximization in various auctions with single-minded bidders, makespan minimization on related machines), the performance of the best polynomial-time incentive-compatible mechanisms matches the performance of the best polynomial-time non-incentive-compatible algorithms. Our two main results make progress in understanding the power of incentive-compatible polynomial-time mechanisms in single-crossing domains: We provide the first proof of a gap in the power of polynomial-time incentive-compatible mechanisms and polynomial-time non-incentive-compatible algorithms: we present an objective function in a single-crossing multi-unit auction for which there is a polynomial-time algorithm that provides an approximation ratio of $\frac{1}{2}$, yet no polynomial-time incentive-compatible mechanism provides a finite approximation (under standard computational complexity assumptions). The objective function used above is not natural. We show that to some extent this is unavoidable by providing a sweeping positive result for the most natural objective function in multi-unit auctions, that of welfare maximization. We present an incentive-compatible FPTAS mechanism for every multi-unit auction with single-crossing domains. This improves over the mechanism of Briest et al. [STOC'05] that only applies to the much simpler case of single-minded bidders.
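For background, in the classic single-parameter setting (a special case of the single-crossing framework above), the link between monotonicity and incentive compatibility is Myerson's payment identity: a monotone allocation rule $x_i$, combined with the payments

$$p_i(b) = b_i\, x_i(b_i, b_{-i}) - \int_0^{b_i} x_i(t, b_{-i})\, \mathrm{d}t,$$

yields a truthful mechanism. This is why the algorithmic question in such domains largely reduces to designing monotone polynomial-time approximation algorithms.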
We study a market of investments on networks, where each agent (vertex) can invest in any enterprise linked to her, and at the same time, raise capital for her firm's enterprise from other agents she is linked to. Failing to raise sufficient capital results in the firm defaulting and being unable to invest in others. Our main objective is to examine the role of collateral contracts in handling the strategic risk that can propagate into systemic risk throughout the network in a cascade of defaults. We take a mechanism-design approach and solve for the optimal scheme of collateral contracts that capital raisers offer their investors. These contracts aim at sustaining the efficient level of investment as a unique Nash equilibrium, while minimizing the total collateral.
Cloud computing customers often submit repeating jobs and computation pipelines on \emph{approximately} regular schedules, with arrival and running times that exhibit variance. This pattern, typical of training tasks in machine learning, allows customers to partially predict future job requirements. We develop a model of cloud computing platforms that receive statements of work (SoWs) in an online fashion. The SoWs describe future jobs whose arrival times and durations are probabilistic, and whose utility to the submitting agents declines with completion time. The arrival and duration distributions, as well as the utility functions, are considered private customer information and are reported by strategic agents to a scheduler that is optimizing for social welfare. We design pricing, scheduling, and eviction mechanisms that incentivize truthful reporting of SoWs. An important challenge is maintaining incentives despite the possibility of the platform becoming saturated. We introduce a framework to reduce scheduling under uncertainty to a relaxed scheduling problem without uncertainty. Using this framework, we tackle both adversarial and stochastic submissions of statements of work, and obtain logarithmic and constant competitive mechanisms, respectively.
We consider fair allocation of a set $M$ of indivisible goods to $n$ equally-entitled agents, with no monetary transfers. Every agent $i$ has a valuation $v_i$ from some given class of valuation functions. A share $s$ is a function that maps a pair $(v_i,n)$ to a value, with the interpretation that if an allocation of $M$ to $n$ agents fails to give agent $i$ a bundle of value at least equal to $s(v_i,n)$, this serves as evidence that the allocation is not fair towards $i$. For such an interpretation to make sense, we would like the share to be feasible, meaning that for any valuations in the class, there is an allocation that gives every agent at least her share. The maximin share was a natural candidate for a feasible share for additive valuations. However, Kurokawa, Procaccia and Wang [2018] show that it is not feasible. We initiate a systematic study of the family of feasible shares. We say that a share is \emph{self maximizing} if truth-telling maximizes the implied guarantee. We show that every feasible share is dominated by some self-maximizing and feasible share. We seek to identify those self-maximizing feasible shares that are polynomial time computable, and offer the highest share values. We show that a SM-dominating feasible share -- one that dominates every self-maximizing (SM) feasible share -- does not exist for additive valuations (and beyond). Consequently, we relax the domination property to that of domination up to a multiplicative factor of $\rho$ (called $\rho$-dominating). For additive valuations we present shares that are feasible, self-maximizing and polynomial-time computable. For $n$ agents we present such a share that is $\frac{2n}{3n-1}$-dominating. For two agents we present such a share that is $(1 - \epsilon)$-dominating. Moreover, for these shares we present poly-time algorithms that compute allocations that give every agent at least her share.
In this paper we revisit the notion of simplicity in mechanisms. We consider a seller of $m$ items, facing a single buyer with valuation $v$. We observe that previous attempts to define complexity measures often fail to classify mechanisms that are intuitively considered simple (e.g., the "selling separately" mechanism) as such. We suggest to view a menu as simple if a bundle that maximizes the buyer's profit can be found by conducting a few primitive operations that are considered simple. The \emph{primitive complexity of a menu} is the number of primitive operations needed to (adaptively) find a profit-maximizing entry in the menu. In this paper, the primitive operation that we study is essentially computing the outcome of the "selling separately" mechanism. Does the primitive complexity capture the simplicity of other auctions that are intuitively simple? We consider \emph{bundle-size pricing}, a common pricing method in which the price of a bundle depends only on its size. Our main technical contribution is determining the primitive complexity of bundle-size pricing menus in various settings. We show that for any distribution $\cal D$ over weighted matroid rank valuations, even distributions with arbitrary correlation among their values, there is always a bundle-size pricing menu with low primitive complexity that achieves almost the same revenue as the optimal bundle-size pricing menu. As part of this proof we provide a randomized algorithm that for any weighted matroid rank valuation $v$ and integer $k$, finds the most valuable set of size $k$ with only a poly-logarithmic number of demand and value queries. We show that this result is essentially tight in several aspects. For example, if the valuation $v$ is submodular, then finding the most valuable set of size $k$ requires exponentially many queries (this solves an open question of Badanidiyuru et al. [EC'12]).
We study the classic bilateral trade setting. Myerson and Satterthwaite show that there is no Bayesian incentive compatible and budget-balanced mechanism that obtains the gains from trade of the first-best mechanism. Consider the random-offerer mechanism: with probability $\frac{1}{2}$ run the \emph{seller-offering} mechanism, in which the seller offers the buyer a take-it-or-leave-it price that maximizes the expected profit of the seller, and with probability $\frac{1}{2}$ run the \emph{buyer-offering} mechanism. Very recently, Deng, Mao, Sivan, and Wang showed that the gains from trade of the random-offerer mechanism is at least a constant factor of $\frac 1 {8.23}\approx 0.121$ of the gains from trade of the first-best mechanism. Perhaps a natural conjecture is that the gains-from-trade of the random-offerer mechanism, which is known to be at least half of the gains-from-trade of the second-best mechanism, is also at least half of the gains-from-trade of the first-best mechanism. However, in this note we exhibit distributions such that the gains-from-trade of the random-offerer mechanism is smaller than a $0.495$-fraction of the gains-from-trade of the first-best mechanism.
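The random-offerer mechanism is concrete enough to simulate directly. The sketch below computes its gains from trade against the first best for explicit finite distributions; the restriction of candidate offers to the other side's support values is a standard simplification, and the code is ours, not the paper's.

```python
# Gains from trade (GFT) of the random-offerer mechanism vs. the first best,
# for explicit finite distributions given as lists of (probability, value).

def best_offer(own_value, other_dist, selling):
    # Take-it-or-leave-it price maximizing the offerer's expected profit;
    # an optimal offer can be found among the other side's support values.
    if selling:  # seller with cost own_value offers a price to the buyer
        profit = lambda p: (p - own_value) * sum(q for q, b in other_dist if b >= p)
    else:        # buyer with value own_value offers a price to the seller
        profit = lambda p: (own_value - p) * sum(q for q, s in other_dist if s <= p)
    return max((v for _, v in other_dist), key=profit)

def gains_from_trade(seller_dist, buyer_dist):
    first_best = random_offerer = 0.0
    for qs, s in seller_dist:
        p_s = best_offer(s, buyer_dist, selling=True)
        for qb, b in buyer_dist:
            p_b = best_offer(b, seller_dist, selling=False)
            first_best += qs * qb * max(b - s, 0.0)
            if p_s >= s and b >= p_s:    # seller-offering side trades
                random_offerer += 0.5 * qs * qb * (b - s)
            if p_b <= b and s <= p_b:    # buyer-offering side trades
                random_offerer += 0.5 * qs * qb * (b - s)
    return first_best, random_offerer

# Seller cost 0; buyer value uniform on {0.5, 1.0}: here both sides always trade.
print(gains_from_trade([(1.0, 0.0)], [(0.5, 0.5), (0.5, 1.0)]))
```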
We consider the problem of fair allocation of indivisible goods to n agents, with no transfers. When agents have equal entitlements, the well established notion of the maximin share (MMS) serves as an attractive fairness criterion, where to qualify as fair, an allocation needs to give every agent at least a substantial fraction of her MMS. In this paper we consider the case of arbitrary (unequal) entitlements. We explain shortcomings in previous attempts that extend the MMS to unequal entitlements. Our conceptual contribution is the introduction of a new notion of a share, the AnyPrice share (APS), that is appropriate for settings with arbitrary entitlements. The AnyPrice share of an agent is the value she can guarantee to herself if she is given a budget equal to her entitlement, and she buys her highest value affordable set when items are adversarially priced with a total price equal to the total entitlements. Even for the equal entitlements case, this notion is new, and satisfies APS ≥ MMS, where the inequality is sometimes strict. We also present an alternative definition for the APS as a maximization problem (a fractional version of the MMS), and provide comparisons between the APS and previous notions of fairness. Our main result concerns additive valuations and arbitrary entitlements, for which we provide a polynomial-time algorithm that gives every agent at least a 3/5-fraction of her APS. This algorithm can also be viewed as providing a strategy in a certain natural bidding game, and this strategy secures each agent that uses it at least a 3/5-fraction of her APS, regardless of the strategies used by other agents.
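In symbols, the price-based definition above can be written as a minimization (this rendering is ours, following the verbal description; $b_i$ denotes agent $i$'s entitlement, with entitlements normalized to sum to 1):

$$\mathrm{APS}_i = \min_{p \ge 0,\ \sum_{j \in M} p_j = 1}\ \max\Big\{\, v_i(S) \;:\; S \subseteq M,\ \sum_{j \in S} p_j \le b_i \Big\}.$$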
A common assumption in auction theory is that the information available to the agents is given exogenously and that the auctioneer has full control over the market. In practice, agents might be able to acquire information about their competitors before the auction (by exerting some costly effort), and might be able to resell acquired items in an aftermarket. The auctioneer has no control over those aspects, yet their existence influences agents' strategic behavior and the overall equilibrium welfare can strictly decrease as a result. We show that if an auction is smooth (e.g., first-price auction, all-pay auction), then the corresponding price of anarchy bound due to smoothness continues to hold in any environment with (a) information acquisition on opponents' valuations, and/or (b) an aftermarket satisfying two mild conditions (voluntary participation and weak budget balance). We also consider the special case with two ex ante symmetric bidders, where the first-price auction is known to be efficient in isolation. We show that information acquisition can lead to efficiency loss in this environment, but aftermarkets do not: any equilibrium of a first-price or all-pay auction combined with an aftermarket is still efficient.
Competitive equilibrium from equal incomes (CEEI) is a classic solution to the problem of fair and efficient allocation of goods (Foley 1967, Varian 1974). Every agent receives an equal budget of artificial currency with which to purchase goods, and prices match demand and supply. However, a CEEI is not guaranteed to exist when the goods are indivisible even in the simple two-agent, single-item market. Yet it is easy to see that, once the two budgets are slightly perturbed (made generic), a competitive equilibrium does exist. In this paper, we aim to extend this approach beyond the single-item case and study the existence of equilibria in markets with two agents and additive preferences over multiple items. We show that, for agents with equal budgets, making the budgets generic—by adding vanishingly small random perturbations—ensures the existence of equilibrium. We further consider agents with arbitrary nonequal budgets, representing nonequal entitlements for goods. We show that competitive equilibrium guarantees a new notion of fairness among nonequal agents and that it exists in cases of interest (such as when the agents have identical preferences) if budgets are perturbed. Our results open opportunities for future research on generic equilibrium existence and fair treatment of nonequals.
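The single-item example mentioned above works out as follows. With one item and equal budgets $b_1 = b_2 = 1$, any price $p \le 1$ leaves both agents demanding the item (over-demand), while any price $p > 1$ leaves the item unsold, so the market never clears and no competitive equilibrium exists. After a generic perturbation to budgets $1$ and $1+\epsilon$, any price $p \in (1, 1+\epsilon]$ clears the market: only the richer agent can afford the item, and she demands it.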
We consider the problem of fair allocation of indivisible items among $n$ agents with additive valuations, when agents have equal entitlements to the goods, and there are no transfers. Best-of-Both-Worlds (BoBW) fairness mechanisms aim to give all agents both an ex-ante guarantee (such as getting the proportional share in expectation) and an ex-post guarantee. Prior BoBW results have focused on ex-post guarantees that are based on the "up to one item" paradigm, such as envy-free up to one item (EF1). In this work we attempt to give every agent a high value ex-post, and specifically, a constant fraction of his maximin share (MMS). The up to one item paradigm fails to give such a guarantee, and it is not difficult to present examples in which previous BoBW mechanisms give agents only a $\frac{1}{n}$ fraction of their MMS. Our main result is a deterministic polynomial time algorithm that computes a distribution over allocations that is ex-ante proportional, and ex-post, every allocation gives every agent at least his proportional share up to one item, and more importantly, at least half of his MMS. Moreover, this last ex-post guarantee holds even with respect to a more demanding notion of a share, introduced in this paper, that we refer to as the truncated proportional share (TPS). Our guarantees are nearly best possible, in the sense that one cannot guarantee agents more than their proportional share ex-ante, and one cannot guarantee agents more than a $\frac{n}{2n-1}$ fraction of their TPS ex-post.
In the early $20^{th}$ century, Pigou observed that imposing a marginal cost tax on the usage of a public good induces a socially efficient level of use as an equilibrium. Unfortunately, such a "Pigouvian" tax may also induce other, socially inefficient, equilibria. We observe that this social inefficiency may be unbounded, and study whether alternative tax structures may lead to milder losses in the worst case, i.e. to a lower price of anarchy. We show that no tax structure leads to bounded losses in the worst case. However, we do find a tax scheme that has a lower price of anarchy than the Pigouvian tax, obtaining tight lower and upper bounds in terms of a crucial parameter that we identify. We generalize our results to various scenarios that each offers an alternative to the use of a public road by private cars, such as ride sharing, or using a bus or a train.
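For background, Pigou's observation can be stated in a standard congestion formulation (our notation, not the paper's): if each of $x$ users of the public good experiences cost $c(x)$, the social cost is $x\,c(x)$, and a tax of

$$\tau(x) = x\, c'(x)$$

makes each user's perceived cost $c(x) + \tau(x)$ equal to the marginal social cost $\frac{d}{dx}\big(x\,c(x)\big) = c(x) + x\,c'(x)$, so the socially efficient usage level is an equilibrium. The difficulty studied in the paper is that it need not be the only equilibrium.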
A prevalent assumption in auction theory is that the auctioneer has full control over the market and that the allocation she dictates is final. In practice, however, agents might be able to resell acquired items in an aftermarket. A prominent example is the market for carbon emission allowances. These allowances are commonly allocated by the government using uniform-price auctions, and firms can typically trade these allowances among themselves in an aftermarket that may not be fully under the auctioneer's control. While the uniform-price auction is approximately efficient in isolation, we show that speculation and resale in aftermarkets might result in a significant welfare loss. Motivated by this issue, we consider three approaches, each ensuring high equilibrium welfare in the combined market. The first approach is to adopt smooth auctions such as discriminatory auctions. This approach is robust to correlated valuations and to participants acquiring information about others' types. However, discriminatory auctions have several downsides, notably that of charging bidders different prices for identical items, resulting in fairness concerns that make the format unpopular. Two other approaches we suggest are either using posted-pricing mechanisms, or using uniform-price auctions with anonymous reserves. We show that when using balanced prices, both these approaches ensure high equilibrium welfare in the combined market. The latter also inherits many of the benefits from uniform-price auctions such as price discovery, and can be introduced with a minor modification to auctions currently in use to sell carbon emission allowances.
We consider a monopolist seller with n heterogeneous items, facing a single buyer. The buyer has a value for each item drawn independently according to (non-identical) distributions, and her value for a set of items is additive. The seller aims to maximize his revenue. We suggest using the a priori better of two simple pricing methods: selling the items separately, each at its optimal price, and bundling together, in which the entire set of items is sold as one bundle at its optimal price. We show that for any distribution, this mechanism achieves a constant-factor approximation to the optimal revenue. Beyond its simplicity, this is the first computationally tractable mechanism to obtain a constant-factor approximation for this multi-parameter problem. We additionally discuss extensions to multiple buyers and to valuations that are correlated across items.
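Both pricing methods are simple enough to sketch. The code below estimates the revenue of selling separately and of grand-bundle pricing from samples and returns the better option; choosing prices from the empirical support is our simplification for illustration, not the paper's analysis.

```python
import random

# Estimate SREV (sell items separately at per-item monopoly prices) and
# BREV (sell the grand bundle at its monopoly price), and take the better.

def monopoly_revenue(samples):
    # Best posted price chosen from the empirical support (a simplification).
    n = len(samples)
    return max(p * sum(v >= p for v in samples) / n for p in set(samples))

def better_of_srev_brev(item_samplers, n=10_000):
    draws = [[draw() for draw in item_samplers] for _ in range(n)]
    srev = sum(monopoly_revenue([d[i] for d in draws])
               for i in range(len(item_samplers)))
    brev = monopoly_revenue([sum(d) for d in draws])  # additive buyer: bundle value is the sum
    return max(("sell separately", srev), ("grand bundle", brev), key=lambda t: t[1])

# Example: one uniform item and one heavy-tailed item (illustrative only).
print(better_of_srev_brev([lambda: random.random(), lambda: random.paretovariate(2)]))
```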
Mechanisms with money are commonly designed under the assumption that agents are quasi-linear, meaning they have linear disutility for spending money. We study the implications when agents with non-linear (specifically, convex) disutility for payments participate in mechanisms designed for quasi-linear agents. We first show that any mechanism that is truthful for quasi-linear buyers has a simple best response function for buyers with non-linear disutility from payments, in which each bidder simply scales down her value for each potential outcome by a fixed factor, equal to her target return on investment (ROI). We call such a strategy ROI-optimal. We prove the existence of a Nash equilibrium in which agents use ROI-optimal strategies for a general class of allocation problems. Motivated by online marketplaces, we then focus on simultaneous second-price auctions for additive bidders and show that all ROI-optimal equilibria in this setting achieve constant-factor approximations to suitable welfare and revenue benchmarks.
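The ROI-optimal best response is concrete enough to sketch for the simultaneous second-price setting named above. The scaling $v/(1+\mathrm{ROI})$ is one natural reading of "scales down her value by a fixed factor, equal to her target ROI", and the tie-breaking is our simplification; both are illustrative assumptions.

```python
# Sketch: a bidder with a target return on investment scales every item
# value down by a fixed factor and bids the scaled values in each of m
# simultaneous second-price auctions (additive values assumed).

def roi_optimal_bids(values, target_roi):
    # One natural reading of the fixed scaling factor (an assumption here).
    return [v / (1.0 + target_roi) for v in values]

def run_simultaneous_spa(all_bids):
    # Per item: highest bid wins and pays the second-highest bid.
    outcomes = []
    for item_bids in zip(*all_bids):
        order = sorted(range(len(item_bids)), key=lambda i: -item_bids[i])
        outcomes.append((order[0], item_bids[order[1]]))  # (winner, payment)
    return outcomes

bids = [roi_optimal_bids([1.0, 0.6], target_roi=0.25),
        roi_optimal_bids([0.7, 0.9], target_roi=0.5)]
print(run_simultaneous_spa(bids))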
Bulow-Klemperer-Style Results for Welfare Maximization in Two-Sided Markets. Moshe Babaioff, Kira Goldner, and Yannai A. Gonczarowski. Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 2452-2471. We consider the problem of welfare (and gains-from-trade) maximization in two-sided markets using simple mechanisms that are prior-independent. The seminal impossibility result of Myerson and Satterthwaite [1983] shows that even for bilateral trade, there is no feasible (individually rational, truthful, and budget balanced) mechanism that has welfare as high as the optimal-yet-infeasible VCG mechanism, which attains maximal welfare but runs a deficit. On the other hand, the optimal feasible mechanism needs to be carefully tailored to the Bayesian prior, and even worse, it is known to be extremely complex, eluding a precise description. In this paper we present Bulow-Klemperer-style results to circumvent these hurdles in double-auction market settings. We suggest using the Buyer Trade Reduction (BTR) mechanism, a variant of McAfee's mechanism, which is feasible and simple (in particular, it is deterministic, truthful, prior-independent, and anonymous). First, in the setting in which the values of the buyers and of the sellers are sampled independently and identically from the same distribution, we show that for any such market of any size, BTR with one additional buyer whose value is sampled from the same distribution has expected welfare at least as high as the optimal-yet-infeasible VCG mechanism in the original market. We then move to a more general setting in which the values of the buyers are sampled from one distribution, and those of the sellers from another, focusing on the case where the buyers' distribution first-order stochastically dominates the sellers' distribution. We present both upper bounds and lower bounds on the number of buyers that, when added, guarantees that BTR in the augmented market achieves welfare at least as high as the optimal in the original market. Our lower bounds extend to a large class of mechanisms, and all of our positive and negative results extend to adding sellers instead of buyers. In addition, we present positive results about the usefulness of pricing at a sample for welfare maximization (and more precisely, for gains-from-trade approximation) in two-sided markets under the above two settings, which to the best of our knowledge are the first sampling results in this context.
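For background, the classic McAfee trade-reduction step that BTR varies can be sketched as follows. BTR's actual reduction and payment rule differs in detail (it reduces on the buyer side; see the paper), so this is context, not the paper's mechanism.

```python
# McAfee-style trade reduction. Sort buyers descending and sellers ascending;
# the efficient trade size is the largest k with b_k >= s_k. If the single
# price p = (b_{k+1} + s_{k+1}) / 2 is agreeable to the first k buyer-seller
# pairs, all k trade at p; otherwise the least valuable trade is reduced:
# the first k-1 pairs trade, buyers paying b_k and sellers receiving s_k.

def trade_reduction(buyer_bids, seller_bids):
    b = sorted(buyer_bids, reverse=True) + [0.0]           # sentinel b_{k+1}
    s = sorted(seller_bids) + [float("inf")]               # sentinel s_{k+1}
    k = max((i + 1 for i in range(min(len(buyer_bids), len(seller_bids)))
             if b[i] >= s[i]), default=0)
    if k == 0:
        return 0, None, None                               # no efficient trade
    p = (b[k] + s[k]) / 2.0                                # (k+1)-st values, 0-indexed
    if s[k - 1] <= p <= b[k - 1]:
        return k, p, p                                     # all k trades at one price
    return k - 1, b[k - 1], s[k - 1]                       # reduce one trade

print(trade_reduction([0.9, 0.7, 0.3], [0.2, 0.5, 0.8]))   # -> (2, 0.55, 0.55)
```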
We consider transferable-utility profit-sharing games that arise from settings in which agents need to jointly choose one of several alternatives, and may use transfers to redistribute the welfare generated by the chosen alternative. One such setting is the Shared-Rental problem, in which students jointly rent an apartment and need to decide which bedroom to allocate to each student, depending on the student's preferences. Many solution concepts have been proposed for such settings, ranging from mechanisms without transfers, such as Random Priority and the Eating mechanism, to mechanisms with transfers, such as envy free solutions, the Shapley value, and the Kalai-Smorodinsky bargaining solution. We seek a solution concept that satisfies three natural properties, concerning efficiency, fairness and decomposition. We observe that every solution concept known (to us) fails to satisfy at least one of the three properties. We present a new solution concept, designed so as to satisfy the three properties. A certain submodularity condition (which holds in interesting special cases such as the Shared-Rental setting) implies both existence and uniqueness of our solution concept.
We consider the problem of welfare maximization in two-sided markets using simple mechanisms that are prior-independent. The Myerson-Satterthwaite impossibility theorem shows that even for bilateral trade, there is no feasible (IR, truthful, budget balanced) mechanism that has welfare as high as the optimal-yet-infeasible VCG mechanism, which attains maximal welfare but runs a deficit. On the other hand, the optimal feasible mechanism needs to be carefully tailored to the Bayesian prior, and is extremely complex, eluding a precise description. We present Bulow-Klemperer-style results to circumvent these hurdles in double-auction markets. We suggest using the Buyer Trade Reduction (BTR) mechanism, a variant of McAfee's mechanism, which is feasible and simple (in particular, deterministic, truthful, prior-independent, anonymous). First, in the setting where buyers' and sellers' values are sampled i.i.d. from the same distribution, we show that for any such market of any size, BTR with one additional buyer whose value is sampled from the same distribution has expected welfare at least as high as the optimal in the original market. We then move to a more general setting where buyers' values are sampled from one distribution and sellers' from another, focusing on the case where the buyers' distribution first-order stochastically dominates the sellers'. We present bounds on the number of buyers that, when added, guarantees that BTR in the augmented market has welfare at least as high as the optimal in the original market. Our lower bounds extend to a large class of mechanisms, and all of our results extend to adding sellers instead of buyers. In addition, we present positive results about the usefulness of pricing at a sample for welfare maximization in two-sided markets under the above two settings, which to the best of our knowledge are the first sampling results in this context.
We study competitive equilibrium in the canonical Fisher market model, but with indivisible goods. In this model, every agent has a budget of artificial currency with which to purchase bundles of goods. Equilibrium prices match between demand and supply---at such prices, all agents simultaneously get their favorite within-budget bundle, and the market clears. Unfortunately, a competitive equilibrium may not exist when the goods are indivisible, even in extremely simple markets such as two agents with exactly the same budget and a single item. Yet in this example, once the budgets are slightly perturbed---i.e., made generic---a competitive equilibrium is guaranteed to exist. In this paper we explore the extent to which generic budgets can guarantee equilibrium existence (and thus related fairness guarantees) in markets with multiple items. We complement our results in [Babaioff et al., 2019] for additive preferences by exploring the case of general monotone preferences, establishing positive results for small numbers of items and mapping the limits of our approach. We then consider cardinal preferences, define a hierarchy of such preference classes and establish relations among them, and for some classes prove equilibrium existence under generic budgets.
We study revenue maximization by deterministic mechanisms for the simplest case for which Myerson's characterization does not hold: a single seller selling two items, with independently distributed values, to a single additive buyer. We prove that optimal mechanisms are submodular and hence monotone. Furthermore, we show that in the IID case, optimal mechanisms are symmetric. Our characterizations are surprisingly non-trivial, and we show that they fail to extend in several natural ways, e.g. for correlated distributions or more than two items. In particular, this shows that the optimality of symmetric mechanisms does not follow from the symmetry of the IID distribution.
We study combinatorial auctions with bidders that exhibit the endowment effect. In most of the previous work on cognitive biases in algorithmic game theory (e.g., [Kleinberg and Oren, EC'14] and its follow-ups) the focus was on analyzing the implications and mitigating their negative consequences. In contrast, in this paper we show how in some cases cognitive biases can be harnessed to obtain better outcomes. Specifically, we study Walrasian equilibria in combinatorial markets. It is well known that Walrasian equilibria exist only in limited settings, e.g., when all valuations are gross substitutes, but fail to exist in more general settings, e.g., when the valuations are submodular. We consider combinatorial settings in which bidders exhibit the endowment effect, that is, their value for items increases with ownership. Our main result shows that when the valuations are submodular, even a mild degree of endowment effect is sufficient to guarantee the existence of Walrasian equilibria. In fact, we show that in contrast to Walrasian equilibria with standard utility maximizing bidders -- in which the equilibrium allocation must be efficient -- when bidders exhibit the endowment effect any local optimum can be an equilibrium allocation. Our techniques reveal interesting connections between the LP relaxation of combinatorial auctions and local maxima. We also provide lower bounds on the intensity of the endowment effect that the bidders must have in order to guarantee the existence of a Walrasian equilibrium in various settings.
The seminal impossibility result of Myerson and Satterthwaite (1983) states that for bilateral trade, there is no mechanism that is individually rational (IR), incentive compatible (IC), weakly budget balanced, and efficient. This has led follow-up work on two-sided trade settings to weaken the efficiency requirement and consider approximately efficient simple mechanisms, while still demanding the other properties. The current state-of-the-art of such mechanisms for two-sided markets can be categorized as giving one (but not both) of the following two types of approximation guarantees on the gains from trade: a constant ex-ante guarantee, measured with respect to the second-best efficiency benchmark, or an asymptotically optimal ex-post guarantee, measured with respect to the first-best efficiency benchmark. Here the second-best efficiency benchmark refers to the highest gains from trade attainable by any IR, IC and weakly budget balanced mechanism, while the first-best efficiency benchmark refers to the maximum gains from trade (attainable by the VCG mechanism, which is not weakly budget balanced). In this paper, we construct simple mechanisms for double-auction and matching markets that simultaneously achieve both types of guarantees: these are ex-post IR, Bayesian IC, and ex-post weakly budget balanced mechanisms that 1) ex-ante guarantee a constant fraction of the gains from trade of the second-best, and 2) ex-post guarantee a realization-dependent fraction of the gains from trade of the first-best, such that this realization-dependent fraction converges to 1 (full efficiency) as the market grows large.
The literature on "mechanism design from samples," which has flourished in recent years at the interface of economics and computer science, offers a bridge between the classic computer-science approach of worst-case analysis (corresponding to "no samples") and the classic economic approach of average-case analysis for a given Bayesian prior (conceptually corresponding to the number of samples tending to infinity). Nonetheless, the two directions studied so far are two extreme and almost diametrically opposed directions: that of asymptotic results where the number of samples grows large, and that where only a single sample is available. In this paper, we take a first step toward understanding the middle ground that bridges these two approaches: that of a fixed number of samples greater than one. In a variety of contexts, we ask what is possibly the most fundamental question in this direction: "are two samples really better than one sample?". We present a few surprising negative results, and complement them with our main result: showing that the worst-case, over all regular distributions, expected-revenue guarantee of the Empirical Revenue Maximization algorithm given two samples is greater than that of this algorithm given one sample. The proof is technically challenging, and provides the first result that shows that some deterministic mechanism constructed using two samples can guarantee more than one half of the optimal revenue.
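For concreteness, here is a small simulation sketch of the Empirical Revenue Maximization algorithm the result concerns: post the sample value that maximizes empirical revenue, and compare the average true revenue given one sample versus two. The value distribution U[0,1] (a regular distribution, with optimal monopoly revenue 1/4) is an illustrative assumption.

```python
import random

def erm_price(samples):
    """Empirical Revenue Maximization: among the sampled values, post the
    price p maximizing empirical revenue p * |{i : v_i >= p}| / m."""
    m = len(samples)
    return max(samples, key=lambda p: p * sum(v >= p for v in samples) / m)

def true_revenue(price):
    """Expected revenue of a posted price against a fresh U[0,1] value."""
    return price * (1.0 - price)

random.seed(0)
trials = 200_000
for m in (1, 2):
    avg = sum(true_revenue(erm_price([random.random() for _ in range(m)]))
              for _ in range(trials)) / trials
    print(f"ERM with {m} sample(s): expected revenue ~ {avg:.4f}")
```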
In the design and analysis of revenue-maximizing auctions, auction performance is typically measured with respect to a prior distribution over inputs. The most obvious source for such a distribution is past data. The goal of this paper is to understand how much data is necessary and sufficient to guarantee near-optimal expected revenue.
In a multi-armed bandit problem, an online algorithm chooses from a set of strategies in a sequence of $n$ trials so as to maximize the total payoff of the chosen strategies. While the performance of bandit algorithms with a small finite strategy set is quite well understood, bandit problems with large strategy sets are still a topic of very active investigation, motivated by practical applications such as online auctions and web advertisement. The goal of such research is to identify broad and natural classes of strategy sets and payoff functions which enable the design of efficient solutions.
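As a concrete instance of the well-understood small-finite-set case, here is a short sketch of the standard UCB1 index policy of Auer et al. (not this paper's contribution; the large-strategy-set algorithms the text refers to exploit structure on the strategy space instead). Bernoulli arm payoffs are an illustrative assumption.

```python
import math, random

def ucb1(arms, horizon):
    """UCB1 for a finite strategy set: play each arm once, then pull the
    arm maximizing (empirical mean + sqrt(2 ln t / pulls))."""
    n = [0] * len(arms)
    total = [0.0] * len(arms)
    reward = 0.0
    for t in range(1, horizon + 1):
        if t <= len(arms):
            i = t - 1  # initialization: pull each arm once
        else:
            i = max(range(len(arms)),
                    key=lambda j: total[j] / n[j] + math.sqrt(2 * math.log(t) / n[j]))
        r = arms[i]()  # pull arm i, observe a stochastic payoff in [0, 1]
        n[i] += 1
        total[i] += r
        reward += r
    return reward

random.seed(1)
bernoulli = lambda p: (lambda: float(random.random() < p))
print(ucb1([bernoulli(0.4), bernoulli(0.6)], horizon=10_000))
```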
We investigate the power of randomness in the context of a fundamental Bayesian optimal mechanism design problem - a single seller aims to maximize expected revenue by allocating multiple kinds of resources to "unit-demand" agents with preferences drawn from a known distribution. When the agents' preferences are single-dimensional, Myerson's seminal work [14] shows that randomness offers no benefit - the optimal mechanism is always deterministic. In the multi-dimensional case, where each agent's preferences are given by different values for each of the available services, Briest et al. [6] recently showed that the gap between the expected revenue obtained by an optimal randomized mechanism and an optimal deterministic mechanism can be unbounded even when a single agent is offered only 4 services. However, this large gap is attained through unnatural instances where values of the agent for different services are correlated in a specific way. We show that when the agent's values involve no correlation or a specific kind of positive correlation, the benefit of randomness is only a small constant factor (4 and 8 respectively). Our model of positively correlated values (that we call the common base value model) is a natural model for unit-demand agents and items that are substitutes. Our results extend to multiple agent settings as well.
In many settings the power of truthful mechanisms is severely bounded. In this paper we use randomization to overcome this problem. In particular, we construct an FPTAS for multi-unit auctions that is truthful in expectation, whereas there is evidence that no polynomial-time truthful deterministic mechanism provides an approximation ratio better than 2. We also show for the first time that truthful in expectation polynomial-time mechanisms are provably stronger than polynomial-time universally truthful mechanisms. Specifically, we show that there is a setting in which: (1) there is a non-polynomial time truthful mechanism that always outputs the optimal solution, and that (2) no universally truthful randomized mechanism can provide an approximation ratio better than 2 in polynomial time, but (3) an FPTAS that is truthful in expectation exists.
Traditionally, the Bayesian optimal auction design problem has been considered either when the bidder values are i.i.d., or when each bidder is individually identifiable via her value distribution. The latter is a reasonable approach when the bidders can be classified into a few categories, but there are many instances where the classification of bidders is a continuum. For example, the classification of the bidders may be based on their annual income, their propensity to buy an item based on past behavior, or in the case of ad auctions, the click through rate of their ads. We introduce an alternate model that captures this aspect, where bidders are a priori identical, but can be distinguished based (only) on some side information the auctioneer obtains at the time of the auction. We extend the sample complexity approach of Dhangwatnotai et al. and Cole and Roughgarden to this model and obtain almost matching upper and lower bounds. As an aside, we obtain a revenue monotonicity lemma which may be of independent interest. We also show how to use Empirical Risk Minimization techniques to improve the sample complexity bound of Cole and Roughgarden for the non-identical but independent value distribution case.
We study the revenue maximization problem of a seller with n heterogeneous items for sale to a single buyer whose valuation function for sets of items is unknown and drawn from some distribution D. We show that if D is a distribution over subadditive valuations with independent items, then the better of pricing each item separately or pricing only the grand bundle achieves a constant-factor approximation to the revenue of the optimal mechanism. This includes buyers who are k-demand, additive up to a matroid constraint, or additive up to constraints of any downwards-closed set system (and whose values for the individual items are sampled independently), as well as buyers who are fractionally subadditive with item multipliers drawn independently. Our proof makes use of the core-tail decomposition framework developed in prior work showing similar results for the significantly simpler class of additive buyers [Li and Yao 2013; Babaioff et al. 2014].
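A quick Monte Carlo sketch of the "better of selling separately or the grand bundle" benchmark, under illustrative assumptions far simpler than the paper's setting (an additive buyer with i.i.d. exponential item values):

```python
import random

def best_price_revenue(values):
    """Best posted price against an empirical value distribution: sorted
    descending, the k-th highest value sells to a k/m fraction of draws."""
    vals = sorted(values, reverse=True)
    m = len(vals)
    return max(v * (k + 1) / m for k, v in enumerate(vals))

random.seed(0)
n, samples = 5, 20_000
draws = [[random.expovariate(1.0) for _ in range(n)] for _ in range(samples)]

srev = sum(best_price_revenue([d[i] for d in draws]) for i in range(n))  # item pricing
brev = best_price_revenue([sum(d) for d in draws])                       # grand bundle
print(f"SREV ~ {srev:.2f}, BREV ~ {brev:.2f}, better of the two ~ {max(srev, brev):.2f}")
```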
Algorithmic pricing is the computational problem that sellers (e.g., in supermarkets) face when trying to set prices for their items to maximize their profit in the presence of a known demand. Guruswami et al. (SODA, 2005) proposed this problem and gave logarithmic approximations (in the number of consumers) for the unit-demand and single-parameter cases where there is a specific set of consumers and their valuations for bundles are known precisely. Subsequently several versions of the problem have been shown to have poly-logarithmic inapproximability. This problem has direct ties to the important open question of better understanding the Bayesian optimal mechanism in multi-parameter agent settings; however, for this purpose approximation factors logarithmic in the number of agents are inadequate. It is therefore of vital interest to consider special cases where constant approximations are possible. We consider the unit-demand variant of this pricing problem. Here a consumer has a valuation for each different item and their value for a set of items is simply the maximum value they have for any item in the set. Instead of considering a set of consumers with precisely known preferences, like the prior algorithmic pricing literature, we assume that the preferences of the consumers are drawn from a distribution. This is the standard assumption in economics; furthermore, the setting of a specific set of customers with specific preferences, which is employed in all of the prior work in algorithmic pricing, is a special case of this general Bayesian pricing problem, where there is a discrete Bayesian distribution for preferences specified by picking one consumer uniformly from the given set of consumers. Notice that the distribution over the valuations for the individual items that this generates is obviously correlated. Our work complements these existing works by considering the case where the consumer's valuations for the different items are independent random variables. Our main result is a constant approximation algorithm for this problem that makes use of an interesting connection between this problem and the concept of virtual valuations from the single-parameter Bayesian optimal mechanism design literature.
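For background (standard single-parameter theory, not specific to this paper): the virtual valuation of a bidder with value $v$ drawn from a distribution with CDF $F$ and density $f$ is $\varphi(v) = v - \frac{1 - F(v)}{f(v)}$, and Myerson's optimal single-item auction allocates so as to maximize virtual value. For example, for $v \sim U[0,1]$ one gets $\varphi(v) = v - (1 - v)/1 = 2v - 1$, giving the familiar monopoly reserve price $\varphi^{-1}(0) = 1/2$.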
In this paper we show that for any mechanism design problem with the objective of maximizing social welfare, the exponential mechanism can be implemented as a truthful mechanism while still preserving differential privacy. Our instantiation of the exponential mechanism can be interpreted as a generalization of the VCG mechanism in the sense that the VCG mechanism is the extreme case when the privacy parameter goes to infinity. To our knowledge, this is the first general tool for designing mechanisms that are both truthful and differentially private.
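A minimal sketch of the exponential mechanism of McSherry and Talwar that the result instantiates; the truthful payment computation (the paper's contribution) is omitted, and the digital-goods pricing example with its parameters is an illustrative assumption.

```python
import math, random

def exponential_mechanism(outcomes, quality, epsilon, sensitivity=1.0):
    """Sample an outcome with probability proportional to
    exp(epsilon * quality(outcome) / (2 * sensitivity))."""
    weights = [math.exp(epsilon * quality(o) / (2.0 * sensitivity)) for o in outcomes]
    return random.choices(outcomes, weights=weights, k=1)[0]

# Example: pick a price for a digital good; the quality score is revenue
# on the (private) bid profile. With bids in [0, 1], changing one bid
# changes revenue at any price by at most 1, so sensitivity = 1.
bids = [0.3, 0.8, 0.9, 0.5]
prices = [i / 10 for i in range(1, 11)]
revenue = lambda p: p * sum(b >= p for b in bids)
random.seed(0)
print(exponential_mechanism(prices, revenue, epsilon=0.5, sensitivity=1.0))
```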
We study the problem of setting a price for a potential buyer with a valuation drawn from an unknown distribution D. The seller has "data" about D in the form of m ≥ 1 i.i.d. samples, and the algorithmic challenge is to use these samples to obtain expected revenue as close as possible to what could be achieved with advance knowledge of D.
Simultaneous item auctions are simple and practical procedures for allocating items to bidders with potentially complex preferences. In a simultaneous auction, every bidder submits independent bids on all items simultaneously. The allocation and prices are then resolved for each item separately, based solely on the bids submitted on that item. We study the efficiency of Bayes-Nash equilibrium (BNE) outcomes of simultaneous first- and second-price auctions when bidders have complement-free (a.k.a. subadditive) valuations. While it is known that the social welfare of every pure Nash equilibrium (NE) constitutes a constant fraction of the optimal social welfare, a pure NE rarely exists, and moreover, the full information assumption is often unrealistic. Therefore, quantifying the welfare loss in Bayes-Nash equilibria is of particular interest. Previous work established a logarithmic bound on the ratio between the social welfare of a BNE and the expected optimal social welfare in both first-price auctions (Hassidim et al., 2011) and second-price auctions (Bhawalkar and Roughgarden, 2011), leaving a large gap between a constant and a logarithmic ratio. We introduce a new proof technique and use it to resolve both of these gaps in a unified way. Specifically, we show that the expected social welfare of any BNE is at least 1/2 of the optimal social welfare in the case of first-price auctions, and at least 1/4 in the case of second-price auctions.
The intuition that profit is optimized by maximizing marginal revenue is a guiding principle in microeconomics. In the classical auction theory for agents with quasi-linear utility and single-dimensional preferences, [BR89] show that the optimal auction of [M81] is in fact optimizing marginal revenue. In particular Myerson's virtual values are exactly the derivative of an appropriate revenue curve. This paper considers mechanism design in environments where the agents have multi-dimensional and non-linear preferences. Understanding good auctions for these environments is considered to be the main challenge in Bayesian optimal mechanism design. In these environments maximizing marginal revenue may not be optimal, and furthermore, there is sometimes no direct way to implement the marginal revenue maximization mechanism. Our contributions are threefold: we characterize the settings for which marginal revenue maximization is optimal (by identifying an important condition that we call revenue linearity), we give simple procedures for implementing marginal revenue maximization in general, and we show that marginal revenue maximization is approximately optimal. Our approximation factor smoothly degrades in a term that quantifies how far the environment is from an ideal one (i.e., where marginal revenue maximization is optimal). Because the marginal revenue mechanism is optimal for well-studied single-dimensional agents, our generalization immediately extends many approximation results for single-dimensional agents to more general preferences. Finally, one of the biggest open questions in Bayesian algorithmic mechanism design is in developing methodologies that are not brute-force in size of the agent type space (usually exponential in the dimension for multi-dimensional agents). Our methods identify a subproblem that, e.g., for unit-demand agents with values drawn from product distributions, enables approximation mechanisms that are polynomial in the dimension.
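In the classical single-dimensional setting this intuition is exact (standard background, not the paper's generalization): writing $q = 1 - F(v)$ for the sale probability, the revenue curve is $R(q) = q \cdot F^{-1}(1-q)$, and the marginal revenue $R'(q)$ equals Myerson's virtual value at $v = F^{-1}(1-q)$. For $v \sim U[0,1]$: $R(q) = q(1-q)$ and $R'(q) = 1 - 2q = 2v - 1 = \varphi(v)$.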
We provide a reduction from revenue maximization to welfare maximization in multidimensional Bayesian auctions with arbitrary - possibly combinatorial - feasibility constraints and independent bidders with arbitrary - possibly combinatorial - demand constraints, appropriately extending Myerson's single-dimensional result [21] to this setting. We also show that every feasible Bayesian auction - including in particular the revenue-optimal one - can be implemented as a distribution over virtual VCG allocation rules. A virtual VCG allocation rule has the following simple form: Every bidder's type ti is transformed into a virtual type fi(ti), via a bidder-specific function. Then, the allocation maximizing virtual welfare is chosen. Using this characterization, we show how to find and run the revenue-optimal auction given only black-box access to an implementation of the VCG allocation rule. We generalize this result to arbitrarily correlated bidders, introducing the notion of a second-order VCG allocation rule. Our results are computationally efficient for all multidimensional settings where the bidders are additive, or can be efficiently mapped to be additive, albeit the feasibility and demand constraints may still remain arbitrary combinatorial. In this case, our mechanisms run in time polynomial in the number of items and the total number of bidder types, but not type profiles. This is polynomial in the number of items, the number of bidders, and the cardinality of the support of each bidder's value distribution. For generic correlated distributions, this is the natural description complexity of the problem. The runtime can be further improved to polynomial in only the number of items and the number of bidders in item-symmetric settings by making use of techniques from [15].
We study the problem of computing maximin share guarantees, a recently introduced fairness notion. Given a set of $n$ agents and a set of goods, the maximin share of a single agent is the best that she can guarantee to herself, if she would be allowed to partition the goods in any way she prefers, into $n$ bundles, and then receive her least desirable bundle. The objective then in our problem is to find a partition, so that each agent is guaranteed her maximin share. In settings with indivisible goods, such allocations are not guaranteed to exist, so we resort to approximation algorithms. Our main result is a $2/3$-approximation, that runs in polynomial time for any number of agents. This improves upon the algorithm of Procaccia and Wang, which also produces a $2/3$-approximation but runs in polynomial time only for a constant number of agents. To achieve this, we redesign certain parts of their algorithm. Furthermore, motivated by the apparent difficulty, both theoretically and experimentally, in finding lower bounds on the existence of approximate solutions, we undertake a probabilistic analysis. We prove that in randomly generated instances, with high probability there exists a maximin share allocation. This can be seen as a justification of the experimental evidence reported in relevant works. Finally, we provide further positive results for two special cases that arise from previous works. The first one is the intriguing case of $3$ agents, for which it is already known that exact maximin share allocations do not always exist (contrary to the case of $2$ agents). We provide a $7/8$-approximation algorithm, improving the previously known result of $3/4$. The second case is when all item values belong to $\{0, 1, 2\}$, extending the $\{0, 1\}$ setting studied in Bouveret and Lemaître. We obtain an exact algorithm for any number of agents in this case.
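To pin down the objective, here is a small brute-force sketch (illustrative only; realistic instances need the approximation algorithms described above) that computes an agent's maximin share under additive valuations:

```python
from itertools import product

def maximin_share(values, n):
    """Maximin share of an agent with additive values over m goods: the
    best, over all partitions into n bundles, of the least-valued bundle.
    Brute force over n^m assignments; fine only for tiny instances."""
    m = len(values)
    best = 0
    for assignment in product(range(n), repeat=m):
        bundles = [0] * n
        for good, bundle in enumerate(assignment):
            bundles[bundle] += values[good]
        best = max(best, min(bundles))
    return best

# Example: 3 agents; goods worth (4, 3, 2, 2, 1) to this agent.
# The partition {4}, {3,1}, {2,2} shows the maximin share is 4.
print(maximin_share([4, 3, 2, 2, 1], 3))  # 4
```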
For revenue and welfare maximization in single-dimensional Bayesian settings, Chawla et al. (STOC10) recently showed that sequential posted-price mechanisms (SPMs), though simple in form, can perform surprisingly well compared to the optimal mechanisms. In this paper, we give a theoretical explanation of this fact, based on a connection to the notion of correlation gap. Loosely speaking, for auction environments with matroid constraints, we can relate the performance of a mechanism to the expectation of a monotone submodular function over a random set. This random set corresponds to the winner set for the optimal mechanism, which is highly correlated, and corresponds to certain demand set for SPMs, which is independent. The notion of correlation gap of Agrawal et al. (SODA10) quantifies how much we "lose" in the expectation of the function by ignoring correlation in the random set, and hence bounds our loss in using certain SPM instead of the optimal mechanism. Furthermore, the correlation gap of a monotone and submodular function is known to be small, and it follows that certain SPM can approximate the optimal mechanism by a good constant factor. Exploiting this connection, we give tight analysis of a greedy-based SPM of Chawla et al. for several environments. In particular, we show that it gives an $e/(e-1)$-approximation for matroid environments, gives asymptotically a $1/(1-1/\sqrt{2\pi k})$-approximation for the important sub-case of $k$-unit auctions, and gives a $(p+1)$-approximation for environments with $p$-independent set system constraints.
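For concreteness, a minimal sketch of a sequential posted-price mechanism for a $k$-unit environment follows. How the prices are computed from the prior (the crux of the analysis above) is left abstract here; the uniform price in the example is an arbitrary assumption.

```python
import random

def sequential_posted_prices(prices, values, k):
    """Sequential posted-price mechanism (SPM): approach agents in order,
    offer agent i a take-it-or-leave-it price; stop after k sales."""
    winners = []
    for i, (p, v) in enumerate(zip(prices, values)):
        if v >= p:
            winners.append((i, p))
            if len(winners) == k:
                break
    return winners

# Example: a k-unit auction with i.i.d. U[0,1] values and one fixed price.
random.seed(0)
vals = [random.random() for _ in range(10)]
print(sequential_posted_prices([0.6] * 10, vals, k=3))
```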
We study fair allocation of indivisible goods to agents with unequal entitlements. Fair allocation has been the subject of many studies in both divisible and indivisible settings. Our emphasis is on the case where the goods are indivisible and agents have unequal entitlements. This problem is a generalization of the work by Procaccia and Wang (2014) wherein the agents are assumed to be symmetric with respect to their entitlements. Although Procaccia and Wang show an almost fair (constant approximation) allocation exists in their setting, our main result is in sharp contrast to their observation. We show that, in some cases with n agents, no allocation can guarantee better than 1/n approximation of a fair allocation when the entitlements are not necessarily equal. Furthermore, we devise a simple algorithm that ensures a 1/n approximation guarantee.
 Our second result is for a restricted version of the problem where the valuation of every agent for each good is bounded by the total value he wishes to receive in a fair allocation. Although this assumption might seem without loss of generality, we show it enables us to find a 1/2 approximation fair allocation via a greedy algorithm. Finally, we run some experiments on real-world data and show that, in practice, a fair allocation is likely to exist. We also support our experiments by showing positive results for two stochastic variants of the problem, namely stochastic agents and stochastic items.
We consider a general class of Bayesian Games where each player's utility depends on his type (possibly multidimensional) and on the strategy profile and where players' types are distributed independently. We show that if their full information version for any fixed instance of the type profile is a smooth game then the Price of Anarchy bound implied by the smoothness property carries over to the Bayes-Nash Price of Anarchy. We show how some proofs from the literature (item bidding auctions, greedy auctions) can be cast as smoothness proofs or be simplified using smoothness. For first price item bidding with fractionally subadditive bidders we substantially improve the existing bound (Hassidim et al., 2011) from 4 to $\frac{e}{e-1}\approx 1.58$. This also shows a very interesting separation between first and second price item bidding since second price item bidding has PoA at least 2 even under complete information. For a larger class of Bayesian Games where the strategy space of a player also changes with his type we are able to show that a slightly stronger definition of smoothness also implies a Bayes-Nash PoA bound. We show how weighted congestion games actually satisfy this stronger definition of smoothness. This allows us to show that the inefficiency bounds of weighted congestion games known in the literature carry over to incomplete versions where the weights of the players are private information. We also show how an incomplete version of a natural class of monotone valid utility games, called effort market games, is universally $(1,1)$-smooth. Hence, we show that incomplete versions of effort market games where the abilities of the players and their budgets are private information have Bayes-Nash PoA at most 2.
We provide a computationally efficient black-box reduction from mechanism design to algorithm design in very general settings. Specifically, we give an approximation-preserving reduction from truthfully maximizing any objective under arbitrary feasibility constraints with arbitrary bidder types to (not necessarily truthfully) maximizing the same objective plus virtual welfare (under the same feasibility constraints). Our reduction is based on a fundamentally new approach: we describe a mechanism's behavior indirectly only in terms of the expected value it awards bidders for certain behavior, and never directly access the allocation rule at all. Applying our new approach to revenue, we exhibit settings where our reduction holds both ways. That is, we also provide an approximation-sensitive reduction from (non-truthfully) maximizing virtual welfare to (truthfully) maximizing revenue, and therefore the two problems are computationally equivalent. With this equivalence in hand, we show that both problems are NP-hard to approximate within any polynomial factor, even for a single monotone submodular bidder. We further demonstrate the applicability of our reduction by providing a truthful mechanism maximizing fractional max-min fairness.
Myerson's classic result provides a full description of how a seller can maximize revenue when selling a single item. We address the question of revenue maximization in the simplest possible multi-item setting: two items and a single buyer who has independently distributed values for the items, and an additive valuation. In general, the revenue achievable from selling two independent items may be strictly higher than the sum of the revenues obtainable by selling each of them separately. In fact, the structure of optimal (i.e., revenue-maximizing) mechanisms for two items even in this simple setting is not understood.
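A standard two-point example (folklore in this literature, not original to the paper) makes the gap concrete: let each item's value independently be $1$ or $2$ with probability $1/2$ each. Selling separately, the best price per item earns $\max(1 \cdot 1,\, 2 \cdot \tfrac{1}{2}) = 1$, so $2$ in total; pricing the bundle at $3$ sells whenever the sum is at least $3$, i.e., with probability $\tfrac{3}{4}$, earning $3 \cdot \tfrac{3}{4} = 2.25 > 2$.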
We consider the problem of allocating indivisible goods fairly among n agents who have additive and submodular valuations for the goods. Our fairness guarantees are in terms of the maximin share, which is defined to be the maximum value that an agent can ensure for herself, if she were to partition the goods into n bundles, and then receive a minimum valued bundle. Since maximin fair allocations (i.e., allocations in which each agent gets at least her maximin share) do not always exist, prior work has focused on approximation results that aim to find allocations in which the value of the bundle allocated to each agent is (multiplicatively) as close to her maximin share as possible. In particular, Procaccia and Wang (2014) along with Amanatidis et al. (2015) have shown that under additive valuations, a 2/3-approximate maximin fair allocation always exists and can be found in polynomial time. We complement these results by developing a simple and efficient algorithm that achieves the same approximation guarantee. Furthermore, we initiate the study of approximate maximin fair division under submodular valuations. Specifically, we show that when the valuations of the agents are nonnegative, monotone, and submodular, then a 0.21-approximate maximin fair allocation is guaranteed to exist. In fact, we show that such an allocation can be efficiently found by using a simple round-robin algorithm. A technical contribution of the article is to analyze the performance of this combinatorial algorithm by employing the concept of multilinear extensions.
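A minimal sketch of the round-robin algorithm referred to above, written for additive valuations (the submodular analysis in the article works with marginal gains via value queries; plain additive values are an illustrative simplification here):

```python
def round_robin(values):
    """Agents take turns (in a fixed order) picking their favorite
    remaining good. `values[i][g]` is agent i's additive value for good g."""
    n, m = len(values), len(values[0])
    remaining = set(range(m))
    bundles = [[] for _ in range(n)]
    turn = 0
    while remaining:
        i = turn % n
        g = max(remaining, key=lambda good: values[i][good])
        bundles[i].append(g)
        remaining.remove(g)
        turn += 1
    return bundles

# Example: 2 agents, 4 goods.
print(round_robin([[5, 1, 3, 2], [4, 4, 1, 1]]))  # [[0, 2], [1, 3]]
```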
We provide simple and approximately revenue-optimal mechanisms in the multi-item multi-bidder settings. We unify and improve all previous results, as well as generalize the results to broader cases. In particular, we prove that the better of the following two simple, deterministic and Dominant Strategy Incentive Compatible mechanisms, a sequential posted price mechanism or an anonymous sequential posted price mechanism with entry fee, achieves a constant fraction of the optimal revenue among all randomized, Bayesian Incentive Compatible mechanisms, when buyers' valuations are XOS over independent items. If the buyers' valuations are subadditive over independent items, the approximation factor degrades to O(log m), where m is the number of items. We obtain our results by first extending the Cai-Devanur-Weinberg duality framework to derive an effective benchmark of the optimal revenue for subadditive bidders, and then analyzing this upper bound with new techniques.
We initiate the study of efficient mechanism design with guaranteed good properties even when players participate in multiple mechanisms simultaneously or sequentially. We define the class of smooth mechanisms, related to smooth games defined by Roughgarden, that can be thought of as mechanisms that generate approximately market clearing prices. We show that smooth mechanisms result in high quality outcome both in equilibrium and in learning outcomes in the full information setting, as well as in Bayesian equilibrium with uncertainty about participants. Our main result is to show that smooth mechanisms compose well: smoothness locally at each mechanism implies global efficiency.
We present a polynomial-time algorithm that, given samples from the unknown valuation distribution of each bidder, learns an auction that approximately maximizes the auctioneer's revenue in a variety of single-parameter auction environments including matroid environments, position environments, and the public project environment. The valuation distributions may be arbitrary bounded distributions (in particular, they may be irregular, and may differ for the various bidders), thus resolving a problem left open by previous papers. The analysis uses basic tools, is performed in its entirety in value-space, and simplifies the analysis of previously known results for special cases. Furthermore, the analysis extends to certain single-parameter auction environments where precise revenue maximization is known to be intractable, such as knapsack environments.
We study the mechanism design problem of allocating a set of indivisible items without monetary transfers. Despite the vast literature on this very standard model, it still remains unclear what truthful mechanisms look like. We focus on the case of two players with additive valuation functions and our purpose is twofold. First, our main result provides a complete characterization of truthful mechanisms that allocate all the items to the players. Our characterization reveals an interesting structure underlying all truthful mechanisms, showing that they can be decomposed into two components: a selection part where players pick their best subset among prespecified choices determined by the mechanism, and an exchange part where players are offered the chance to exchange certain subsets if it is favorable to do so. In the remaining paper, we apply our main result and derive several consequences on the design of mechanisms with fairness guarantees. We consider various notions of fairness, (indicatively, maximin share guarantees and envy-freeness up to one item) and provide tight bounds for their approximability. Our work settles some of the open problems in this agenda, and we conclude by discussing possible extensions to more players.
The question of the minimum menu-size for approximate (i.e., up-to-$\epsilon$) Bayesian revenue maximization when selling two goods to an additive risk-neutral quasilinear buyer was introduced by Hart and Nisan [2013], who give an upper bound of $O(1/\epsilon^4)$ for this problem. Using the optimal-transport duality framework of Daskalakis, Deckelbaum, and Tzamos [2013, 2015], we derive the first lower bound for this problem, of $\Omega(1/\sqrt[4]{\epsilon})$, even when the values for the two goods are drawn i.i.d. from "nice" distributions, establishing how to reason about approximately optimal mechanisms via this duality framework. This bound implies, for any fixed number of goods, a tight bound of $\Theta(\log(1/\epsilon))$ on the minimum deterministic communication complexity guaranteed to suffice for running some approximately revenue-maximizing mechanism, thereby completely resolving this problem. As a secondary result, we show that under standard economic assumptions on distributions, the above upper bound of Hart and Nisan [2013] can be strengthened to $O(1/\epsilon^2)$.
In many natural settings agents participate in multiple different auctions that are not simultaneous. In such auctions, future opportunities affect strategic considerations of the players. The goal of this paper is to develop a quantitative understanding of outcomes of such sequential auctions. In earlier work (Paes Leme et al. 2012) we initiated the study of the price of anarchy in sequential auctions. We considered sequential first price auctions in the full information model, where players are aware of all future opportunities, as well as the valuation of all players. In this paper, we study efficiency in sequential auctions in the Bayesian environment, relaxing the informational assumption on the players. We focus on two environments, both studied in the full information model in Paes Leme et al. 2012, matching markets and matroid auctions. In the full information environment, a sequential first price cut auction for matroid settings is efficient. In Bayesian environments this is no longer the case, as we show using a simple example with three players. Our main result is a bound of 3 on the price of anarchy in both matroid auctions and matching markets. To bound the price of anarchy we need to consider possible deviations at an equilibrium. In a sequential Bayesian environment the effect of deviations is more complex than in one-shot games; early bids allow others to infer information about the player's value. We create effective deviations despite the presence of this difficulty by introducing a bluffing technique of independent interest.
Consider a gambler who observes a sequence of independent, non-negative random numbers and is allowed to stop the sequence at any time, claiming a reward equal to the most recent observation. The famous prophet inequality of Krengel, Sucheston, and Garling asserts that a gambler who knows the distribution of each random variable can achieve at least half as much reward, in expectation, as a "prophet" who knows the sampled values of each random variable and can choose the largest one. We generalize this result to the setting in which the gambler and the prophet are allowed to make more than one selection, subject to a matroid constraint. We show that the gambler can still achieve at least half as much reward as the prophet; this result is the best possible, since it is known that the ratio cannot be improved even in the original prophet inequality, which corresponds to the special case of rank-one matroids. Generalizing the result still further, we show that under an intersection of $p$ matroid constraints, the prophet's reward exceeds the gambler's by a factor of at most $O(p)$, and this factor is also tight.
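A quick simulation sketch of the rank-one (single-choice) case under illustrative assumptions (five i.i.d. U[0,1] rewards), using the standard threshold $T = \mathbb{E}[\max_i X_i]/2$, which guarantees the gambler half the prophet's expected reward; the paper's matroid generalization replaces this single threshold with adaptive ones.

```python
import random

random.seed(0)
n, trials = 5, 100_000
sample = lambda: [random.random() for _ in range(n)]

# Estimate E[max X_i] and set the classic half-guarantee threshold.
T = sum(max(sample()) for _ in range(trials)) / trials / 2

def gambler(xs, threshold):
    """Accept the first observation exceeding the threshold."""
    return next((x for x in xs if x >= threshold), 0.0)

gamb = prophet = 0.0
for _ in range(trials):
    xs = sample()
    gamb += gambler(xs, T)
    prophet += max(xs)
print(f"gambler / prophet ~ {gamb / prophet:.3f}")  # comfortably above the 0.5 bound
```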
It is well-known that selling different goods in a single bundle can significantly increase revenue, even when the valuations for the goods are independent. However, bundling is no longer profitable if the goods have high production costs. To overcome this challenge, we introduce a new mechanism, Pure Bundling with Disposal for Cost (PBDC), where after buying the bundle, the customer is allowed to return any subset of goods for their production cost. We derive both distribution-dependent and distribution-free guarantees on its profitability, which improve previous techniques. Our distribution-dependent bound suggests that the firm should never price the bundle such that the profit margin is less than 1/3 of the expected welfare, while also showing that PBDC is optimal for a large number of independent goods. Our distribution-free bound suggests that on the distributions where PBDC performs worst, individual sales perform well. Finally, we conduct extensive simulations which confirm that PBDC outperforms other simple pricing schemes overall.
We study the bilateral trade problem: one seller, one buyer and a single, indivisible item for sale. It is well known that there is no fully-efficient and incentive compatible mechanism for this problem that maintains a balanced budget. We design simple and robust mechanisms that obtain approximate efficiency with these properties. We show that even minimal use of statistical data can yield good approximation results. Finally, we demonstrate how a mechanism for this simple bilateral-trade problem can be used as a black-box for constructing mechanisms in more general environments.
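A minimal sketch of the simplest mechanism in this spirit: a single posted price set using one statistic of the seller's distribution. Here both values are assumed U[0,1] (so the seller's median is $1/2$); the paper's exact mechanisms and bounds differ.

```python
import random

random.seed(0)
trials = 200_000
p = 0.5  # posted price: the median of the seller's U[0,1] distribution

gft_mech = gft_opt = 0.0
for _ in range(trials):
    b, s = random.random(), random.random()  # buyer and seller values
    if b >= p >= s:   # trade at the posted price: IR, DSIC, budget balanced
        gft_mech += b - s
    if b > s:         # first-best benchmark: trade whenever gains exist
        gft_opt += b - s
print(f"posted-price GFT / first-best GFT ~ {gft_mech / gft_opt:.3f}")  # ~0.75 here
```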
We study a combinatorial market design problem, where a collection of indivisible objects is to be priced and sold to potential buyers subject to equilibrium constraints. The classic solution concept for such problems is Walrasian equilibrium (WE), which provides a simple and transparent pricing structure that achieves optimal social welfare. The main weakness of the WE notion is that it exists only in very restrictive cases. To overcome this limitation, we introduce the notion of a combinatorial Walrasian equilibrium (CWE), a natural relaxation of WE. The difference between a CWE and a (noncombinatorial) WE is that the seller can package the items into indivisible bundles prior to sale, and the market does not necessarily clear. We show that every valuation profile admits a CWE that obtains at least half the optimal (unconstrained) social welfare. Moreover, we devise a polynomial time algorithm that, given an arbitrary allocation, computes a CWE that achieves at least half its welfare. Thus, the economic problem of finding a CWE with high social welfare reduces to the algorithmic problem of social-welfare approximation. In addition, we show that every valuation profile admits a CWE that extracts a logarithmic fraction of the optimal welfare as revenue. Finally, to motivate the use of bundles, we establish strong lower bounds when the seller is restricted to using item prices only. The strength of our results derives partly from their generality---our results hold for arbitrary valuations that may exhibit complex combinations of substitutes and complements.
In set-system auctions, there are several overlapping teams of agents, and a task that can be completed by any of these teams. The auctioneer's goal is to hire a team and pay as little as possible. Examples of this setting include shortest-path auctions and vertex-cover auctions. Recently, Karlin, Kempe and Tamir introduced a new definition of frugality ratio for this problem. Informally, the frugality ratio is the ratio of the total payment of a mechanism to a desired payment bound; it captures the extent to which the mechanism overpays, relative to the perceived fair cost, in a truthful auction. In this paper, we propose a new truthful polynomial-time auction for the vertex cover problem and bound its frugality ratio. We show that the solution quality is within a constant factor of optimal and the frugality ratio is within a constant factor of the best possible worst-case bound; this is the first auction for this problem to have these properties. Moreover, we show how to transform any truthful auction into a frugal one while preserving the approximation ratio. Also, we consider two natural modifications of the definition of Karlin et al., and we analyse the properties of the resulting payment bounds, such as monotonicity, computational hardness, and robustness with respect to the draw-resolution rule. We study the relationships between the different payment bounds, both for general set systems and for specific set-system auctions, such as path auctions and vertex-cover auctions. We use these new definitions in the proof of our main result for vertex-cover auctions via a bootstrapping technique, which may be of independent interest.
Optimal mechanisms have been provided in quite general multi-item settings [Cai et al. 2012b], as long as each bidder's type distribution is given explicitly by listing every type in the support along with its associated probability. In the implicit setting, e.g. when the bidders have additive valuations with independent and/or continuous values for the items, these results do not apply, and it was recently shown that exact revenue optimization is intractable, even when there is only one bidder [Daskalakis et al. 2013]. Even for item distributions with special structure, optimal mechanisms have been surprisingly rare [Manelli and Vincent 2006] and the problem is challenging even in the two-item case [Hart and Nisan 2012]. In this paper, we provide a framework for designing optimal mechanisms using optimal transport theory and duality theory. We instantiate our framework to obtain conditions under which only pricing the grand bundle is optimal in multi-item settings (complementing the work of [Manelli and Vincent 2006]), as well as to characterize optimal two-item mechanisms. We use our results to derive closed-form descriptions of the optimal mechanism in several two-item settings, exhibiting also a setting where a continuum of lotteries is necessary for revenue optimization but a closed-form representation of the mechanism can still be found efficiently using our framework.
We present a general framework for proving polynomial sample complexity bounds for the problem of learning from samples the best auction in a class of "simple" auctions. Our framework captures all of the most prominent examples of "simple" auctions, including anonymous and non-anonymous item and bundle pricings, with either a single or multiple buyers. The technique we propose is to break the analysis of auctions into two natural pieces. First, one shows that the set of allocation rules has a large amount of structure; second, fixing an allocation on a sample, one shows that the set of auctions agreeing with this allocation on that sample have revenue functions with low dimensionality. Our results effectively imply that whenever it is possible to compute a near-optimal simple auction with a known prior, it is also possible to compute such an auction with an unknown prior (given a polynomial number of samples).
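The simplest instance of the learning problem behind this framework is picking a posted price for one item from samples. A minimal empirical-revenue-maximization sketch (distribution and sample sizes illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Unknown value distribution (exponential, purely illustrative).
def draw(n):
    return rng.exponential(1.0, n)

train = draw(1_000)       # samples used to learn a price
test = draw(200_000)      # fresh samples to estimate the true revenue

# Empirical revenue maximization: among prices equal to observed samples,
# pick the one maximizing price * (empirical fraction of values >= price).
candidate = np.sort(train)
emp_rev = candidate * (1 - np.arange(len(candidate)) / len(candidate))
p_hat = candidate[np.argmax(emp_rev)]

true_rev = p_hat * np.mean(test >= p_hat)
print(f"learned price {p_hat:.3f}, estimated true revenue {true_rev:.3f}")
```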
We study the implementation challenge in an abstract interdependent-values model with an arbitrary objective function. We design a generic mechanism that allows for approximately optimal implementation of insensitive objective functions in ex-post Nash equilibrium. If, furthermore, values are private, then the same mechanism is strategyproof. We apply our results to two specific models: pricing and facility location. The mechanism we design is optimal up to an additive factor on the order of one over the square root of the number of agents, and involves no utility transfers.
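For concreteness, in the facility-location model the textbook example of a truthful mechanism without transfers is placing the facility at the median of the reported locations; the paper's generic mechanism is different (it is sampling-based), but the median rule illustrates implementation with no payments. A minimal sketch:

```python
import statistics

def median_mechanism(reports):
    """Place the facility at the median reported location.

    Strategyproof on the line with single-peaked preferences: an agent to
    the left of the median can only push the median further away by
    misreporting, and symmetrically on the right. No payments are used.
    """
    return statistics.median(reports)

reports = [0.1, 0.35, 0.4, 0.8, 0.95]   # hypothetical agent locations
print("facility placed at", median_mechanism(reports))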
We consider a monopolist that is selling n items to a single additive buyer, where the buyer's values for the items are drawn according to independent distributions F1, F2, …, Fn that possibly have unbounded support. It is well known that, unlike in the single-item case, the revenue-optimal auction (a pricing scheme) may be complex, sometimes requiring a continuum of menu entries. It is also known that simple auctions with a bounded number of menu entries can extract a constant fraction of the optimal revenue. Nonetheless, the question of whether an arbitrarily high fraction of the optimal revenue can be extracted via a finite menu size remained open.
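A menu here is a finite list of entries, each pairing a vector of allocation probabilities with a price; the additive buyer selects the entry maximizing expected value minus price. The sketch below (with an arbitrary illustrative menu, not one from the paper) estimates a menu's revenue by simulation:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2

# A finite menu: (allocation probabilities in [0,1]^n, price).
# The entries are illustrative, not an optimal menu.
menu = [
    (np.array([0.0, 0.0]), 0.0),   # the outside option: buy nothing
    (np.array([1.0, 0.0]), 0.6),
    (np.array([0.0, 1.0]), 0.6),
    (np.array([1.0, 1.0]), 1.0),   # the grand bundle
]

def revenue(menu, values):
    """Each buyer picks a utility-maximizing entry; ties go to earlier entries."""
    total = 0.0
    for v in values:
        utils = [q @ v - t for q, t in menu]
        _, t = menu[int(np.argmax(utils))]
        total += t
    return total / len(values)

values = rng.uniform(0, 1, (20_000, n))   # i.i.d. U[0,1] item values (assumed)
print(f"expected revenue of this 4-entry menu: {revenue(menu, values):.3f}")
```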
For Bayesian combinatorial auctions, we present a general framework for approximately reducing the mechanism design problem for multiple buyers to the mechanism design problem for each individual buyer. Our framework can be applied to any setting which roughly satisfies the following assumptions: (i) the buyers' types must be distributed independently (not necessarily identically), (ii) the objective function must be linearly separable over the set of buyers, and (iii) the supply constraints must be the only constraints involving more than one buyer. Our framework is general in the sense that it makes no explicit assumption about any of the following: (i) the buyers' valuations (e.g., submodular, additive, etc.), (ii) the distribution of types for each buyer, and (iii) the other constraints involving individual buyers (e.g., budget constraints, etc.). We present two generic n-buyer mechanisms that use 1-buyer mechanisms as black boxes. Assuming that we have an α-approximate 1-buyer mechanism for each buyer and assuming that no buyer ever needs more than 1/k of all copies of each item for some integer k ≥ 1, our generic n-buyer mechanisms are a γ_k · α-approximation of the optimal n-buyer mechanism, where γ_k is a constant which is at least 1 - 1/√(k+3). Observe that γ_k is at least 1/2 (for k = 1) and approaches 1 as k increases. As a byproduct of our construction, we improve a generalization of prophet inequalities. Furthermore, as applications of our main theorem, we improve several results from the literature.
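The prophet-inequality connection is concrete enough to demonstrate: for independent nonnegative rewards arriving in sequence, accepting the first reward that exceeds T = E[max]/2 guarantees, in expectation, at least half of what a prophet who sees all rewards in advance collects. A minimal simulation sketch (distributions illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 5, 200_000

# Independent, non-identical nonnegative rewards (uniform ranges are illustrative).
X = np.column_stack([rng.uniform(0, i + 1, trials) for i in range(n)])

prophet = X.max(axis=1).mean()   # E[max]: the prophet's expected value
T = prophet / 2                  # classic half-of-expected-max threshold

# Gambler: stop at the first reward >= T; collect nothing if none exceeds T.
above = X >= T
first = np.argmax(above, axis=1)
hit = above.any(axis=1)
gambler = np.where(hit, X[np.arange(trials), first], 0.0).mean()

print(f"prophet {prophet:.3f}, threshold rule {gambler:.3f} (>= half of prophet)")
```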
We provide a Polynomial Time Approximation Scheme (PTAS) for the multi-dimensional unit-demand pricing problem, when the buyer's values are independent (but not necessarily identically distributed). For all ϵ > 0, we obtain a (1 + ϵ)-factor approximation to the optimal revenue in time polynomial, when the values are sampled from Monotone Hazard Rate (MHR) distributions; quasi-polynomial, when sampled from regular distributions; and polynomial in n^{poly(log r)}, when sampled from general distributions supported on a set [u_min, r·u_min]. We also provide an additive PTAS for all bounded distributions. Our algorithms are based on novel extreme value theorems for MHR and regular distributions, and apply probabilistic techniques to understand the statistical properties of revenue distributions, as well as to reduce the size of the search space of the algorithm. As a byproduct of our techniques, we establish structural properties of optimal solutions. We show that, for all ϵ > 0, g(1/ϵ) distinct prices suffice to obtain a (1 + ϵ)-factor approximation to the optimal revenue for MHR distributions, where g(1/ϵ) is a quasi-linear function of 1/ϵ that does not depend on the number of items. Similarly, for all ϵ > 0 and n > 0, g(1/ϵ · log n) distinct prices suffice for regular distributions, where n is the number of items and g(·) is a polynomial function. Finally, in the i.i.d. MHR case, we show that, as long as the number of items is a sufficiently large function of 1/ϵ, a single price suffices to achieve a (1 + ϵ)-factor approximation. Our results represent significant progress on the single-bidder case of the multi-dimensional optimal mechanism design problem, following Myerson's celebrated work on optimal mechanism design [Myerson 1981].
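The last structural result is easy to probe numerically: when a unit-demand buyer with i.i.d. values faces one uniform price p on every item, she buys her favorite item iff max_i v_i ≥ p, so revenue is p · Pr[max_i v_i ≥ p], and a grid search finds the best uniform price. A sketch, assuming exponential values (an MHR distribution) purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 20, 100_000
V = rng.exponential(1.0, (m, n))   # i.i.d. exponential values: MHR (assumed)

def revenue_uniform(p, V):
    """Unit-demand buyer facing the same price p on every item:
    she buys her favorite item iff max_i v_i >= p, paying p."""
    return p * np.mean(V.max(axis=1) >= p)

grid = np.linspace(0.0, 10.0, 500)
revs = [revenue_uniform(p, V) for p in grid]
best = grid[int(np.argmax(revs))]
print(f"best uniform price {best:.2f}, revenue {max(revs):.3f}")
```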
This Gödel Prize-winning work traces the steps toward modeling real data.
This short note exhibits a truthful-in-expectation $O(\frac {\log m} {\log \log m})$-approximation mechanism for combinatorial auctions with subadditive bidders that uses polynomial communication.
As the rapid expansion of smartphones and associated data-intensive applications continues, we expect to see renewed interest in dynamic prioritization schemes as a way to increase the total utility of a heterogeneous user base, with each user experiencing variable demand and value for access. We adapt a recent sample-based mechanism for resource allocation to this setting, which is more effective in aligning incentives in a setting with variable demand than an earlier method for pricing network resources due to Varian and Mackie-Mason (1994). Complementing our theoretical analysis, which also considers incentives on the sell side of the market, we present the results of a simulation study, confirming the effectiveness of our protocol in aligning incentives and boosting welfare.
William R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, Volume 25, Issue 3-4, December 1933, Pages 285–294. https://doi.org/10.1093/biomet/25.3-4.285
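Thompson's quantity, the probability that one unknown Bernoulli parameter exceeds another given the observed samples, is the core of what is now called Thompson sampling. A minimal sketch, assuming uniform Beta(1,1) priors and illustrative success counts:

```python
import numpy as np

rng = np.random.default_rng(6)

# Evidence: s_i successes in n_i trials for two unknown probabilities p1, p2.
s1, n1 = 7, 10     # hypothetical counts
s2, n2 = 12, 20    # hypothetical counts

# With uniform Beta(1,1) priors, the posterior of p_i is Beta(s_i+1, n_i-s_i+1).
draws = 1_000_000
p1 = rng.beta(s1 + 1, n1 - s1 + 1, draws)
p2 = rng.beta(s2 + 1, n2 - s2 + 1, draws)

# Monte Carlo estimate of Thompson's quantity P(p1 > p2 | evidence);
# playing arm 1 with exactly this probability is Thompson sampling.
print(f"P(p1 > p2 | data) ~ {np.mean(p1 > p2):.4f}")
```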