Simultaneous item auctions are simple and practical procedures for allocating items to bidders with potentially complex preferences. In a simultaneous auction, every bidder submits independent bids on all items simultaneously. The allocation and prices are then resolved for each item separately, based solely on the bids submitted on that item. We study the efficiency of Bayes-Nash equilibrium (BNE) outcomes of simultaneous first- and second-price auctions when bidders have complement-free (a.k.a. subadditive) valuations. While it is known that the social welfare of every pure Nash equilibrium (NE) constitutes a constant fraction of the optimal social welfare, a pure NE rarely exists, and moreover, the full information assumption is often unrealistic. Therefore, quantifying the welfare loss in Bayes-Nash equilibria is of particular interest. Previous work established a logarithmic bound on the ratio between the social welfare of a BNE and the expected optimal social welfare in both first-price auctions (Hassidim et al., 2011) and second-price auctions (Bhawalkar and Roughgarden, 2011), leaving a large gap between a constant and a logarithmic ratio. We introduce a new proof technique and use it to resolve both of these gaps in a unified way. Specifically, we show that the expected social welfare of any BNE is at least 1/2 of the optimal social welfare in the case of first-price auctions, and at least 1/4 in the case of second-price auctions.
We study an abstract optimal auction problem for selecting a subset of self-interested agents to whom to provide a service. A feasibility constraint governs which subsets can be simultaneously served; however, the mechanism may additionally choose to bundle unconstrained attributes, such as payments or add-ons, with the service. An agent's preference over service and attributes is given by her private type and may be multi-dimensional and non-linear. A single-agent problem is to optimize a menu to offer an agent, subject to constraints on the probabilities with which each of the agent's types is served. We give computationally tractable reductions from multi-agent auction problems to these single-agent problems. Our discussion focuses on maximizing revenue, but our results can be applied to other objectives (e.g., welfare).
The intuition that profit is optimized by maximizing marginal revenue is a guiding principle in microeconomics. In the classical auction theory for agents with quasi-linear utility and single-dimensional preferences, BR89 show that the optimal auction of M81 is in fact optimizing marginal revenue. In particular, Myerson's virtual values are exactly the derivative of an appropriate revenue curve. This paper considers mechanism design in environments where the agents have multi-dimensional and non-linear preferences. Understanding good auctions for these environments is considered to be the main challenge in Bayesian optimal mechanism design. In these environments maximizing marginal revenue may not be optimal, and furthermore, there is sometimes no direct way to implement the marginal revenue maximization mechanism. Our contributions are threefold: we characterize the settings for which marginal revenue maximization is optimal (by identifying an important condition that we call revenue linearity), we give simple procedures for implementing marginal revenue maximization in general, and we show that marginal revenue maximization is approximately optimal. Our approximation factor smoothly degrades in a term that quantifies how far the environment is from an ideal one (i.e., where marginal revenue maximization is optimal). Because the marginal revenue mechanism is optimal for well-studied single-dimensional agents, our generalization immediately extends many approximation results for single-dimensional agents to more general preferences. Finally, one of the biggest open questions in Bayesian algorithmic mechanism design is in developing methodologies that are not brute-force in the size of the agent type space (usually exponential in the dimension for multi-dimensional agents). Our methods identify a subproblem that, e.g., for unit-demand agents with values drawn from product distributions, enables approximation mechanisms that are polynomial in the dimension.
We study revenue maximization in settings where agents' values are interdependent: each agent receives a signal drawn from a correlated distribution, and agents' values are functions of all of the signals. We introduce a variant of the generalized VCG auction with reserve prices and random admission, and show that this auction gives a constant approximation to the optimal expected revenue in matroid environments. Our results do not require any assumptions on the signal distributions; however, they require the value functions to satisfy a standard single-crossing property and a concavity-type condition.
We study simple and approximately optimal auctions for agents with a particular form of risk-averse preferences. We show that, for symmetric agents, the optimal revenue (given a prior distribution over the agent preferences) can be approximated by the first-price auction (which is prior independent), and, for asymmetric agents, the optimal revenue can be approximated by an auction with simple form. These results are based on two technical methods. The first is for upper-bounding the revenue from a risk-averse agent. The second gives a payment identity for mechanisms with pay-your-bid semantics.
Designing revenue optimal auctions for selling an item to $n$ symmetric bidders is a fundamental problem in mechanism design. Myerson (1981) shows that the second price auction with an appropriate reserve price is optimal when bidders' values are drawn i.i.d. from a known regular distribution. A cornerstone in the prior-independent revenue maximization literature is a result by Bulow and Klemperer (1996) showing that the second price auction without a reserve achieves $(n-1)/n$ of the optimal revenue in the worst case. We construct a randomized mechanism that strictly outperforms the second price auction in this setting. Our mechanism inflates the second highest bid with a probability that varies with $n$. For two bidders we improve the performance guarantee from $0.5$ to $0.512$ of the optimal revenue. We also resolve a question in the design of revenue optimal mechanisms that have access to a single sample from an unknown distribution. We show that a randomized mechanism strictly outperforms all deterministic mechanisms in terms of worst case guarantee.
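The mechanism's structure is simple enough to sketch. Below is a minimal, hedged illustration: the paper tunes the inflation probability (and amount) as a function of $n$ to beat the $(n-1)/n$ benchmark, whereas here both are free placeholder parameters.

```python
import random

def inflated_second_price(bids, inflation_prob, inflation_factor):
    """Single-item auction sketch: the highest bidder wins; with probability
    inflation_prob the price is the second-highest bid scaled by
    inflation_factor, otherwise it is the plain second-highest bid. If the
    inflated price exceeds the winner's own bid, the item goes unsold
    (this is where the revenue-vs-no-sale tradeoff comes from).
    Assumes at least two bidders; both parameters are placeholders."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner, price = order[0], bids[order[1]]
    if random.random() < inflation_prob:
        price *= inflation_factor
    if price > bids[winner]:
        return None, 0.0  # no sale
    return winner, price

# Example: two bidders, a 10% chance of inflating the second bid by 1.5x.
print(inflated_second_price([0.9, 0.4], inflation_prob=0.1, inflation_factor=1.5))
```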
This short note exhibits a truthful-in-expectation $O(\frac{\log m}{\log \log m})$-approximation mechanism for combinatorial auctions with subadditive bidders that uses polynomial communication.
We study combinatorial auctions where each item is sold separately but simultaneously via a second price auction. We ask whether it is possible to efficiently compute in this game a pure Nash equilibrium with social welfare close to the optimal one. We show that when the valuations of the bidders are submodular, in many interesting settings (e.g., constant number of bidders, budget additive bidders) computing an equilibrium with good welfare is essentially as easy as computing, completely ignoring incentives issues, an allocation with good welfare. On the other hand, for subadditive valuations, we show that computing an equilibrium requires exponential communication. Finally, for XOS (a.k.a. fractionally subadditive) valuations, we show that if there exists an efficient algorithm that finds an equilibrium, it must use techniques that are very different from the ones currently known.
Crémer and McLean [1985] showed that, when buyers' valuations are drawn from a correlated distribution, an auction with full knowledge of the distribution can extract the full social surplus. We study whether this phenomenon persists when the auctioneer has only incomplete knowledge of the distribution, represented by a finite family of candidate distributions, and has sample access to the real distribution. We show that the naive approach, which uses samples to distinguish candidate distributions, may fail, whereas an extended version of the Crémer-McLean auction simultaneously extracts full social surplus under each candidate distribution. With an algebraic argument, we give a tight bound on the number of samples needed by this auction, which is the difference between the number of candidate distributions and the dimension of the linear space they span.
If a two-player social welfare maximization problem does not admit a PTAS, we prove that any maximal-in-range truthful mechanism that runs in polynomial time cannot achieve an approximation factor better than 1/2. Moreover, for the k-player version of the same problem, the hardness of approximation improves to 1/k under the same two-player hardness assumption. (We note that 1/k is achievable by a trivial deterministic maximal-in-range mechanism.) This hardness result encompasses not only deterministic maximal-in-range mechanisms, but also all universally-truthful randomized maximal-in-range algorithms, as well as a class of strictly more powerful truthful-in-expectation randomized mechanisms recently introduced by Dobzinski and Dughmi. Our result applies to any class of valuation functions that satisfies some minimal closure properties. These properties are satisfied by the valuation functions in all well-studied APX-hard social welfare maximization problems, such as coverage, submodular, and subadditive valuations. We also prove a stronger result for universally-truthful maximal-in-range mechanisms. Namely, even for the class of budgeted additive valuations, which admits an FPTAS, no such mechanism can achieve an approximation factor better than 1/k in polynomial time.
Platforms: How Much and What Should They Control? Platforms exert control in many ways: from determining which buyers are shown to which sellers to directly controlling allocation of aggregate supply across different consumer markets. How should platforms exercise control to increase social welfare? In “Transparency and Control in Platforms for Networked Markets,” authors J. Pang, W. Lin, H. Fu, J. Kleeman, E. Bitar, and A. Wierman examine the trade-offs that emerge between efficiency loss, transparency, and control across different platform designs. They show that open access platforms incentivize increased production toward perfectly competitive levels and limit efficiency loss, whereas controlled allocation designs can lead to supply-side withholding, resulting in unbounded efficiency loss. Balancing transparency and control, discriminatory access designs strictly improve upon the worst-case efficiency loss of both open access and controlled allocation platforms. Besides providing insights into the design and operation of two-sided platforms, this paper contributes to the growing theory of Cournot competition in networked markets.
We study the design of Bayesian incentive compatible mechanisms in single parameter domains, for the objective of optimizing social efficiency as measured by social cost. In the problems we consider, a group of participants compete to receive service from a mechanism that can provide such services at a cost. The mechanism wishes to choose which agents to serve in order to maximize social efficiency, but is not willing to suffer an expected loss: the agents' payments should cover the cost of service in expectation.
Understanding the query complexity for testing linear-invariant properties has been a central open problem in the study of algebraic property testing. Triangle-freeness in Boolean functions is a simple property whose testing complexity is unknown. Three Boolean functions $f_1$, $f_2$ and $f_3: \mathbb{F}_2^k \to \{0, 1\}$ are said to be triangle free if there is no $x, y \in \mathbb{F}_2^k$ such that $f_1(x) = f_2(y) = f_3(x + y) = 1$. This property is known to be strongly testable (Green 2005), but the number of queries needed is upper-bounded only by a tower of twos whose height is polynomial in $1/\epsilon$, where $\epsilon$ is the distance between the tested function triple and triangle-freeness, i.e., the minimum fraction of function values that need to be modified to make the triple triangle free. A lower bound of $(1/\epsilon)^{2.423}$ for any one-sided tester was given by Bhattacharyya and Xie (2010). In this work we improve this bound to $(1/\epsilon)^{6.619}$. Interestingly, we prove this by way of a combinatorial construction called \emph{uniquely solvable puzzles} that was at the heart of Coppersmith and Winograd's renowned matrix multiplication algorithm.
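For concreteness, the tester in question is the canonical one-sided one: sample pairs $(x, y)$ and reject upon seeing a triangle. A minimal sketch, encoding elements of $\mathbb{F}_2^k$ as $k$-bit integers so that addition is XOR:

```python
import random

def triangle_free_test(f1, f2, f3, k, num_queries):
    """Canonical one-sided tester: sample random x, y in F_2^k and reject
    as soon as a triangle f1(x) = f2(y) = f3(x ^ y) = 1 is found. It never
    rejects a triangle-free triple; the open question is how many queries
    are needed to catch triples that are epsilon-far from triangle-free."""
    for _ in range(num_queries):
        x = random.getrandbits(k)
        y = random.getrandbits(k)
        if f1(x) == 1 and f2(y) == 1 and f3(x ^ y) == 1:
            return False  # found a triangle: reject
    return True  # no triangle seen: accept

# Example: f1 = f2 = f3 = indicator of odd parity. If x and y both have odd
# parity then x ^ y has even parity, so this triple is triangle free and
# the tester always accepts.
parity = lambda z: bin(z).count("1") % 2
print(triangle_free_test(parity, parity, parity, k=16, num_queries=10000))
```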
Peer grading systems make large courses more scalable, provide students with faster and more detailed feedback, and help students to learn by thinking critically about the work of others. A key obstacle to the broader adoption of peer grading systems is motivating students to provide accurate grades. The literature has explored many different approaches to incentivizing accurate grading (which we survey in detail), but the strongest incentive guarantees have been offered by mechanisms that compare peer grades to trusted TA grades with a fixed probability. In this work, we show that less TA work is required when these probabilities are allowed to depend on the grades that students report. We prove this result in a model with two possible grades, arbitrary numbers of agents, no requirement that students grade multiple assignments, arbitrary but homogeneous noisy observation of the ground truth which students can pay a fixed cost to observe, and the possibility of misreporting grades before or after observing this signal. We give necessary and sufficient conditions for our new mechanism's feasibility, prove its optimality under these assumptions, and characterize its improvement over the previous state of the art both analytically and empirically. Finally, we relax our homogeneity assumption, allowing each student and TA to observe the ground truth according to a different noise model.
We consider a revenue optimizing seller selling a single item to a buyer, on whose private value the seller has a noisy signal. We show that, when the signal is kept private, arbitrarily more revenue could potentially be extracted than if the signal is leaked or revealed. We then show that, if the seller is not allowed to make payments to the buyer, the gap between the two is bounded by a multiplicative factor of 3, provided the value distribution conditioned on each signal is regular. We give examples showing that both conditions are necessary for a constant bound to hold. We connect this scenario to multi-bidder single-item auctions where bidders' values are correlated. Similarly to the setting above, we show that the revenue of a Bayesian incentive compatible, ex post individually rational auction can be arbitrarily larger than that of a dominant strategy incentive compatible auction, whereas the two are no more than a factor of 5 apart if the auctioneer never pays the bidders and if each bidder's value conditioned on the others' is drawn according to a regular distribution. The upper bounds in both settings degrade gracefully when the distribution is a mixture of a small number of regular distributions.
Motivated by the growing prominence of third-party data providers in online marketplaces, this paper studies the impact of the presence of third-party data providers on mechanism design. When no data provider is present, it has been shown that simple mechanisms are "good enough" -- they can achieve a constant fraction of the revenue of optimal mechanisms. The results in this paper demonstrate that this is no longer true in the presence of a third-party data provider who can provide the bidder with a signal that is correlated with the item type. Specifically, even with a single seller, a single bidder, and a single item of uncertain type for sale, the strategies of pricing each item-type separately (the analog of item pricing for multi-item auctions) and bundling all item-types under a single price (the analog of grand bundling) can both simultaneously be a logarithmic factor worse than the optimal revenue. Further, in the presence of a data provider, item-type partitioning mechanisms---a more general class of mechanisms which divide item-types into disjoint groups and offer prices for each group---still cannot achieve within a $\log \log$ factor of the optimal revenue. Thus, our results highlight that the presence of a data provider forces the use of more complicated mechanisms in order to achieve a constant fraction of the optimal revenue.
Simultaneous item auctions are simple procedures for allocating items to bidders with potentially complex preferences over different item sets. In a simultaneous auction, every bidder submits bids on all items simultaneously. The allocation and prices are then resolved for each item separately, based solely on the bids submitted on that item. Such procedures occur in practice (e.g., eBay) but are not truthful. We study the efficiency of Bayesian Nash equilibrium (BNE) outcomes of simultaneous first- and second-price auctions when bidders have complement-free (a.k.a. subadditive) valuations. We show that the expected social welfare of any BNE is at least 1/2 of the optimal social welfare in the case of first-price auctions, and at least 1/4 in the case of second-price auctions. These results improve upon the previously-known logarithmic bounds, which were established by [Hassidim, Kaplan, Mansour and Nisan '11] for first-price auctions and by [Bhawalkar and Roughgarden '11] for second-price auctions.
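The per-item resolution rule is easy to make concrete. A minimal sketch of the second-price variant (tie-breaking by lowest bidder index is our arbitrary choice; it assumes at least two bidders):

```python
def simultaneous_second_price(bids):
    """bids[i][j] is bidder i's bid on item j. Each item is resolved
    independently by a second-price rule: the highest bidder on the item
    wins it and pays the highest competing bid on that item. Bundle
    valuations never enter the resolution, which is the defining feature
    of the format. Assumes at least two bidders."""
    num_bidders, num_items = len(bids), len(bids[0])
    outcome = []
    for j in range(num_items):
        item_bids = [bids[i][j] for i in range(num_bidders)]
        winner = max(range(num_bidders), key=lambda i: (item_bids[i], -i))
        price = max(item_bids[i] for i in range(num_bidders) if i != winner)
        outcome.append((winner, price))
    return outcome

# Bidder 0 wins item 0 at price 3; bidder 1 wins item 1 at price 2.
print(simultaneous_second_price([[5, 2], [3, 4]]))
```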
Bulow and Klemperer's well-known result states that, in a single-item auction where the $n$ bidders' values are independently and identically drawn from a regular distribution, the Vickrey auction with one additional bidder (a duplicate) extracts at least as much revenue as the optimal auction without the duplicate. Hartline and Roughgarden, in their influential 2009 paper, removed the requirement that the distributions be identical, at the cost of allowing the Vickrey auction to recruit $n$ duplicates, one from each distribution, and relaxing its revenue advantage to a $2$-approximation. In this work we restore Bulow and Klemperer's number of duplicates in Hartline and Roughgarden's more general setting, with a worse approximation ratio. We show that recruiting a duplicate from one of the distributions suffices for the Vickrey auction to $10$-approximate the optimal revenue. We also show that in a $k$-item unit-demand auction, recruiting $k$ duplicates suffices for the VCG auction to $O(1)$-approximate the optimal revenue. As another result, we tighten the analysis for Hartline and Roughgarden's Vickrey auction with $n$ duplicates for the case with two bidders in the auction. We show that in this case the Vickrey auction with two duplicates obtains at least $3/4$ of the optimal revenue. This is tight, meeting a lower bound by Hartline and Roughgarden. En route, we obtain a transparent analysis of their $2$-approximation for $n$ bidders, via a natural connection to Ronen's lookahead auction.
Let $R$ be a ring. A monomorphism $f: R \to R$ is a power endomorphism if $f: x \mapsto x^n$ for some $n \ge 1$. In this paper, we completely characterize all power endomorphisms of rings without zero divisors.
We consider the problem of designing a revenue-maximizing auction for a single item, when the values of the bidders are drawn from a correlated distribution. We observe that there exists an algorithm that finds the optimal randomized mechanism that runs in time polynomial in the size of the support. We leverage this result to show that in the oracle model introduced by Ronen and Saberi [FOCS'02], there exists a polynomial time truthful in expectation mechanism that provides a $(\frac 3 2+\epsilon)$-approximation to the revenue achievable by an optimal truthful-in-expectation mechanism, and a polynomial time deterministic truthful mechanism that guarantees $\frac 5 3$ approximation to the revenue achievable by an optimal deterministic truthful mechanism. We show that the $\frac 5 3$-approximation mechanism provides the same approximation ratio also with respect to the optimal truthful-in-expectation mechanism. This shows that the performance gap between truthful-in-expectation and deterministic mechanisms is relatively small. En route, we solve an open question of Mehta and Vazirani [EC'04]. Finally, we extend some of our results to the multi-item case, and show how to compute the optimal truthful-in-expectation mechanisms for bidders with more complex valuations.
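The opening observation, that the optimal randomized (truthful-in-expectation) mechanism is computable in time polynomial in the support size, comes down to solving a linear program over per-profile allocation probabilities and expected payments. The sketch below is our own minimal two-bidder rendering of such an LP with an illustrative toy prior; it is not the paper's code, and the constraint set shown (ex post IR, interim IC) is one standard choice.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def optimal_correlated_auction(vals0, vals1, joint):
    """Revenue-optimal truthful-in-expectation, ex post IR single-item
    auction for two bidders with correlated values, via an LP whose size
    is polynomial in the support size. joint[(v0, v1)] is the joint
    probability of a value profile; missing keys mean probability 0."""
    jp = lambda v: joint.get(v, 0.0)
    profiles = list(itertools.product(vals0, vals1))
    T = len(profiles)
    def X(t, i): return 4 * t + i      # allocation prob of bidder i at profile t
    def P(t, i): return 4 * t + 2 + i  # expected payment of bidder i at profile t

    c = np.zeros(4 * T)                # maximize expected revenue
    for t, v in enumerate(profiles):
        c[P(t, 0)] = c[P(t, 1)] = -jp(v)

    A_ub, b_ub = [], []
    for t, v in enumerate(profiles):
        row = np.zeros(4 * T)          # feasibility: x_0 + x_1 <= 1
        row[X(t, 0)] = row[X(t, 1)] = 1.0
        A_ub.append(row); b_ub.append(1.0)
        for i in (0, 1):               # ex post IR: p_i <= v_i * x_i
            row = np.zeros(4 * T)
            row[P(t, i)] = 1.0; row[X(t, i)] = -v[i]
            A_ub.append(row); b_ub.append(0.0)

    for i, vals in ((0, vals0), (1, vals1)):   # interim IC, joint-weighted
        for v_true, v_lie in itertools.permutations(vals, 2):
            row = np.zeros(4 * T)
            for t, v in enumerate(profiles):
                if v[i] == v_true:     # truthful utility, weight Pr[v]
                    row[X(t, i)] -= v_true * jp(v); row[P(t, i)] += jp(v)
                if v[i] == v_lie:      # deviation outcome, weighted by the
                    w = jp(v[:i] + (v_true,) + v[i + 1:])  # conditional prior
                    row[X(t, i)] += v_true * w; row[P(t, i)] -= w
            A_ub.append(row); b_ub.append(0.0)

    bounds = [(0, 1) if k % 4 < 2 else (None, None) for k in range(4 * T)]
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds, method="highs")
    return -res.fun

# Perfectly correlated toy prior: the optimum extracts the full surplus, 2.0.
print(optimal_correlated_auction([1, 2], [1, 3], {(1, 1): 0.5, (2, 3): 0.5}))
```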
In the design of incentive compatible mechanisms, a common approach is to enforce incentive compatibility as constraints in programs that optimize over feasible mechanisms. Such constraints are often imposed on sparsified representations of the type spaces, such as their discretizations or samples, in order for the program to be manageable. In this work, we explore limitations of this approach, by studying whether all dominant strategy incentive compatible mechanisms on a set $T$ of discrete types can be extended to the convex hull of $T$. Dobzinski, Fu and Kleinberg (2015) answered the question affirmatively for all settings where types are single dimensional. It is not difficult to show that the same holds when the set of feasible outcomes is downward closed. In this work we show that the question has a negative answer for certain non-downward-closed settings with multi-dimensional types. This result should call for caution in the use of the said approach to enforcing incentive compatibility beyond single-dimensional preferences and downward closed feasible outcomes.
Motivated by Generative Adversarial Networks, we study the computation of Nash equilibrium in concave network zero-sum games (NZSGs), a multiplayer generalization of two-player zero-sum games first proposed with linear payoffs. Extending previous results, we show that various game theoretic properties of convex-concave two-player zero-sum games are preserved in this generalization. We then generalize last-iterate convergence results obtained previously in two-player zero-sum games. We analyze convergence rates when players update their strategies using Gradient Ascent and its variant, Optimistic Gradient Ascent, showing last-iterate convergence in three settings: when the payoffs of players are linear, strongly concave and Lipschitz, and strongly concave and smooth. We provide experimental results that support these theoretical findings.
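A two-player bilinear game is the simplest member of the NZSG family and already separates the two dynamics. A minimal sketch of the optimistic update (the step size and the toy game are arbitrary choices of ours):

```python
import numpy as np

def optimistic_gradient(A, eta=0.1, steps=2000):
    """Optimistic gradient play on the bilinear zero-sum game
    min_x max_y x^T A y. Each player steps along 2*g_t - g_{t-1}, the
    current gradient plus a correction from the previous one; plain
    simultaneous gradient play cycles or spirals outward on such games,
    while the optimistic variant converges in the last iterate. A
    two-player toy standing in for the multiplayer network setting."""
    x, y = np.array([1.0]), np.array([1.0])
    gx_prev, gy_prev = A @ y, A.T @ x
    for _ in range(steps):
        gx, gy = A @ y, A.T @ x             # gradients of x^T A y in x and y
        x = x - eta * (2 * gx - gx_prev)    # x minimizes
        y = y + eta * (2 * gy - gy_prev)    # y maximizes
        gx_prev, gy_prev = gx, gy
    return x, y

# On min_x max_y x*y the unique equilibrium is (0, 0); the last iterate
# of the optimistic dynamics approaches it.
print(optimistic_gradient(np.array([[1.0]])))
```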
In non-truthful auctions, agents' utility for a strategy depends on the strategies of the opponents and also the prior distribution over their private types; the set of Bayes Nash equilibria generally has an intricate dependence on the prior. Using the First Price Auction as our main demonstrating example, we show that $\tilde O(n / \epsilon^2)$ samples from the prior with $n$ agents suffice for an algorithm to learn the interim utilities for all monotone bidding strategies. As a consequence, this number of samples suffices for learning all approximate equilibria. We give an almost matching (up to polylog factors) lower bound on the sample complexity for learning utilities. We also consider a setting where agents must pay a search cost to discover their own types. Drawing on a connection between this setting and the first price auction, discovered recently by Kleinberg et al. (2016), we show that $\tilde O(n / \epsilon^2)$ samples suffice for utilities and equilibria to be estimated in a near welfare-optimal descending auction in this setting. En route, we improve the sample complexity bound, recently obtained by Guo et al. (2021), for the Pandora's Box problem, which is a classical model for sequential consumer search.
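For readers unfamiliar with Pandora's Box, the classical full-information solution is Weitzman's index rule, which the sample-complexity question is about. A minimal sketch for discrete value distributions; the bisection tolerance and the encoding of boxes are our implementation choices:

```python
import random

def reservation_value(values, probs, cost):
    """Weitzman index: the sigma solving E[(V - sigma)^+] = cost, found by
    bisection on [0, max(values)]. Assumes the box is worth opening at
    sigma = 0, i.e., E[V^+] >= cost."""
    lo, hi = 0.0, max(values)
    for _ in range(60):
        mid = (lo + hi) / 2
        surplus = sum(p * max(v - mid, 0.0) for v, p in zip(values, probs))
        lo, hi = (mid, hi) if surplus > cost else (lo, mid)
    return (lo + hi) / 2

def pandora(boxes):
    """Weitzman's optimal rule: open boxes in decreasing order of their
    reservation value, stopping once the best value seen exceeds every
    unopened box's index; the payoff is the best value found minus the
    inspection costs paid. boxes is a list of (values, probs, cost)."""
    order = sorted(boxes, key=lambda b: -reservation_value(*b))
    best = spent = 0.0
    for values, probs, cost in order:
        if best >= reservation_value(values, probs, cost):
            break  # the cap already beats every remaining index
        spent += cost
        best = max(best, random.choices(values, probs)[0])
    return best - spent

# Two boxes: a safe one (value 1, cheap to open) and a risky one (0 or 5,
# cost 0.5). The risky box has the higher index and is opened first.
print(pandora([([1.0], [1.0], 0.01), ([0.0, 5.0], [0.5, 0.5], 0.5)]))
```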
Contention resolution schemes (CRSs) are powerful tools for obtaining "ex post feasible" solutions from candidates that are drawn from "ex ante feasible" distributions. Online contention resolution schemes (OCRSs), the online version, have found myriad applications in Bayesian and stochastic problems, such as prophet inequalities and stochastic probing. When the ex ante distribution is unknown, it was unknown whether good CRSs/OCRSs exist with no samples (in which case the scheme is oblivious) or few samples from the distribution. In this work, we give a simple $\frac{1}{e}$-selectable oblivious single-item OCRS by mixing two simple schemes evenly, and show, via a Ramsey theory argument, that it is optimal. On the negative side, we show that no CRS or OCRS with $O(1)$ samples can be $\Omega(1)$-balanced/selectable (i.e., preserve every active candidate with a constant probability) for graphic or transversal matroids.
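The shape of the positive result, an even coin flip between two simple schemes run obliviously on the stream of active/inactive bits, can be sketched generically. The two component schemes below are hypothetical stand-ins; the paper's actual pair, which achieves the $\frac{1}{e}$ guarantee, is not reproduced here.

```python
import random

def mixed_oblivious_ocrs(active_stream, scheme_a, scheme_b):
    """Oblivious single-item OCRS skeleton: the scheme sees only the
    arriving active/inactive indicators, never the ex ante distribution.
    Following the paper's structure, it flips a fair coin up front between
    two simple component schemes and runs the chosen one."""
    scheme = scheme_a if random.random() < 0.5 else scheme_b
    return scheme(active_stream)

def take_first_active(stream):
    # Simple component: accept the first active element.
    for i, active in enumerate(stream):
        if active:
            return i
    return None

def take_random_active(stream):
    # Hypothetical second component: pass on each active element
    # independently with probability 1/2.
    for i, active in enumerate(stream):
        if active and random.random() < 0.5:
            return i
    return None

print(mixed_oblivious_ocrs([False, True, True], take_first_active, take_random_active))
```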
Mobile gaming is a rapidly growing and incredibly profitable sector; having grown seven-fold over the past 10 years, it now grosses over $100 billion annually. This growth was due in large part to a shift in monetization strategies: rather than charging players an upfront cost ("pay-to-play"), games often request optional microtransactions throughout gameplay ("free-to-play"). We focus on a common scenario in which games include wait times---gating either items or game progression---that players can pay to skip. Game designers typically say that they optimize for player happiness rather than revenue; however, prices for skips are typically set at levels that few players are willing to pay, leading to low purchase rates. Under a traditional analysis, it would seem that game designers fail at their stated goal if few players buy what they are selling. We argue that an alternate model can better explain this dynamic: players value tasks more highly as they are perceived to be more difficult. While skips can increase players' utilities by providing instant gratification, pricing skips too cheaply can lower players' utilities by decreasing the perceived amount of work needed to complete a task. We show that high revenue, high player utility, and low purchase rates can all coexist under this model, particularly under a realistic distribution of players having few buyers but a few big-spending "whales." We also investigate how a game designer should optimize prices under our model. An appendix of the paper with proofs, more comprehensive results and visualizations can be found at https://arxiv.org/abs/2312.10205.
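A toy parametrization (entirely ours; the paper's model is more general) shows how high revenue, high utility, and low purchase rates can coexist when perceived difficulty rises with the skip price:

```python
def outcomes(price, pop):
    """Toy parametrization, not the paper's model: a task's perceived
    difficulty grows with the skip price, d(price) = min(price, 10); a
    player with impatience w buys the skip iff w * d(price) > price;
    completing the task is worth d(price) to every player, and a buyer
    additionally enjoys an instant-gratification bonus w * d(price).
    pop is a list of (population weight, impatience w) types."""
    d = min(price, 10.0)
    revenue = utility = bought = 0.0
    for weight, w in pop:
        if w * d > price:  # buys the skip
            revenue += weight * price
            utility += weight * (d + w * d - price)
            bought += weight
        else:
            utility += weight * d
    return revenue, utility, bought

# 99% casual players (w = 0.1), 1% whales (w = 5): at the high price only
# whales buy, yet revenue and total utility are both higher than at the
# low price, while the purchase rate stays at 1%.
pop = [(0.99, 0.1), (0.01, 5.0)]
for price in (0.5, 8.0):
    print(price, outcomes(price, pop))
```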
We study matroid prophet inequalities when distributions are unknown and accessible only through samples. While single-sample prophet inequalities for special matroids are known, no constant-factor competitive algorithm with even a sublinear number of samples was known for general matroids. Raising the stakes further, the single-sample version of the question for general matroids has close (two-way) connections with the long-standing matroid secretary conjecture. In this work, we give a $(\frac14 - \varepsilon)$-competitive matroid prophet inequality with only $O_\varepsilon(\mathrm{poly} \log n)$ samples. Our algorithm consists of two parts: (i) a novel quantile-based reduction from matroid prophet inequalities to online contention resolution schemes (OCRSs) with $O_\varepsilon(\log n)$ samples, and (ii) a $(\frac14 - \varepsilon)$-selectable matroid OCRS with $O_\varepsilon(\mathrm{poly} \log n)$ samples which carefully addresses an adaptivity challenge.
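For background on the single-sample regime referenced in the opening: for a single item (the rank-one matroid), setting the threshold at the maximum of one sample per distribution and taking the first arriving value above it is known to be $1/2$-competitive (Rubinstein, Wang, and Weinberg, 2020). A minimal sketch:

```python
import random

def single_sample_prophet(sample, stream):
    """Background sketch for the rank-one case: threshold at the maximum
    of one sample from each distribution and accept the first arriving
    value that beats it. The general-matroid question with few samples is
    what the paper above addresses."""
    threshold = max(sample)
    for v in stream:
        if v > threshold:
            return v  # accept irrevocably
    return 0.0

random.seed(0)
dists = [lambda: random.random() for _ in range(5)]
sample = [d() for d in dists]  # one sample per distribution
stream = [d() for d in dists]  # the actual arriving values
print(single_sample_prophet(sample, stream))
```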
We study matroid prophet inequalities when distributions are unknown and accessible only through samples. While single-sample prophet inequalities for special matroids are known, no constant-factor competitive algorithm with even a … We study matroid prophet inequalities when distributions are unknown and accessible only through samples. While single-sample prophet inequalities for special matroids are known, no constant-factor competitive algorithm with even a sublinear number of samples was known for general matroids. Adding more to the stake, the single-sample version of the question for general matroids has close (two-way) connections with the long-standing matroid secretary conjecture. In this work, we give a $(\frac14 - \varepsilon)$-competitive matroid prophet inequality with only $O_\varepsilon(\mathrm{poly} \log n)$ samples. Our algorithm consists of two parts: (i) a novel quantile-based reduction from matroid prophet inequalities to online contention resolution schemes (OCRSs) with $O_\varepsilon(\log n)$ samples, and (ii) a $(\frac14 - \varepsilon)$-selectable matroid OCRS with $O_\varepsilon(\mathrm{poly} \log n)$ samples which carefully addresses an adaptivity challenge.
Mobile gaming is a rapidly growing and incredibly profitable sector; having grown seven-fold over the past 10 years, it now grosses over $100 billion annually. This growth was due in … Mobile gaming is a rapidly growing and incredibly profitable sector; having grown seven-fold over the past 10 years, it now grosses over $100 billion annually. This growth was due in large part to a shift in monetization strategies: rather than charging players an upfront cost ("pay-to-play"), games often request optional microtransactions throughout gameplay ("free-to-play"). We focus on a common scenario in which games include wait times---gating either items or game progression---that players can pay to skip. Game designers typically say that they optimize for player happiness rather than revenue; however, prices for skips are typically set at levels that few players are willing to pay, leading to low purchase rates. Under a traditional analysis, it would seem that game designers fail at their stated goal if few players buy what they are selling. We argue that an alternate model can better explain this dynamic: players value tasks more highly as they are perceived to be more difficult. While skips can increase players' utilities by providing instant gratification, pricing skips too cheaply can lower players' utilities by decreasing the perceived amount of work needed to complete a task. We show that high revenue, high player utility, and low purchase rates can all coexist under this model, particularly under a realistic distribution of players having few buyers but a few big-spending "whales." We also investigate how a game designer should optimize prices under our model. An appendix of the paper with proofs, more comprehensive results and visualizations can be found at https://arxiv.org/abs/2312.10205.
Mobile gaming is a rapidly growing and incredibly profitable sector; having grown seven-fold over the past 10 years, it now grosses over $100 billion annually. This growth was due in … Mobile gaming is a rapidly growing and incredibly profitable sector; having grown seven-fold over the past 10 years, it now grosses over $100 billion annually. This growth was due in large part to a shift in monetization strategies: rather than charging players an upfront cost ("pay-to-play"), games often request optional microtransactions throughout gameplay ("free-to-play"). We focus on a common scenario in which games include wait times -- gating either items or game progression -- that players can pay to skip. Game designers typically say that they optimize for player happiness rather than revenue; however, prices for skips are typically set at levels that few players are willing to pay, leading to low purchase rates. Under a traditional analysis, it would seem that game designers fail at their stated goal if few players buy what they are selling. We argue that an alternate model can better explain this dynamic: players value tasks more highly as they are perceived to be more difficult. While skips can increase players' utilities by providing instant gratification, pricing skips too cheaply can lower players' utilities by decreasing the perceived amount of work needed to complete a task. We show that high revenue, high player utility, and low purchase rates can all coexist under this model, particularly under a realistic distribution of players having few buyers but a few big-spending "whales." We also investigate how a game designer should optimize prices under our model.
Platforms: How Much and What Should They Control? Platforms exert control in many ways: from determining which buyers are shown to which sellers to directly controlling allocation of aggregate supply across different consumer markets. How should platforms exercise control to increase social welfare? In “Transparency and Control in Platforms for Networked Markets,” authors J. Pang, W. Lin, H. Fu, J. Kleeman, E. Bitar, and A. Wierman examine the trade-offs that emerge between efficiency loss, transparency, and control across different platform designs. They show that open access platforms incentivize increased production toward perfectly competitive levels and limit efficiency loss, whereas controlled allocation designs can lead to supply-side withholding, resulting in unbounded efficiency loss. Balancing transparency and control, discriminatory access designs strictly improve upon the worst-case efficiency loss of both open access and controlled allocation platforms. Besides providing insights into the design and operation of two-sided platforms, this paper contributes to the growing theory of Cournot competition in networked markets.
Oblivious Online Contention Resolution Schemes. Hu Fu, Pinyan Lu, Zhihao Gavin Tang, Abner Turkieltaub, Hongxun Wu, Jinzhao Wu, and Qianfan Zhang. In Proceedings of the Symposium on Simplicity in Algorithms (SOSA), 2022, pp. 268–278. https://doi.org/10.1137/1.9781611977066.20
Contention resolution schemes (CRSs) are powerful tools for obtaining "ex post feasible" solutions from candidates that are drawn from "ex ante feasible" distributions. Online contention resolution schemes (OCRSs), the online version, have found myriad applications in Bayesian and stochastic problems, such as prophet inequalities and stochastic probing. When the ex ante distribution is unknown, it was open whether good CRSs/OCRSs exist with no sample (in which case the scheme is oblivious) or few samples from the distribution. In this work, we give a simple $\frac{1}{e}$-selectable oblivious single item OCRS by mixing two simple schemes evenly, and show, via a Ramsey theory argument, that it is optimal. On the negative side, we show that no CRS or OCRS with $O(1)$ samples can be $\Omega(1)$-balanced/selectable (i.e., preserve every active candidate with a constant probability) for graphic or transversal matroids.
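Selectability is easy to estimate empirically. The Python sketch below simulates an even mixture of two simple single-item schemes and measures the minimum of Pr[i selected | i active] over elements. The two component schemes here (accept the first active element; flip a fair coin on each active element) are placeholders of our own choosing for illustration; only the "mix two simple schemes evenly" structure comes from the abstract.

import random

def mixed_oblivious_ocrs(active, rng):
    # Oblivious: uses no knowledge of the marginals. With probability 1/2
    # run scheme A (greedily accept the first active element); otherwise
    # run scheme B (accept each active element on a fair coin flip).
    # NOTE: placeholder component schemes, not the paper's construction.
    if rng.random() < 0.5:
        for i, a in enumerate(active):
            if a:
                return i
    else:
        for i, a in enumerate(active):
            if a and rng.random() < 0.5:
                return i
    return None

def estimate_selectability(x, trials=200_000, seed=0):
    # Empirical min_i Pr[i selected | i active] for given marginals x.
    rng = random.Random(seed)
    sel, act = [0] * len(x), [0] * len(x)
    for _ in range(trials):
        active = [rng.random() < xi for xi in x]
        chosen = mixed_oblivious_ocrs(active, rng)
        for i, a in enumerate(active):
            if a:
                act[i] += 1
                sel[i] += chosen == i
    return min(s / a for s, a in zip(sel, act) if a > 0)

print(estimate_selectability([0.25, 0.25, 0.25, 0.25]))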
In non-truthful auctions, agents' utility for a strategy depends on the strategies of the opponents and also the prior distribution over their private types; the set of Bayes Nash equilibria generally has an intricate dependence on the prior. Using the First Price Auction as our main demonstrating example, we show that $\tilde O(n / \epsilon^2)$ samples from the prior with $n$ agents suffice for an algorithm to learn the interim utilities for all monotone bidding strategies. As a consequence, this number of samples suffices for learning all approximate equilibria. We give an almost matching (up to polylog factors) lower bound on the sample complexity for learning utilities. We also consider a setting where agents must pay a search cost to discover their own types. Drawing on a connection between this setting and the first price auction, discovered recently by Kleinberg et al. (2016), we show that $\tilde O(n / \epsilon^2)$ samples suffice for utilities and equilibria to be estimated in a near welfare-optimal descending auction in this setting. En route, we improve the sample complexity bound, recently obtained by Guo et al. (2020), for the Pandora's Box problem, which is a classical model for sequential consumer search.
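The quantity being learned has a simple Monte Carlo form. In the first price auction, a bidder's interim utility at value $v$ and bid $b$ is $\mathbb{E}[(v - b)\,\mathbf{1}\{b \text{ wins}\}]$ over the opponents' bids; the sketch below estimates it from samples. The opponents' strategy and distribution here are illustrative stand-ins, and the paper's contribution is controlling this estimation error uniformly over all monotone strategies with $\tilde O(n/\epsilon^2)$ samples.

import random

def interim_utility_fpa(v, b, opponent_bids, m=100_000, seed=0):
    # Monte Carlo estimate of E[(v - b) * 1{b beats all opponents}].
    rng = random.Random(seed)
    wins = sum(b > max(opponent_bids(rng)) for _ in range(m))
    return (v - b) * wins / m

# Illustrative: two i.i.d. U[0,1] opponents who bid half their values.
opponents = lambda rng: [rng.random() / 2, rng.random() / 2]
print(interim_utility_fpa(v=0.8, b=0.3, opponent_bids=opponents))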
In the design of incentive compatible mechanisms, a common approach is to enforce incentive compatibility as constraints in programs that optimize over feasible mechanisms. Such constraints are often imposed on sparsified representations of the type spaces, such as their discretizations or samples, in order for the program to be manageable. In this work, we explore limitations of this approach, by studying whether all dominant strategy incentive compatible mechanisms on a set $T$ of discrete types can be extended to the convex hull of $T$. Dobzinski, Fu and Kleinberg (2015) answered the question affirmatively for all settings where types are single dimensional. It is not difficult to show that the same holds when the set of feasible outcomes is downward closed. In this work we show that the question has a negative answer for certain non-downward-closed settings with multi-dimensional types. This result should call for caution in the use of this approach to enforcing incentive compatibility beyond single-dimensional preferences and downward closed feasible outcomes.
Motivated by Generative Adversarial Networks, we study the computation of Nash equilibrium in concave network zero-sum games (NZSGs), a multiplayer generalization of two-player zero-sum games first proposed with linear payoffs. Extending previous results, we show that various game theoretic properties of convex-concave two-player zero-sum games are preserved in this generalization. We then generalize last iterate convergence results obtained previously in two-player zero-sum games. We analyze convergence rates when players update their strategies using Gradient Ascent, and its variant, Optimistic Gradient Ascent, showing last iterate convergence in three settings -- when the payoffs of players are linear, strongly concave and Lipschitz, and strongly concave and smooth. We provide experimental results that support these theoretical findings.
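For the linear-payoff case, the two update rules being compared are easy to write down. Below is a minimal sketch of Optimistic Gradient Ascent on a two-player bilinear zero-sum game $f(x, y) = x^{\top} A y$ (the simplest instance); the extrapolation term $2\nabla_t - \nabla_{t-1}$ is what distinguishes it from plain Gradient Ascent, whose iterates cycle rather than converge on such games. The game matrix and step size are arbitrary illustrative choices.

import numpy as np

A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # illustrative bilinear game
eta = 0.05
x, y = np.array([0.8, 0.2]), np.array([0.3, 0.7])
gx_prev, gy_prev = A @ y, -A.T @ x          # players ascend f and -f respectively
for _ in range(2000):
    gx, gy = A @ y, -A.T @ x
    x = x + eta * (2 * gx - gx_prev)        # optimistic: extrapolate the gradient
    y = y + eta * (2 * gy - gy_prev)
    gx_prev, gy_prev = gx, gy
print(x, y)  # last iterate approaches the equilibrium set (x1 = x2, y1 = y2)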
Bulow and Klemperer's well-known result states that, in a single-item auction where the $n$ bidders' values are independently and identically drawn from a regular distribution, the Vickrey auction with one additional bidder (a duplicate) extracts at least as much revenue as the optimal auction without the duplicate. Hartline and Roughgarden, in their influential 2009 paper, removed the requirement that the distributions be identical, at the cost of allowing the Vickrey auction to recruit $n$ duplicates, one from each distribution, and relaxing its revenue advantage to a $2$-approximation. In this work we restore Bulow and Klemperer's number of duplicates in Hartline and Roughgarden's more general setting with a worse approximation ratio. We show that recruiting a duplicate from one of the distributions suffices for the Vickrey auction to $10$-approximate the optimal revenue. We also show that in a $k$-items unit demand auction, recruiting $k$ duplicates suffices for the VCG auction to $O(1)$-approximate the optimal revenue. As another result, we tighten the analysis for Hartline and Roughgarden's Vickrey auction with $n$ duplicates for the case with two bidders in the auction. We show that in this case the Vickrey auction with two duplicates obtains at least $3/4$ of the optimal revenue. This is tight, matching a lower bound of Hartline and Roughgarden. En route, we obtain a transparent analysis of their $2$-approximation for $n$ bidders, via a natural connection to Ronen's lookahead auction.
Peer grading systems make large courses more scalable, provide students with faster and more detailed feedback, and help students to learn by thinking critically about the work of others. A key obstacle to the broader adoption of peer grading systems is motivating students to provide accurate grades. The literature has explored many different approaches to incentivizing accurate grading (which we survey in detail), but the strongest incentive guarantees have been offered by mechanisms that compare peer grades to trusted TA grades with a fixed probability. In this work, we show that less TA work is required when these probabilities are allowed to depend on the grades that students report. We prove this result in a model with two possible grades, arbitrary numbers of agents, no requirement that students grade multiple assignments, arbitrary but homogeneous noisy observation of the ground truth which students can pay a fixed cost to observe, and the possibility of misreporting grades before or after observing this signal. We give necessary and sufficient conditions for our new mechanism's feasibility, prove its optimality under these assumptions, and characterize its improvement over the previous state of the art both analytically and empirically. Finally, we relax our homogeneity assumption, allowing each student and TA to observe the ground truth according to a different noise model.
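A minimal simulation of report-sensitive spot checking, with two grades and illustrative numbers (the check probabilities, rewards, and noise rate below are not the paper's optimized values, and the TA is treated as noiseless for simplicity):

import random

def expected_payment(strategy, p_check, p_high, noise,
                     reward=1.0, penalty=0.0, trials=100_000, seed=0):
    # Monte Carlo expected payment under report-sensitive spot checking:
    # a report of grade g is compared against a TA grade with probability
    # p_check[g]; matches (and unchecked reports) earn `reward`.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        truth = 1 if rng.random() < p_high else 0
        signal = truth if rng.random() > noise else 1 - truth
        report = strategy(signal)
        if rng.random() < p_check[report]:
            total += reward if report == truth else penalty
        else:
            total += reward
    return total / trials

p = {0: 0.3, 1: 0.1}  # illustrative: check reported low grades more often
print(expected_payment(lambda s: s, p, p_high=0.7, noise=0.1))  # honest grader
print(expected_payment(lambda s: 1, p, p_high=0.7, noise=0.1))  # always reports 1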
Motivated by the growing prominence of third-party data providers in online marketplaces, this paper studies the impact of the presence of third-party data providers on mechanism design. When no data provider is present, it has been shown that simple mechanisms are "good enough" -- they can achieve a constant fraction of the revenue of optimal mechanisms. The results in this paper demonstrate that this is no longer true in the presence of a third-party data provider who can provide the bidder with a signal that is correlated with the item type. Specifically, even with a single seller, a single bidder, and a single item of uncertain type for sale, the strategies of pricing each item-type separately (the analog of item pricing for multi-item auctions) and bundling all item-types under a single price (the analog of grand bundling) can both simultaneously be a logarithmic factor worse than the optimal revenue. Further, in the presence of a data provider, item-type partitioning mechanisms---a more general class of mechanisms which divide item-types into disjoint groups and offer prices for each group---still cannot achieve within a $\log \log$ factor of the optimal revenue. Thus, our results highlight that the presence of a data-provider forces the use of more complicated mechanisms in order to achieve a constant fraction of the optimal revenue.
We consider a revenue optimizing seller selling a single item to a buyer, on whose private value the seller has a noisy signal. We show that, when the signal is kept private, arbitrarily more revenue could potentially be extracted than if the signal is leaked or revealed. We then show that, if the seller is not allowed to make payments to the buyer, the gap between the two is bounded by a multiplicative factor of 3, if the value distribution conditioning on each signal is regular. We give examples showing that both conditions are necessary for a constant bound to hold. We connect this scenario to multi-bidder single-item auctions where bidders' values are correlated. Similarly to the setting above, we show that the revenue of a Bayesian incentive compatible, ex post individually rational auction can be arbitrarily larger than that of a dominant strategy incentive compatible auction, whereas the two are no more than a factor of 5 apart if the auctioneer never pays the bidders and if each bidder's value conditioning on the others' is drawn according to a regular distribution. The upper bounds in both settings degrade gracefully when the distribution is a mixture of a small number of regular distributions.
Designing revenue optimal auctions for selling an item to $n$ symmetric bidders is a fundamental problem in mechanism design. Myerson (1981) shows that the second price auction with an appropriate reserve price is optimal when bidders' values are drawn i.i.d. from a known regular distribution. A cornerstone in the prior-independent revenue maximization literature is a result by Bulow and Klemperer (1996) showing that the second price auction without a reserve achieves $(n-1)/n$ of the optimal revenue in the worst case. We construct a randomized mechanism that strictly outperforms the second price auction in this setting. Our mechanism inflates the second highest bid with a probability that varies with $n$. For two bidders we improve the performance guarantee from $0.5$ to $0.512$ of the optimal revenue. We also resolve a question in the design of revenue optimal mechanisms that have access to a single sample from an unknown distribution. We show that a randomized mechanism strictly outperforms all deterministic mechanisms in terms of worst case guarantee.
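Structurally, the mechanism is a small randomized tweak to second price, as in the sketch below: with some probability the winner faces an inflated copy of the second-highest bid as a take-it-or-leave-it price (which preserves truthfulness, since the price never depends on the winner's own bid). The probability q and inflation factor gamma here are placeholders; the paper tunes the inflation probability as a function of $n$.

import random

def inflated_second_price(bids, q, gamma, rng):
    # With probability q, charge gamma * (second-highest bid) as a
    # take-it-or-leave-it price to the highest bidder; otherwise run
    # plain second price. q and gamma are illustrative placeholders.
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    winner, second = order[0], bids[order[1]]
    price = gamma * second if rng.random() < q else second
    if bids[winner] >= price:
        return winner, price    # sale at the (possibly inflated) price
    return None, 0.0            # inflated price refused: item unsold

rng = random.Random(0)
print(inflated_second_price([0.9, 0.4, 0.7], q=0.2, gamma=1.5, rng=rng))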
We study combinatorial auctions where each item is sold separately but simultaneously via a second price auction. We ask whether it is possible to efficiently compute in this game a pure Nash equilibrium with social welfare close to the optimal one. We show that when the valuations of the bidders are submodular, in many interesting settings (e.g., constant number of bidders, budget additive bidders) computing an equilibrium with good welfare is essentially as easy as computing, completely ignoring incentives issues, an allocation with good welfare. On the other hand, for subadditive valuations, we show that computing an equilibrium requires exponential communication. Finally, for XOS (a.k.a. fractionally subadditive) valuations, we show that if there exists an efficient algorithm that finds an equilibrium, it must use techniques that are very different from the ones currently known.
We study revenue maximization in settings where agents' values are interdependent: each agent receives a signal drawn from a correlated distribution and agents' values are functions of all of the signals. We introduce a variant of the generalized VCG auction with reserve prices and random admission, and show that this auction gives a constant approximation to the optimal expected revenue in matroid environments. Our results do not require any assumptions on the signal distributions, however, they require the value functions to satisfy a standard single-crossing property and a concavity-type condition.
Crémer and McLean [1985] showed that, when buyers' valuations are drawn from a correlated distribution, an auction with full knowledge on the distribution can extract the full social surplus. We study whether this phenomenon persists when the auctioneer has only incomplete knowledge of the distribution, represented by a finite family of candidate distributions, and has sample access to the real distribution. We show that the naive approach which uses samples to distinguish candidate distributions may fail, whereas an extended version of the Crémer-McLean auction simultaneously extracts full social surplus under each candidate distribution. With an algebraic argument, we give a tight bound on the number of samples needed by this auction, which is the difference between the number of candidate distributions and the dimension of the linear space they span.
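The sample bound is a pure linear-algebra quantity and is easy to compute. With made-up candidate distributions over three value profiles (one candidate is a mixture of the other two, which lowers the dimension of the span):

import numpy as np

candidates = np.array([
    [0.50, 0.30, 0.20],
    [0.20, 0.50, 0.30],
    [0.35, 0.40, 0.25],   # the average of the first two rows
])
k = candidates.shape[0]                    # number of candidate distributions
dim = np.linalg.matrix_rank(candidates)    # dimension of their linear span
print(k - dim)                             # samples needed: 3 - 2 = 1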
The intuition that profit is optimized by maximizing marginal revenue is a guiding principle in microeconomics. In the classical auction theory for agents with quasi-linear utility and single-dimensional preferences, Bulow and Roberts (1989) show that the optimal auction of Myerson (1981) is in fact optimizing marginal revenue. In particular Myerson's virtual values are exactly the derivative of an appropriate revenue curve. This paper considers mechanism design in environments where the agents have multi-dimensional and non-linear preferences. Understanding good auctions for these environments is considered to be the main challenge in Bayesian optimal mechanism design. In these environments maximizing marginal revenue may not be optimal, and furthermore, there is sometimes no direct way to implement the marginal revenue maximization mechanism. Our contributions are threefold: we characterize the settings for which marginal revenue maximization is optimal (by identifying an important condition that we call revenue linearity), we give simple procedures for implementing marginal revenue maximization in general, and we show that marginal revenue maximization is approximately optimal. Our approximation factor smoothly degrades in a term that quantifies how far the environment is from an ideal one (i.e., where marginal revenue maximization is optimal). Because the marginal revenue mechanism is optimal for well-studied single-dimensional agents, our generalization immediately extends many approximation results for single-dimensional agents to more general preferences. Finally, one of the biggest open questions in Bayesian algorithmic mechanism design is in developing methodologies that are not brute-force in size of the agent type space (usually exponential in the dimension for multi-dimensional agents). Our methods identify a subproblem that, e.g., for unit-demand agents with values drawn from product distributions, enables approximation mechanisms that are polynomial in the dimension.
We study simple and approximately optimal auctions for agents with a particular form of risk-averse preferences. We show that, for symmetric agents, the optimal revenue (given a prior distribution over the agent preferences) can be approximated by the first-price auction (which is prior independent), and, for asymmetric agents, the optimal revenue can be approximated by an auction with simple form. These results are based on two technical methods. The first is for upper-bounding the revenue from a risk-averse agent. The second gives a payment identity for mechanisms with pay-your-bid semantics.
We study the design of Bayesian incentive compatible mechanisms in single parameter domains, for the objective of optimizing social efficiency as measured by social cost. In the problems we consider, a group of participants compete to receive service from a mechanism that can provide such services at a cost. The mechanism wishes to choose which agents to serve in order to maximize social efficiency, but is not willing to suffer an expected loss: the agents' payments should cover the cost of service in expectation.
Understanding the query complexity for testing linear-invariant properties has been a central open problem in the study of algebraic property testing. Triangle-freeness in Boolean functions is a simple property whose testing complexity is unknown. Three Boolean functions $f_1$, $f_2$ and $f_3: \mathbb{F}_2^k \to \{0, 1\}$ are said to be triangle free if there is no $x, y \in \mathbb{F}_2^k$ such that $f_1(x) = f_2(y) = f_3(x + y) = 1$. This property is known to be strongly testable (Green 2005), but the number of queries needed is upper-bounded only by a tower of twos whose height is polynomial in $1 / \epsilon$, where $\epsilon$ is the distance between the tested function triple and triangle-freeness, i.e., the minimum fraction of function values that need to be modified to make the triple triangle free. A lower bound of $(1 / \epsilon)^{2.423}$ for any one-sided tester was given by Bhattacharyya and Xie (2010). In this work we improve this bound to $(1 / \epsilon)^{6.619}$. Interestingly, we prove this by way of a combinatorial construction called "uniquely solvable puzzles" that was at the heart of Coppersmith and Winograd's renowned matrix multiplication algorithm.
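The definition is directly checkable by brute force for tiny $k$, which is handy for experimenting with constructions. A sketch (the three functions below are arbitrary toy examples, and over $\mathbb{F}_2$ the sum $x + y$ is bitwise XOR):

from itertools import product

def is_triangle_free(f1, f2, f3, k):
    # Check: no x, y in F_2^k with f1(x) = f2(y) = f3(x + y) = 1.
    # Exponential in k; intended only for tiny examples.
    for x, y in product(range(1 << k), repeat=2):
        if f1(x) and f2(y) and f3(x ^ y):
            return False
    return True

k = 2
f1 = lambda x: int(x == 0b01)
f2 = lambda y: int(y == 0b10)
f3 = lambda z: int(z == 0b00)   # 01 XOR 10 = 11, so no triangle exists
print(is_triangle_free(f1, f2, f3, k))  # True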
Simultaneous item auctions are simple procedures for allocating items to bidders with potentially complex preferences over different item sets. In a simultaneous auction, every bidder submits bids on all items simultaneously. The allocation and prices are then resolved for each item separately, based solely on the bids submitted on that item. Such procedures occur in practice (e.g. eBay) but are not truthful. We study the efficiency of Bayesian Nash equilibrium (BNE) outcomes of simultaneous first- and second-price auctions when bidders have complement-free (a.k.a. subadditive) valuations. We show that the expected social welfare of any BNE is at least 1/2 of the optimal social welfare in the case of first-price auctions, and at least 1/4 in the case of second-price auctions. These results improve upon the previously-known logarithmic bounds, which were established by [Hassidim, Kaplan, Mansour and Nisan '11] for first-price auctions and by [Bhawalkar and Roughgarden '11] for second-price auctions.
We consider the problem of designing a revenue-maximizing auction for a single item, when the values of the bidders are drawn from a correlated distribution. We observe that there exists an algorithm that finds the optimal randomized mechanism and that runs in time polynomial in the size of the support. We leverage this result to show that in the oracle model introduced by Ronen and Saberi [FOCS'02], there exists a polynomial-time truthful-in-expectation mechanism that provides a $(\frac 3 2+\epsilon)$-approximation to the revenue achievable by an optimal truthful-in-expectation mechanism, and a polynomial-time deterministic truthful mechanism that guarantees a $\frac 5 3$-approximation to the revenue achievable by an optimal deterministic truthful mechanism. We show that the $\frac 5 3$-approximation mechanism provides the same approximation ratio also with respect to the optimal truthful-in-expectation mechanism. This shows that the performance gap between truthful-in-expectation and deterministic mechanisms is relatively small. En route, we solve an open question of Mehta and Vazirani [EC'04]. Finally, we extend some of our results to the multi-item case, and show how to compute the optimal truthful-in-expectation mechanisms for bidders with more complex valuations.
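For intuition, the observation about the optimal randomized mechanism comes down to a linear program of size polynomial in the support: with one variable pair $(x_i(v), p_i(v))$ per bidder and value profile (the probability of serving bidder $i$ and her expected payment), truthfulness in expectation over the mechanism's coins and individual rationality are linear constraints. One standard formulation, written out as a sketch, reads:
\[
\max_{x, p} \; \sum_{v \in \mathrm{supp}} \Pr[v] \sum_i p_i(v)
\quad \text{s.t.} \quad
v_i\, x_i(v_i, v_{-i}) - p_i(v_i, v_{-i}) \;\ge\; v_i\, x_i(v_i', v_{-i}) - p_i(v_i', v_{-i}) \quad \forall i,\, v_{-i},\, v_i,\, v_i',
\]
\[
v_i\, x_i(v) - p_i(v) \;\ge\; 0, \qquad \sum_i x_i(v) \;\le\; 1, \qquad x_i(v) \;\ge\; 0 .
\]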
This short note exhibits a truthful-in-expectation $O(\frac {\log m} {\log \log m})$-approximation mechanism for combinatorial auctions with subadditive bidders that uses polynomial communication.
If a two-player social welfare maximization problem does not admit a PTAS, we prove that any maximal-in-range truthful mechanism that runs in polynomial time cannot achieve an approximation factor better than 1/2. Moreover, for the k-player version of the same problem, the hardness of approximation improves to 1/k under the same two-player hardness assumption. (We note that 1/k is achievable by a trivial deterministic maximal-in-range mechanism.) This hardness result encompasses not only deterministic maximal-in-range mechanisms, but also all universally truthful randomized maximal-in-range mechanisms, as well as a class of strictly more powerful truthful-in-expectation randomized mechanisms recently introduced by Dobzinski and Dughmi. Our result applies to any class of valuation functions that satisfies some minimal closure properties. These properties are satisfied by the valuation functions in all well-studied APX-hard social welfare maximization problems, such as coverage, submodular, and subadditive valuations. We also prove a stronger result for universally truthful maximal-in-range mechanisms. Namely, even for the class of budgeted additive valuations, which admits an FPTAS, no such mechanism can achieve an approximation factor better than 1/k in polynomial time.
Let $R$ be a ring. A monomorphism $f: R \to R$ is a power endomorphism if $f: x \mapsto x^n$ for some $n > 1$. In this paper, we completely characterize all power endomorphisms of rings without zero divisors.
We study the makespan minimization problem with unrelated selfish machines under the assumption that job sizes are stochastic. We design simple truthful mechanisms that under different distributional assumptions provide constant and sublogarithmic approximations to expected makespan. Our mechanisms are prior-independent in that they do not rely on knowledge of the job size distributions. Prior-independent approximations were previously known only for the revenue maximization objective [13, 11, 26]. In contrast to our results, in prior-free settings no truthful anonymous deterministic mechanism for the makespan objective can provide a sublinear approximation [3].
The principal problem in algorithmic mechanism design is in merging the incentive constraints imposed by selfish behavior with the algorithmic constraints imposed by computational intractability. This field is motivated by the observation that the preeminent approach for designing incentive compatible mechanisms, namely that of Vickrey, Clarke, and Groves; and the central approach for circumventing computational obstacles, that of approximation algorithms, are fundamentally incompatible: natural applications of the VCG approach to an approximation algorithm fails to yield an incentive compatible mechanism. We consider relaxing the desideratum of (ex post) incentive compatibility (IC) to Bayesian incentive compatibility (BIC), where truthtelling is a Bayes-Nash equilibrium (the standard notion of incentive compatibility in economics). For welfare maximization in single-parameter agent settings, we give a general black-box reduction that turns any approximation algorithm into a Bayesian incentive compatible mechanism with essentially the same approximation factor.
We study the revenue maximization problem of a seller with n heterogeneous items for sale to a single buyer whose valuation function for sets of items is unknown and drawn from some distribution D. We show that if D is a distribution over subadditive valuations with independent items, then the better of pricing each item separately or pricing only the grand bundle achieves a constant-factor approximation to the revenue of the optimal mechanism. This includes buyers who are k-demand, additive up to a matroid constraint, or additive up to constraints of any downwards-closed set system (and whose values for the individual items are sampled independently), as well as buyers who are fractionally subadditive with item multipliers drawn independently. Our proof makes use of the core-tail decomposition framework developed in prior work showing similar results for the significantly simpler class of additive buyers [Li and Yao 2013; Babaioff et al. 2014].
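The benchmark max(SRev, BRev) is straightforward to estimate by simulation. The sketch below does so for an additive buyer, the simplest special case (the theorem covers general subadditive valuations, for which the buyer's demand computation is more involved); the value distribution and prices are illustrative.

import random

def srev(sampler, prices, trials=50_000, seed=0):
    # Expected revenue from pricing items separately: an additive buyer
    # purchases exactly the items priced below their values.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        vals = sampler(rng)
        total += sum(p for v, p in zip(vals, prices) if v >= p)
    return total / trials

def brev(sampler, bundle_price, trials=50_000, seed=0):
    # Expected revenue from selling only the grand bundle.
    rng = random.Random(seed)
    sales = sum(sum(sampler(rng)) >= bundle_price for _ in range(trials))
    return bundle_price * sales / trials

sampler = lambda rng: [rng.random() for _ in range(10)]  # 10 i.i.d. U[0,1] items
print(max(srev(sampler, [0.5] * 10), brev(sampler, 4.0)))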
We pose and study a fundamental algorithmic problem which we term mixture selection, arising as a building block in a number of game-theoretic applications: Given a function $g$ from the $n$-dimensional hypercube to the bounded interval $[-1, 1]$, and an $n \times m$ matrix $A$ with bounded entries, maximize $g(Ax)$ over $x$ in the $m$-dimensional simplex. This problem arises naturally when one seeks to design a lottery over items for sale in an auction, or craft the posterior beliefs for agents in a Bayesian game through the provision of information (a.k.a. signaling). We present an approximation algorithm for this problem when $g$ simultaneously satisfies two "smoothness" properties: Lipschitz continuity with respect to the $L_\infty$ norm, and noise stability. The latter notion, which we define and cater to our setting, controls the degree to which low-probability - and possibly correlated - errors in the inputs of $g$ can impact its output. The approximation guarantee of our algorithm degrades gracefully as a function of the Lipschitz continuity and noise stability of $g$. In particular, when $g$ is both $O(1)$-Lipschitz continuous and $O(1)$-stable, we obtain an (additive) polynomial-time approximation scheme (PTAS) for mixture selection. We also show that neither assumption suffices by itself for an additive PTAS, and both assumptions together do not suffice for an additive fully polynomial-time approximation scheme (FPTAS). We apply our algorithm for mixture selection to a number of different game-theoretic applications, focusing on problems from mechanism design and optimal signaling. In particular, we make progress on a number of open problems suggested in prior work by easily reducing them to mixture selection: we resolve an important special case of the small-menu lottery design problem posed by Dughmi, Han, and Nisan [10]; we resolve the problem of revenue-maximizing signaling in Bayesian second-price auctions posed by Emek et al. [12] and Miltersen and Sheffet [5]; we design a quasipolynomial-time approximation scheme for the optimal signaling problem in normal form games suggested by Dughmi [9]; and we design an approximation algorithm for the optimal signaling problem in the voting model of Alonso and Camara [3].
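Sparse enumeration gives the flavor of why smoothness helps: if $g$ is Lipschitz and noise-stable, replacing $x$ by a nearby $k$-uniform mixture (an average of $k$ unit vectors) perturbs $g(Ax)$ only slightly, so searching over the polynomially many $k$-uniform points of the simplex suffices for an additive approximation. The brute-force sketch below illustrates this search; read it as an illustration of the idea rather than the paper's exact algorithm, and the toy instance is made up.

from itertools import combinations_with_replacement
import numpy as np

def mixture_selection_bruteforce(g, A, k):
    # Enumerate all k-uniform mixtures: x averaging k unit vectors,
    # chosen with repetition. There are C(m + k - 1, k) of them.
    n, m = A.shape
    best_val, best_x = -np.inf, None
    for multiset in combinations_with_replacement(range(m), k):
        x = np.zeros(m)
        for j in multiset:
            x[j] += 1.0 / k
        val = g(A @ x)
        if val > best_val:
            best_val, best_x = val, x
    return best_val, best_x

# Toy instance: g is 1-Lipschitz w.r.t. the max-norm.
A = np.array([[1.0, -1.0, 0.5], [0.0, 1.0, -0.5]])
g = lambda y: float(1.0 - np.max(np.abs(y - 0.25)))
print(mixture_selection_bruteforce(g, A, k=4))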
We provide simple and approximately revenue-optimal mechanisms in the multi-item multi-bidder settings. We unify and improve all previous results, as well as generalize the results to broader cases. In particular, we prove that the better of the following two simple, deterministic and Dominant Strategy Incentive Compatible mechanisms, a sequential posted price mechanism or an anonymous sequential posted price mechanism with entry fee, achieves a constant fraction of the optimal revenue among all randomized, Bayesian Incentive Compatible mechanisms, when buyers' valuations are XOS over independent items. If the buyers' valuations are subadditive over independent items, the approximation factor degrades to $O(\log m)$, where $m$ is the number of items. We obtain our results by first extending the Cai-Devanur-Weinberg duality framework to derive an effective benchmark of the optimal revenue for subadditive bidders, and then analyzing this upper bound with new techniques.
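The second of the two mechanisms is essentially a two-part tariff run sequentially. A sketch with additive buyers (the theorem covers XOS and subadditive buyers, whose demand computation is richer; the values, prices, and fee below are illustrative):

def anonymous_spp_with_entry_fee(buyers, prices, entry_fee):
    # Buyers arrive in order; each decides whether to pay the entry fee,
    # and if so buys the remaining items priced below her values.
    # buyers: list of per-item value lists; prices: anonymous item prices.
    available = set(range(len(prices)))
    revenue = 0.0
    for values in buyers:
        demanded = [j for j in available if values[j] >= prices[j]]
        surplus = sum(values[j] - prices[j] for j in demanded)
        if surplus >= entry_fee:          # buyer enters only if worthwhile
            revenue += entry_fee + sum(prices[j] for j in demanded)
            available -= set(demanded)
    return revenue

print(anonymous_spp_with_entry_fee(
    buyers=[[0.9, 0.2, 0.6], [0.4, 0.8, 0.7]],
    prices=[0.5, 0.5, 0.5],
    entry_fee=0.3))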
In the design and analysis of revenue-maximizing auctions, auction performance is typically measured with respect to a prior distribution over inputs. The most obvious source for such a distribution is past data. The goal of this paper is to understand how much data is necessary and sufficient to guarantee near-optimal expected revenue.
We consider the problem of maximizing revenue for a monopolist offering multiple items to multiple heterogeneous buyers. We develop a simple mechanism that obtains a constant factor approximation under the assumption that the buyers' values are additive subject to a matroid feasibility constraint and independent across items. Importantly, different buyers in our setting can have different constraints on the sets of items they desire. Our mechanism is a sequential variant of two-part tariffs. Prior to our work, simple approximation mechanisms for such multi-buyer problems were known only for the special cases of all unit-demand or all additive value buyers.
In this paper, we present the first approximation algorithms for the problem of designing revenue optimal Bayesian incentive compatible auctions when there are multiple (heterogeneous) items and when bidders have arbitrary demand and budget constraints (and additive valuations). Our mechanisms are surprisingly simple: We show that a sequential all-pay mechanism is a 4-approximation to the revenue of the optimal ex-interim truthful mechanism with a discrete type space for each bidder, where her valuations for different items can be correlated. We also show that a sequential posted price mechanism is an O(1)-approximation to the revenue of the optimal ex-post truthful mechanism when the type space of each bidder is a product distribution that satisfies the standard hazard rate condition. We further show a logarithmic approximation when the hazard rate condition is removed, and complete the picture by showing that achieving a sub-logarithmic approximation, even for regular distributions and one bidder, requires pricing bundles of items. Our results are based on formulating novel LP relaxations for these problems, and developing generic rounding schemes from first principles.
We study simple and approximately optimal auctions for agents with a particular form of risk-averse preferences. We show that, for symmetric agents, the optimal revenue (given a prior distribution over the agent preferences) can be approximated by the first-price auction (which is prior independent), and, for asymmetric agents, the optimal revenue can be approximated by an auction with simple form. These results are based on two technical methods. The first is for upper-bounding the revenue from a risk-averse agent. The second gives a payment identity for mechanisms with pay-your-bid semantics.
We consider a general class of Bayesian Games where each player's utility depends on his type (possibly multidimensional) and on the strategy profile, and where players' types are distributed independently. We show that if their full information version for any fixed instance of the type profile is a smooth game, then the Price of Anarchy bound implied by the smoothness property carries over to the Bayes-Nash Price of Anarchy. We show how some proofs from the literature (item bidding auctions, greedy auctions) can be cast as smoothness proofs or be simplified using smoothness. For first price item bidding with fractionally subadditive bidders we substantially improve the existing result \cite{Hassidim2011a} from 4 to $\frac{e}{e-1}\approx 1.58$. This also shows a very interesting separation between first and second price item bidding, since second price item bidding has PoA at least 2 even under complete information. For a larger class of Bayesian Games where the strategy space of a player also changes with his type, we are able to show that a slightly stronger definition of smoothness also implies a Bayes-Nash PoA bound. We show that weighted congestion games actually satisfy this stronger definition of smoothness. This allows us to show that the inefficiency bounds of weighted congestion games known in the literature carry over to incomplete versions where the weights of the players are private information. We also show that an incomplete version of a natural class of monotone valid utility games, called effort market games, is universally $(1,1)$-smooth. Hence, we show that incomplete versions of effort market games where the abilities of the players and their budgets are private information have Bayes-Nash PoA at most 2.
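For reference, the standard smoothness condition and the bound it yields (Roughgarden's framework, which the result above extends to incomplete information) can be stated as follows: a utility-maximization game is $(\lambda,\mu)$-smooth if for every pair of strategy profiles $s, s^*$,
\[
\sum_i u_i(s_i^*, s_{-i}) \;\ge\; \lambda\,\mathrm{SW}(s^*) \;-\; \mu\,\mathrm{SW}(s),
\]
in which case every equilibrium $s$ satisfies $\mathrm{SW}(s) \ge \frac{\lambda}{1+\mu}\,\mathrm{SW}(s^*)$; the extension theorem shows this guarantee carries over to Bayes-Nash equilibria when types are independent.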
Contention resolution schemes have proven to be an incredibly powerful concept which allows tackling a broad class of problems. The framework was initially designed to handle submodular optimization under various types of constraints, that is, intersections of exchange systems (including matroids), knapsacks, and unsplittable flows on trees. Later on, it turned out that this framework perfectly extends to optimization under uncertainty, like stochastic probing and online selection problems, which further can be applied to mechanism design. We add to this line of work by showing how to create contention resolution schemes for intersections of matroids and knapsacks when we work in the random order setting. More precisely, we know the whole universe of elements in advance, but they appear in an order given by a random permutation. Upon arrival we need to irrevocably decide whether to take an element or not. We introduce a novel technique for analyzing procedures in the random order setting that is based on martingale theory. This unified approach makes it easier to combine constraints, and we do not need to rely on the monotonicity of contention resolution schemes, as was the case before. Our paper fills the gaps, extends, and creates connections between many previous results and techniques. The main application of our framework is a (k + 4 + ε)-approximation ratio for Bayesian multi-parameter unit-demand mechanism design under the constraint of k matroids intersection, which improves upon the previous bounds of 4k - 2 and e(k + 1). Other results include improved approximation ratios for stochastic k-set packing and submodular stochastic probing over an arbitrary nonnegative submodular objective function, whereas previous results required the objective to be monotone.
With the success of modern machine learning, it is becoming increasingly important to understand and control how learning algorithms interact. Unfortunately, negative results from game theory show there is little hope of understanding or controlling general n-player games. We therefore introduce smooth markets (SM-games), a class of n-player games with pairwise zero-sum interactions. SM-games codify a common design pattern in machine learning that includes (some) GANs, adversarial training, and other recent algorithms. We show that SM-games are amenable to analysis and optimization using first-order methods.
We provide a Polynomial Time Approximation Scheme for the multi-dimensional unit-demand pricing problem, when the buyer's values are independent (but not necessarily identically distributed). For all ε > 0, we obtain a (1 + ε)-factor approximation to the optimal revenue in time polynomial when the values are sampled from Monotone Hazard Rate (MHR) distributions, quasi-polynomial when sampled from regular distributions, and n^{poly(log r)} when sampled from general distributions supported on a set [u_min, r·u_min]. We also provide an additive PTAS for all bounded distributions. Our algorithms are based on novel extreme value theorems for MHR and regular distributions, and apply probabilistic techniques to understand the statistical properties of revenue distributions, as well as to reduce the size of the search space of the algorithm. As a byproduct of our techniques, we establish structural properties of optimal solutions. We show that, for all ε > 0, g(1/ε) distinct prices suffice to obtain a (1 + ε)-factor approximation to the optimal revenue for MHR distributions, where g(1/ε) is a quasi-linear function of 1/ε that does not depend on the number of items. Similarly, for all ε > 0 and n > 0, g(1/ε · log n) distinct prices suffice for regular distributions, where n is the number of items and g(·) is a polynomial function. Finally, in the i.i.d. MHR case, we show that, as long as the number of items is a sufficiently large function of 1/ε, a single price suffices to achieve a (1 + ε)-factor approximation. Our results represent significant progress on the single-bidder case of the multidimensional optimal mechanism design problem, following Myerson's celebrated work on optimal mechanism design [Myerson 1981].
Motivated by the pursuit of a systematic computational and algorithmic understanding of Generative Adversarial Networks (GANs), we present a simple yet unified non-asymptotic local convergence theory for smooth two-player games, which subsumes several discrete-time gradient-based saddle point dynamics. The analysis reveals the surprising nature of the off-diagonal interaction term as both a blessing and a curse. On the one hand, this interaction term explains the origin of the slow-down effect in the convergence of Simultaneous Gradient Ascent (SGA) to stable Nash equilibria. On the other hand, for the unstable equilibria, exponential convergence can be proved thanks to the interaction term, for four modified dynamics proposed to stabilize GAN training: Optimistic Mirror Descent (OMD), Consensus Optimization (CO), Implicit Updates (IU) and Predictive Method (PM). The analysis uncovers the intimate connections among these stabilizing techniques, and provides detailed characterization on the choice of learning rate. As a by-product, we present a new analysis for OMD proposed in Daskalakis, Ilyas, Syrgkanis, and Zeng [2017] with improved rates.
We show that the matroid secretary problem is equivalent to correlated contention resolution in the online random-order model. Specifically, the matroid secretary conjecture is true if and only if every matroid admits an online random-order contention resolution scheme which, given an arbitrary (possibly correlated) prior distribution over subsets of the ground set, matches the balance ratio of the best offline scheme for that distribution up to a constant. We refer to such a scheme as universal. Our result indicates that the core challenge of the matroid secretary problem lies in resolving contention for positively correlated inputs, in particular when the positive correlation is benign as far as offline contention resolution is concerned. Our result builds on our previous work which establishes one direction of this equivalence, namely that the secretary conjecture implies universal random-order contention resolution, as well as a weak converse, which derives a matroid secretary algorithm from a random-order contention resolution scheme with only partial knowledge of the distribution. It is this weak converse that we strengthen in this paper: We show that universal random-order contention resolution for matroids, in the usual setting of a fully known prior distribution, suffices to resolve the matroid secretary conjecture in the affirmative. Our proof is the composition of three reductions. First, we use duality arguments to reduce the matroid secretary problem to the matroid prophet secretary problem with arbitrarily correlated distributions. Second, we introduce a generalization of contention resolution we term labeled contention resolution, to which we reduce the correlated matroid prophet secretary problem. Finally, we combine duplication of elements with limiting arguments to reduce labeled contention resolution to classical contention resolution.
For Bayesian combinatorial auctions, we present a general framework for approximately reducing the mechanism design problem for multiple buyers to the mechanism design problem for each individual buyer. Our framework can be applied to any setting which roughly satisfies the following assumptions: (i) the buyers' types must be distributed independently (not necessarily identically), (ii) the objective function must be linearly separable over the set of buyers, and (iii) the supply constraints must be the only constraints involving more than one buyer. Our framework is general in the sense that it makes no explicit assumption about any of the following: (i) the buyers' valuations (e.g., submodular, additive, etc.), (ii) the distribution of types for each buyer, and (iii) the other constraints involving individual buyers (e.g., budget constraints, etc.). We present two generic n-buyer mechanisms that use 1-buyer mechanisms as black boxes. Assuming that we have an α-approximate 1-buyer mechanism for each buyer and assuming that no buyer ever needs more than 1/k of all copies of each item for some integer k ≥ 1, then our generic n-buyer mechanisms are a γ_k · α-approximation of the optimal n-buyer mechanism, in which γ_k is a constant which is at least 1 - 1/√(k+3). Observe that γ_k is at least 1/2 (for k = 1) and approaches 1 as k increases. As a byproduct of our construction, we improve a generalization of prophet inequalities. Furthermore, as applications of our main theorem, we improve several results from the literature.
We propose a Three-Player Generative Adversarial Network to improve classification networks. In addition to the game played between the discriminator and generator, a competition is introduced between the generator and the classifier. The generator's objective is to synthesize samples that are both realistic and hard to label for the classifier. Even though we make no assumptions on the type of augmentations to learn, we find that the model is able to synthesize realistic-looking examples that are hard for the classification model. Furthermore, the classifier becomes more robust when trained on these difficult samples. The method is evaluated on a public dataset for traffic sign recognition.
Signaling is an important topic in the study of asymmetric information in economic settings. In particular, the transparency of information available to a seller in an auction setting is a question of major interest. We introduce the study of signaling when conducting a second price auction of a probabilistic good whose actual instantiation is known to the auctioneer but not to the bidders. This framework can be used to model impressions selling in display advertising. We establish several results within this framework. First, we study the problem of computing a signaling scheme that maximizes the auctioneer’s revenue in a Bayesian setting. We show that this problem is polynomially solvable for some interesting special cases, but computationally hard in general. Second, we establish a tight bound on the minimum number of signals required to implement an optimal signaling scheme. Finally, we show that at least half of the maximum social welfare can be preserved within such a scheme.
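For orientation, such schemes are typically analyzed via the posterior decomposition familiar from Bayesian persuasion; a hedged restatement (not verbatim from the paper) of the auctioneer's problem is
\[
\max_{\{(p_\sigma,\,\mu_\sigma)\}}\ \sum_\sigma p_\sigma\,\mathrm{Rev}(\mu_\sigma)
\quad\text{s.t.}\quad \sum_\sigma p_\sigma\,\mu_\sigma = \mu_0,\qquad \sum_\sigma p_\sigma = 1,\ \ p_\sigma \ge 0,
\]
where $\mu_0$ is the prior over instantiations of the good, each signal $\sigma$ is sent with probability $p_\sigma$ and induces posterior $\mu_\sigma$, and $\mathrm{Rev}(\mu)$ denotes the second-price revenue when bidders best-respond to posterior $\mu$.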
A central object in optimal stopping theory is the single-choice prophet inequality for independent, identically distributed random variables: given a sequence of random variables X1, ..., Xn drawn independently from a distribution F, the goal is to choose a stopping time τ so as to maximize α such that for all distributions F we have E[Xτ] ≥ α · E[maxt Xt]. What makes this problem challenging is that the decision whether τ=t may only depend on the values of the random variables X1, ..., Xt and on the distribution F. For a long time the best known bound for the problem had been α ≥ 1-1/e ≈ 0.632, but quite recently a tight bound of α ≈ 0.745 was obtained. The case where F is unknown, such that the decision whether τ=t may depend only on the values of the random variables X1, ..., Xt, is equally well motivated but has received much less attention. A straightforward guarantee for this case of α ≥ 1/e ≈ 0.368 can be derived from the solution to the secretary problem, where an arbitrary set of values arrive in random order and the goal is to maximize the probability of selecting the largest value. We show that this bound is in fact tight. We then investigate the case where the stopping time may additionally depend on a limited number of samples from F, and show that even with o(n) samples one cannot do better than α = 1/e. On the other hand, n samples allow for a significant improvement, while O(n²) samples are equivalent to knowledge of the distribution: specifically, with n samples α ≥ 1-1/e ≈ 0.632 and α ≥ ln(2) ≈ 0.693, and with O(n²) samples α ≥ 0.745-ε for any ε > 0.
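As a toy illustration of sample-based stopping rules (not the paper's optimal rule): set the threshold to the maximum of n fresh samples and accept the first value exceeding it. A minimal Monte Carlo sketch, where the hypothetical `sampler` draws from F:

```python
import random

def sample_threshold_ratio(sampler, n, trials=10000):
    """Estimate E[X_tau] / E[max_t X_t] for the rule that accepts the
    first value exceeding the max of n fresh samples (falling back to
    the last value if none does). Illustrative only."""
    num = den = 0.0
    for _ in range(trials):
        threshold = max(sampler() for _ in range(n))
        xs = [sampler() for _ in range(n)]
        num += next((x for x in xs if x > threshold), xs[-1])
        den += max(xs)
    return num / den

# e.g. sample_threshold_ratio(random.random, n=50)
```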
Owing to their connection with generative adversarial networks (GANs), saddle-point problems have recently attracted considerable interest in machine learning and beyond. By necessity, most theoretical guarantees revolve around convex-concave (or even linear) problems; however, making theoretical inroads towards efficient GAN training depends crucially on moving beyond this classic framework. To make piecemeal progress along these lines, we analyze the behavior of mirror descent (MD) in a class of non-monotone problems whose solutions coincide with those of a naturally associated variational inequality - a property which we call coherence. We first show that ordinary, "vanilla" MD converges under a strict version of this condition, but not otherwise; in particular, it may fail to converge even in bilinear models with a unique solution. We then show that this deficiency is mitigated by optimism: by taking an "extra-gradient" step, optimistic mirror descent (OMD) converges in all coherent problems. Our analysis generalizes and extends the results of Daskalakis et al. (2018) for optimistic gradient descent (OGD) in bilinear problems, and makes concrete headway for establishing convergence beyond convex-concave games. We also provide stochastic analogues of these results, and we validate our analysis by numerical experiments in a wide array of GAN models (including Gaussian mixture models, as well as the CelebA and CIFAR-10 datasets).
The buying and selling of information is taking place at a scale unprecedented in the history of commerce, thanks to the formation of online marketplaces for user data. Data providing agencies sell user information to advertisers to allow them to match ads to viewers more effectively. In this paper we study the design of optimal mechanisms for a monopolistic data provider to sell information to a buyer, in a model where both parties have (possibly correlated) private signals about a state of the world, and the buyer uses information learned from the seller, along with his own signal, to choose an action (e.g., displaying an ad) whose payoff depends on the state of the world.
We consider a monopolist seller with n heterogeneous items, facing a single buyer. The buyer has a value for each item drawn independently according to (non-identical) distributions, and his value for a set of items is additive. The seller aims to maximize his revenue. It is known that an optimal mechanism in this setting may be quite complex, requiring randomization [19] and menus of infinite size [15]. Hart and Nisan [17] have initiated a study of two very simple pricing schemes for this setting: item pricing, in which each item is priced at its monopoly reserve; and bundle pricing, in which the entire set of items is priced and sold as one bundle. Hart and Nisan [17] have shown that neither scheme can guarantee more than a vanishingly small fraction of the optimal revenue. In sharp contrast, we show that for any distributions, the better of item and bundle pricing is a constant-factor approximation to the optimal revenue. We further discuss extensions to multiple buyers and to valuations that are correlated across items.
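A hedged Monte Carlo sketch of the two benchmarks being compared, for an additive buyer with independent item values; `samplers` and `grid` are hypothetical inputs, and monopoly prices are only approximated over the grid:

```python
import numpy as np

def srev_brev_estimate(samplers, grid, trials=20000):
    """Estimate SREV (each item sold separately at its best grid price)
    and BREV (the grand bundle at its best grid price); toy experiment."""
    vals = np.array([[s() for s in samplers] for _ in range(trials)])
    srev = sum(max(p * np.mean(vals[:, j] >= p) for p in grid)
               for j in range(vals.shape[1]))
    totals = vals.sum(axis=1)
    brev = max(p * np.mean(totals >= p) for p in grid)
    return srev, brev

# e.g. srev_brev_estimate([np.random.rand] * 5, grid=np.linspace(0.05, 5, 100))
```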
We provide a unified view of many recent developments in Bayesian mechanism design, including the black-box reductions of Cai et al., simple auctions for additive buyers, and posted-price mechanisms for unit-demand buyers. Additionally, we show that viewing these three previously disjoint lines of work through the same lens leads to new developments as well. First, we provide a duality framework for Bayesian mechanism design, which naturally accommodates multiple agents and arbitrary objectives/feasibility constraints. Using this, we prove that either a posted-price mechanism or the VCG auction with per-bidder entry fees achieves a constant factor of the optimal Bayesian IC revenue whenever buyers are unit-demand or additive, unifying previous breakthroughs of Chawla et al. and Yao, and improving both approximation ratios (from 33.75 to 24 and 69 to 8). Finally, we show that this view also leads to improved structural characterizations in the Cai et al. framework.
Auctions in which agents' payoffs are random variables have received increased attention in recent years. In particular, recent work in algorithmic mechanism design has produced mechanisms employing internal randomization, partly in response to limitations on deterministic mechanisms imposed by computational complexity. For many of these mechanisms, which are often referred to as truthful-in-expectation, incentive compatibility is contingent on the assumption that agents are risk-neutral. These mechanisms have been criticized on the grounds that this assumption is too strong, because "real" agents are typically risk averse, and moreover their precise attitude towards risk is typically unknown a priori. In response, researchers in algorithmic mechanism design have sought the design of universally-truthful mechanisms --- mechanisms for which incentive-compatibility makes no assumptions regarding agents' attitudes towards risk. We show that any truthful-in-expectation mechanism can be generically transformed into a mechanism that is incentive compatible even when agents are risk averse, without modifying the mechanism's allocation rule. The transformed mechanism does not require reporting of agents' risk profiles. Equivalently, our result can be stated as follows: Every (randomized) allocation rule that is implementable in dominant strategies when players are risk neutral is also implementable when players are endowed with an arbitrary and unknown concave utility function for money.
We present a general framework for proving polynomial sample complexity bounds for the problem of learning from samples the best auction in a class of "simple" auctions. Our framework captures all of the most prominent examples of "simple" auctions, including anonymous and non-anonymous item and bundle pricings, with either a single or multiple buyers. The technique we propose is to break the analysis of auctions into two natural pieces. First, one shows that the set of allocation rules have large amounts of structure; second, fixing an allocation on a sample, one shows that the set of auctions agreeing with this allocation on that sample have revenue functions with low dimensionality. Our results effectively imply that whenever it's possible to compute a near-optimal simple auction with a known prior, it is also possible to compute such an auction with an unknown prior (given a polynomial number of samples).
Algorithmic pricing is the computational problem that sellers (e.g., in supermarkets) face when trying to set prices for their items to maximize their profit in the presence of a known demand. Guruswami et al. (SODA, 2005) proposed this problem and gave logarithmic approximations (in the number of consumers) for the unit-demand and single-parameter cases where there is a specific set of consumers and their valuations for bundles are known precisely. Subsequently several versions of the problem have been shown to have poly-logarithmic inapproximability. This problem has direct ties to the important open question of better understanding the Bayesian optimal mechanism in multi-parameter agent settings; however, for this purpose approximation factors logarithmic in the number of agents are inadequate. It is therefore of vital interest to consider special cases where constant approximations are possible. We consider the unit-demand variant of this pricing problem. Here a consumer has a valuation for each different item and their value for a set of items is simply the maximum value they have for any item in the set. Instead of considering a set of consumers with precisely known preferences, like the prior algorithmic pricing literature, we assume that the preferences of the consumers are drawn from a distribution. This is the standard assumption in economics; furthermore, the setting of a specific set of customers with specific preferences, which is employed in all of the prior work in algorithmic pricing, is a special case of this general Bayesian pricing problem, where there is a discrete Bayesian distribution for preferences specified by picking one consumer uniformly from the given set of consumers. Notice that the distribution over the valuations for the individual items that this generates is obviously correlated. Our work complements these existing works by considering the case where the consumer's valuations for the different items are independent random variables. Our main result is a constant approximation algorithm for this problem that makes use of an interesting connection between this problem and the concept of virtual valuations from the single-parameter Bayesian optimal mechanism design literature.
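For reference, the virtual valuation underlying this connection is the standard Myerson quantity: for a value v drawn from a distribution with cdf F and density f,
\[
\varphi(v) \;=\; v \;-\; \frac{1 - F(v)}{f(v)},
\]
and for a regular distribution ($\varphi$ nondecreasing) the revenue-optimal posted price for a single consumer is the monopoly reserve $\varphi^{-1}(0)$.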
We characterize optimal mechanisms for the multiple-good monopoly problem and provide a framework to find them. We show that a mechanism is optimal if and only if a measure µ derived from the buyer's type distribution satisfies certain stochastic dominance conditions. This measure expresses the marginal change in the seller's revenue under marginal changes in the rent paid to subsets of buyer types. As a corollary, we characterize the optimality of grand-bundling mechanisms, strengthening several results in the literature, where only sufficient optimality conditions have been derived. As an application, we show that the optimal mechanism for n independent uniform items each supported on [c, c + 1] is a grand-bundling mechanism, as long as c is sufficiently large, extending Pavlov's result for 2 items [Pav11]. At the same time, our characterization also implies that, for all c and for all sufficiently large n, the optimal mechanism for n independent uniform items supported on [c, c + 1] is not a grand bundling mechanism.
Multi-armed bandit problems are the predominant theoretical model of exploration-exploitation tradeoffs in learning, and they have countless applications ranging from medical trials, to communication networks, to Web search and advertising. In many of these application domains the learner may be constrained by one or more supply (or budget) limits, in addition to the customary limitation on the time horizon. The literature lacks a general model encompassing these sorts of problems. We introduce such a model, called "bandits with knapsacks", that combines aspects of stochastic integer programming with online learning. A distinctive feature of our problem, in comparison to the existing regret-minimization literature, is that the optimal policy for a given latent distribution may significantly outperform the policy that plays the optimal fixed arm. Consequently, achieving sublinear regret in the bandits-with-knapsacks problem is significantly more challenging than in conventional bandit problems. We present two algorithms whose reward is close to the information-theoretic optimum: one is based on a novel "balanced exploration" paradigm, while the other is a primal-dual algorithm that uses multiplicative updates. Further, we prove that the regret achieved by both algorithms is optimal up to polylogarithmic factors. We illustrate the generality of the problem by presenting applications in a number of different domains including electronic commerce, routing, and scheduling. As one example of a concrete application, we consider the problem of dynamic posted pricing with limited supply and obtain the first algorithm whose regret, with respect to the optimal dynamic policy, is sublinear in the supply.
We consider the problem of designing revenue maximizing online posted-price mechanisms when the seller has limited supply. A seller has k identical items for sale and is facing n potential buyers ("agents") that are arriving sequentially. Each agent is interested in buying one item. Each agent's value for an item is an independent sample from some fixed (but unknown) distribution with support [0,1]. The seller offers a take-it-or-leave-it price to each arriving agent (possibly different for different agents), and aims to maximize his expected revenue.
Generative Adversarial Nets (GANs) have shown promise in image generation and semi-supervised learning (SSL). However, existing GANs in SSL have two problems: (1) the generator and the discriminator (i.e. the classifier) may not be optimal at the same time; and (2) the generator cannot control the semantics of the generated samples. The problems essentially arise from the two-player formulation, where a single discriminator shares incompatible roles of identifying fake samples and predicting labels and it only estimates the data without considering the labels. To address the problems, we present triple generative adversarial net (Triple-GAN), which consists of three players---a generator, a discriminator and a classifier. The generator and the classifier characterize the conditional distributions between images and labels, and the discriminator solely focuses on identifying fake image-label pairs. We design compatible utilities to ensure that the distributions characterized by the classifier and the generator both converge to the data distribution. Our results on various datasets demonstrate that Triple-GAN as a unified model can simultaneously (1) achieve the state-of-the-art classification results among deep generative models, and (2) disentangle the classes and styles of the input and transfer smoothly in the data space via interpolation in the latent space class-conditionally.
For Bayesian combinatorial auctions, we present a general framework for approximately reducing the mechanism design problem for multiple buyers to single-buyer sub-problems. Our framework can be applied to any setting which roughly satisfies the following assumptions: (i) buyers' types must be distributed independently (not necessarily identically), (ii) the objective function must be linearly separable over the buyers, and (iii) except for the supply constraints, there should be no other inter-buyer constraints. Our framework is general in the sense that it makes no explicit assumption about buyers' valuations, type distributions, and single-buyer constraints (e.g., budget, incentive compatibility, etc). We present two generic multi-buyer mechanisms which use single-buyer mechanisms as black boxes; if an $\alpha$-approximate single-buyer mechanism can be constructed for each buyer, and if no buyer requires more than $\frac{1}{k}$ of all units of each item, then our generic multi-buyer mechanisms are a $\gamma_k\alpha$-approximation of the optimal multi-buyer mechanism, where $\gamma_k$ is a constant which is at least $1-\frac{1}{\sqrt{k+3}}$. Observe that $\gamma_k$ is at least 1/2 (for $k=1$) and approaches 1 as $k \to \infty$. As a byproduct of our construction, we present a generalization of prophet inequalities. Furthermore, as applications of our framework, we present multi-buyer mechanisms with improved approximation factor for several settings from the literature.
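For context, the classic single-item prophet inequality that such constructions generalize can be illustrated with a simple threshold rule: accepting the first value above $T = \mathbb{E}[\max_i X_i]/2$ earns at least half the prophet's value in expectation. A toy simulation (names are hypothetical; the same draws are reused to estimate T, which adds mild bias in a real experiment):

```python
import numpy as np

def prophet_half_threshold(samplers, trials=20000):
    """Classic single-item prophet inequality illustration: accept the
    first X_i >= T where T = E[max_i X_i] / 2; payoff 0 if none qualifies.
    Returns (average accepted value, average prophet value)."""
    draws = np.array([[s() for s in samplers] for _ in range(trials)])
    T = draws.max(axis=1).mean() / 2.0
    accepted = [next((x for x in row if x >= T), 0.0) for row in draws]
    return float(np.mean(accepted)), float(draws.max(axis=1).mean())
```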
In this paper we consider solving saddle point problems using two variants of Gradient Descent-Ascent algorithms, Extra-gradient (EG) and Optimistic Gradient Descent Ascent (OGDA) methods. We show that both of these algorithms admit a unified analysis as approximations of the classical proximal point method for solving saddle point problems. This viewpoint enables us to develop a new framework for analyzing EG and OGDA for bilinear and strongly convex-strongly concave settings. Moreover, we use the proximal point approximation interpretation to generalize the results for OGDA for a wide range of parameters.
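Writing $z = (x, y)$ and $F(z) = (\nabla_x f(x,y),\, -\nabla_y f(x,y))$ for the saddle point problem $\min_x \max_y f(x,y)$, the updates in question take the standard forms
\[
\begin{aligned}
\text{Proximal point:}\quad & z_{k+1} = z_k - \eta\,F(z_{k+1}),\\
\text{EG:}\quad & z_{k+1/2} = z_k - \eta\,F(z_k), \qquad z_{k+1} = z_k - \eta\,F(z_{k+1/2}),\\
\text{OGDA:}\quad & z_{k+1} = z_k - 2\eta\,F(z_k) + \eta\,F(z_{k-1}),
\end{aligned}
\]
so both EG and OGDA can be read as explicit one-step approximations of the implicit proximal update.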
We consider the problem of maximizing a nonnegative submodular set function $f:2^N \rightarrow {\mathbb R}_+$ over a ground set $N$ subject to a variety of packing-type constraints including (multiple) matroid constraints, knapsack constraints, and their intersections. In this paper we develop a general framework that allows us to derive a number of new results, in particular, when $f$ may be a nonmonotone function. Our algorithms are based on (approximately) maximizing the multilinear extension $F$ of $f$ over a polytope $P$ that represents the constraints, and then effectively rounding the fractional solution. Although this approach has been used quite successfully, it has been limited in some important ways. We overcome these limitations as follows. First, we give constant factor approximation algorithms to maximize $F$ over a downward-closed polytope $P$ described by an efficient separation oracle. Previously this was known only for monotone functions. For nonmonotone functions, a constant factor was known only when the polytope was either the intersection of a fixed number of knapsack constraints or a matroid polytope. Second, we show that contention resolution schemes are an effective way to round a fractional solution, even when $f$ is nonmonotone. In particular, contention resolution schemes for different polytopes can be combined to handle the intersection of different constraints. Via linear programming duality we show that a contention resolution scheme for a constraint is related to the correlation gap of weighted rank functions of the constraint. This leads to an optimal contention resolution scheme for the matroid polytope. Our results provide a broadly applicable framework for maximizing linear and submodular functions subject to independence constraints. We give several illustrative examples. Contention resolution schemes may find other applications.
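For reference, the multilinear extension appearing here is the standard one:
\[
F(x) \;=\; \mathbb{E}\big[f(R(x))\big] \;=\; \sum_{S \subseteq N} f(S) \prod_{i \in S} x_i \prod_{j \notin S} (1 - x_j),
\]
where $R(x)$ is the random set that includes each element $i \in N$ independently with probability $x_i$.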
While studying the median of the binomial distribution we discovered that the mean-median-mode inequality, recently discussed in Statistica Neerlandica by Runnenburg [4] and Van Zwet [7] for continuous distributions, does not hold for the binomial distribution. If the mean is an integer, then mean = median = mode. In Theorem 1 a sufficient condition is given for mode = median = rounded mean. If median and mode differ, the mean lies in between.
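A quick numerical check of the integer-mean case (a minimal sketch using scipy; for Binomial(10, 0.5) the mean 10 · 0.5 = 5 is an integer):

```python
from scipy.stats import binom

n, p = 10, 0.5
mean = binom.mean(n, p)                                       # 5.0
median = binom.median(n, p)                                   # 5.0
mode = max(range(n + 1), key=lambda k: binom.pmf(k, n, p))    # 5
print(mean, median, mode)  # all three coincide, as the theorem states
```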
Let $S$ be the number of successes in $n$ independent trials, and let $p_j$ denote the probability of success in the $j$th trial, $j = 1, 2, \cdots, n$ (Poisson trials). We consider the problem of finding the maximum and the minimum of $Eg(S),$ the expected value of a given real-valued function of $S,$ when $ES = np$ is fixed. It is well known that the maximum of the variance of $S$ is attained when $p_1 = p_2 = \cdots = p_n = p.$ This can be interpreted as showing that the variability in the number of successes is highest when the successes are equally probable (Bernoulli trials). This interpretation is further supported by the following two theorems, proved in this paper. If $b$ and $c$ are two integers, $0 \leqq b \leqq np \leqq c \leqq n,$ the probability $P(b \leqq S \leqq c)$ attains its minimum if and only if $p_1 = p_2 = \cdots = p_n = p,$ unless $b = 0$ and $c = n$ (Theorem 5, a corollary of Theorem 4, which gives the maximum and the minimum of $P(S \leqq c)$). If $g$ is a strictly convex function, $Eg(S)$ attains its maximum if and only if $p_1 = p_2 = \cdots = p_n = p$ (Theorem 3). These results are obtained with the help of two theorems concerning the extrema of the expected value of an arbitrary function $g(S)$ under the condition $ES = np.$ Theorem 1 gives necessary conditions for the maximum and the minimum of $Eg(S).$ Theorem 2 gives a partial characterization of the set of points at which an extremum is attained. Corollary 2.1 states that the maximum and the minimum are attained when $p_1, p_2, \cdots, p_n$ take on, at most, three different values, only one of which is distinct from 0 and 1. Applications of Theorems 3 and 5 to problems of estimation and testing are pointed out in Section 5.
Designing revenue optimal auctions for selling an item to $n$ symmetric bidders is a fundamental problem in mechanism design. Myerson (1981) shows that the second price auction with an appropriate reserve price is optimal when bidders' values are drawn i.i.d. from a known regular distribution. A cornerstone in the prior-independent revenue maximization literature is a result by Bulow and Klemperer (1996) showing that the second price auction without a reserve achieves (n-1)/n of the optimal revenue in the worst case. We construct a randomized mechanism that strictly outperforms the second price auction in this setting. Our mechanism inflates the second highest bid with a probability that varies with $n$. For two bidders we improve the performance guarantee from 0.5 to 0.512 of the optimal revenue. We also resolve a question in the design of revenue optimal mechanisms that have access to a single sample from an unknown distribution. We show that a randomized mechanism strictly outperforms all deterministic mechanisms in terms of worst case guarantee.
We develop an automated approach for designing matrix multiplication algorithms based on constructions similar to the Coppersmith-Winograd construction. Using this approach we obtain a new improved bound on the matrix multiplication exponent ω < 2.3727.
We explore the properties of a congestion game in which users of a congested resource anticipate the effect of their actions on the price of the resource. When users are sharing a single resource, we establish that the aggregate utility received by the users is at least 3/4 of the maximum possible aggregate utility. We also consider extensions to a network context, where users submit individual payments for each link in the network they may wish to use. In this network model, we again show that the selfish behavior of the users leads to an aggregate utility that is no worse than 3/4 of the maximum possible aggregate utility. We also show that the same analysis extends to a wide class of resource allocation systems where end users simultaneously require multiple scarce resources. These results form part of a growing literature on the “price of anarchy,” i.e., the extent to which selfish behavior affects system efficiency.
We further develop the group-theoretic approach to fast matrix multiplication introduced by Cohn and Umans, and for the first time use it to derive algorithms asymptotically faster than the standard algorithm. We describe several families of wreath product groups that achieve matrix multiplication exponent less than 3, the asymptotically fastest of which achieves exponent 2.41. We present two conjectures regarding specific improvements, one combinatorial and the other algebraic. Either one would imply that the exponent of matrix multiplication is 2.
We provide a reduction from revenue maximization to welfare maximization in multidimensional Bayesian auctions with arbitrary - possibly combinatorial - feasibility constraints and independent bidders with arbitrary - possibly combinatorial - demand constraints, appropriately extending Myerson's single-dimensional result [21] to this setting. We also show that every feasible Bayesian auction - including in particular the revenue-optimal one - can be implemented as a distribution over virtual VCG allocation rules. A virtual VCG allocation rule has the following simple form: Every bidder's type ti is transformed into a virtual type fi(ti), via a bidder-specific function. Then, the allocation maximizing virtual welfare is chosen. Using this characterization, we show how to find and run the revenue-optimal auction given only black-box access to an implementation of the VCG allocation rule. We generalize this result to arbitrarily correlated bidders, introducing the notion of a second-order VCG allocation rule. Our results are computationally efficient for all multidimensional settings where the bidders are additive, or can be efficiently mapped to be additive, even though the feasibility and demand constraints may still remain arbitrary and combinatorial. In this case, our mechanisms run in time polynomial in the number of items and the total number of bidder types, but not type profiles. This is polynomial in the number of items, the number of bidders, and the cardinality of the support of each bidder's value distribution. For generic correlated distributions, this is the natural description complexity of the problem. The runtime can be further improved to polynomial in only the number of items and the number of bidders in item-symmetric settings by making use of techniques from [15].
We study the problem of setting a price for a potential buyer with a valuation drawn from an unknown distribution D. The seller has "data" about D in the form of m ≥ 1 i.i.d. samples, and the algorithmic challenge is to use these samples to obtain expected revenue as close as possible to what could be achieved with advance knowledge of D.
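A natural baseline for this problem is empirical revenue maximization: post the sample value that maximizes empirical revenue on the data. A minimal sketch of this standard baseline (not necessarily the paper's final algorithm, which may need to guard against overfitting to the highest samples):

```python
def erm_price(samples):
    """Empirical revenue maximization: return the sample value p that
    maximizes p * (fraction of samples >= p). With samples sorted in
    decreasing order, exactly i of them are >= the i-th largest value."""
    samples = sorted(samples, reverse=True)
    m = len(samples)
    best_price, best_rev = None, -1.0
    for i, p in enumerate(samples, start=1):
        rev = p * i / m          # empirical revenue at price p
        if rev > best_rev:
            best_price, best_rev = p, rev
    return best_price
```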