In online combinatorial allocations/auctions, n bidders sequentially arrive, each with a combinatorial valuation (such as submodular/XOS) over subsets of m indivisible items. The aim is to immediately allocate each arriving bidder a subset of the remaining items so as to maximize the total welfare, defined as the sum of bidder valuations. A long line of work has studied this problem when the bidder valuations come from known independent distributions. In particular, for submodular/XOS valuations, we know 2-competitive algorithms/mechanisms that set a fixed price for each item and let each arriving bidder take her favorite subset of the remaining items at these prices. However, these algorithms traditionally presume the availability of the underlying distributions as part of the input to the algorithm. Contrary to this assumption, practical scenarios often require the learning of distributions, a task complicated by limited sample availability. This paper investigates the feasibility of achieving O(1)-competitive algorithms under the realistic constraint of having access to only a limited number of samples from the underlying bidder distributions. Our first main contribution shows that a mere single sample from each bidder distribution is sufficient to yield an O(1)-competitive algorithm for submodular/XOS valuations. This result leverages a novel extension of the secretary-style analysis, employing the sample to have the algorithm compete against itself. Although online, this first approach does not provide an online truthful mechanism. Our second main contribution shows that a polynomial number of samples suffices to yield a $(2+\epsilon)$-competitive online truthful mechanism for submodular/XOS valuations, for any constant $\epsilon>0$. This result is based on a generalization of the median-based algorithm for the single-item prophet inequality problem to combinatorial settings with multiple items.
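The second result builds on the classical median rule for the single-item prophet inequality, which is simple enough to sketch. The snippet below is that single-item baseline, not the paper's combinatorial generalization; the sampler interface is a hypothetical stand-in for distributional (or sample) access.

```python
def median_of_max(samplers, trials=10_001):
    """Monte Carlo estimate of the median of max_i v_i, where samplers[i]()
    draws a fresh value for bidder i (hypothetical interface)."""
    maxima = sorted(max(s() for s in samplers) for _ in range(trials))
    return maxima[trials // 2]

def sell_at_median(samplers, realized_values):
    """Classical median rule: post the median of the maximum value as a price
    and sell to the first arriving bidder who meets it; this guarantees half
    of E[max_i v_i] in expectation."""
    price = median_of_max(samplers)
    for i, v in enumerate(realized_values):
        if v >= price:
            return i, price  # winner index and payment
    return None, 0.0         # item goes unsold
```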
We consider a principal seller with $m$ heterogeneous products to sell to an additive buyer over independent items. The principal can offer an arbitrary menu of product bundles, but faces competition from smaller and more agile single-item sellers. The single-item sellers choose their prices after the principal commits to a menu, potentially undercutting the principal's offerings. We explore to what extent the principal can leverage the ability to bundle products together to extract revenue. Any choice of menu by the principal induces an oligopoly pricing game between the single-item sellers, which may have multiple equilibria. When there is only a single item this model reduces to Bertrand competition, for which the principal's revenue is $0$ at any equilibrium, so we assume that no single item's value is too dominant. We establish an upper bound on the principal's optimal revenue at every equilibrium: the expected welfare after truncating each item's value to its revenue-maximizing price. Under a technical condition on the value distributions -- that the monopolist's revenue is sufficiently sensitive to price -- we show that the principal seller can simply price the grand-bundle and ensure (in any equilibrium) a constant approximation to this bound (and hence to the optimal revenue). We also show that for some value distributions violating our conditions, grand-bundle pricing does not yield a constant approximation to the optimal revenue in any equilibrium.
We study a game played between advertisers in an online ad platform. The platform sells ad impressions by first-price auction and provides autobidding algorithms that optimize bids on each advertiser's behalf, subject to advertiser constraints such as budgets. Crucially, these constraints are strategically chosen by the advertisers. The chosen constraints define an "inner" budget-pacing game for the autobidders. Advertiser payoffs in the constraint-choosing "metagame" are determined by the equilibrium reached by the autobidders. Advertiser preferences can be more general than what is implied by their constraints: we assume only that they have weakly decreasing marginal value for clicks and weakly increasing marginal disutility for spending money. Nevertheless, we show that at any pure Nash equilibrium of the metagame, the resulting allocation obtains at least half of the liquid welfare of any allocation, and this bound is tight. We also obtain a 4-approximation for any mixed Nash or Bayes-Nash equilibrium. These results rely on the power to declare budgets: if advertisers can specify only a (linear) value per click or an ROI target but not a budget constraint, the approximation factor at equilibrium can be as bad as linear in the number of advertisers.
Traditionally, AI has been modeled within economics as a technology that impacts payoffs by reducing costs or refining information for human agents. Our position is that, in light of recent advances in generative AI, it is increasingly useful to model AI itself as an economic agent. In our framework, each user is augmented with an AI agent and can consult the AI prior to taking actions in a game. The AI agent and the user have potentially different information and preferences over the communication, which can result in equilibria that are qualitatively different than in settings without AI.
We study the problem of a principal who wants to influence an agent's observable action, subject to an ex-post budget. The agent has a private type determining their cost function. This paper endogenizes the value of the resource used to provide incentives: the resource has no inherent value of its own but is available only in finite supply. We characterize the optimal mechanism, showing the emergence of a pooling region where the budget constraint binds for low-cost types. We then introduce a linear value for the transferable resource; as the principal's value increases, the mechanism demands more from agents with a binding budget constraint but less from others.
We introduce a model of online algorithms subject to strict constraints on data retention. An online learning algorithm encounters a stream of data points, one per round, generated by some stationary process. Crucially, each data point can request that it be removed from memory $m$ rounds after it arrives. To model the impact of removal, we do not allow the algorithm to store any information or calculations between rounds other than a subset of the data points (subject to the retention constraints). At the conclusion of the stream, the algorithm answers a statistical query about the full dataset. We ask: what level of performance can be guaranteed as a function of $m$? We illustrate this framework for multidimensional mean estimation and linear regression problems. We show it is possible to obtain an exponential improvement over a baseline algorithm that retains all data as long as possible. Specifically, we show that $m = \textsc{Poly}(d, \log(1/\epsilon))$ retention suffices to achieve mean squared error $\epsilon$ after observing $O(1/\epsilon)$ $d$-dimensional data points. This matches the error bound of the optimal, yet infeasible, algorithm that retains all data forever. We also show a nearly matching lower bound on the retention required to guarantee error $\epsilon$. One implication of our results is that data retention laws are insufficient to guarantee the right to be forgotten even in a non-adversarial world in which firms merely strive to (approximately) optimize the performance of their algorithms. Our approach makes use of recent developments in the multidimensional random subset sum problem to simulate the progression of stochastic gradient descent under a model of adversarial noise, which may be of independent interest.
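For concreteness, here is a minimal sketch of the baseline that the abstract's exponential improvement is measured against: retain every point for as long as the constraint allows (so, if every point requests removal, only the last $m$ points survive) and answer the mean query from what remains. The paper's construction via random subset sums and simulated SGD is not shown.

```python
from collections import deque

def retained_mean(stream, m):
    """Baseline estimator under the retention constraint: keep each
    d-dimensional point until it must be deleted m rounds after arrival,
    then answer the mean query from the surviving (last m) points."""
    window = deque(maxlen=m)   # appending past capacity drops the oldest point
    for x in stream:
        window.append(x)
    n, d = len(window), len(window[0])
    return [sum(p[j] for p in window) / n for j in range(d)]
```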
We develop a model of content filtering as a game between the filter and the content consumer, where the latter incurs information costs for examining the content. Motivating examples include censoring misinformation, spam/phish filtering, and recommender systems acting on a stream of content. When the attacker is exogenous, we show that improving the filter’s quality is weakly Pareto improving, but has no impact on equilibrium payoffs until the filter becomes sufficiently accurate. Further, if the filter does not internalize the consumer’s information costs, its lack of commitment power may render it useless and lead to inefficient outcomes. When the attacker is also strategic, improvements in filter quality may decrease equilibrium payoffs.
When deployed in the world, a learning agent such as a recommender system or a chatbot often repeatedly interacts with another learning agent (such as a user) over time. In many such two-agent systems, each agent learns separately and the rewards of the two agents are not perfectly aligned. To better understand such cases, we examine the learning dynamics of the two-agent system and the implications for each agent's objective. We model these systems as Stackelberg games with decentralized learning and show that standard regret benchmarks (such as Stackelberg equilibrium payoffs) result in worst-case linear regret for at least one player. To better capture these systems, we construct a relaxed regret benchmark that is tolerant to small learning errors by agents. We show that standard learning algorithms fail to provide sublinear regret, and we develop algorithms to achieve near-optimal $O(T^{2/3})$ regret for both players with respect to these benchmarks. We further design relaxed environments under which faster learning ($O(\sqrt{T})$) is possible. Altogether, our results take a step towards assessing how two-agent interactions in sequential and decentralized learning environments affect the utility of both agents.
Online content platforms commonly use engagement-based optimization when making recommendations. This encourages content creators to invest in quality, but also rewards gaming tricks such as clickbait. To understand the total impact on the content landscape, we study a game between content creators competing on the basis of engagement metrics and analyze the equilibrium decisions about investment in quality and gaming. First, we show the content created at equilibrium exhibits a positive correlation between quality and gaming, and we empirically validate this finding on a Twitter dataset. Using the equilibrium structure of the content landscape, we then examine the downstream performance of engagement-based optimization along several axes. Perhaps counterintuitively, the average quality of content consumed by users can decrease at equilibrium as gaming tricks become more costly for content creators to employ. Moreover, engagement-based optimization can perform worse in terms of user utility than a baseline with random recommendations, and engagement-based optimization is also suboptimal in terms of realized engagement relative to quality-based optimization. Altogether, our results highlight the need to consider content creator incentives when evaluating a platform's choice of optimization metric.
We study a game between autobidding algorithms that compete in an online advertising platform. Each autobidder is tasked with maximizing its advertiser's total value over multiple rounds of a repeated auction, subject to budget and/or return-on-investment constraints. We propose a gradient-based learning algorithm that is guaranteed to satisfy all constraints and achieves vanishing individual regret. Our algorithm uses only bandit feedback and can be used with the first- or second-price auction, as well as with any "intermediate" auction format. Our main result is that when these autobidders play against each other, the resulting expected liquid welfare over all rounds is at least half of the expected optimal liquid welfare achieved by any allocation. This holds whether or not the bidding dynamics converge to an equilibrium, and regardless of the correlation structure between advertiser valuations.
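The flavor of such a gradient-based pacer can be conveyed with a standard dual-pacing sketch: shade bids by a multiplier and take projected gradient steps on the per-round budget. This is the textbook form of budget pacing, not the paper's exact algorithm (which also enforces ROI constraints); the auction interface below is a hypothetical stand-in for bandit feedback.

```python
def pace_bids(values, spend_in_auction, budget, T, eta=0.05, mu_max=10.0):
    """Dual-gradient budget pacing sketch: bid value/(1+mu) and update mu by
    projected gradient steps toward the per-round budget rho = budget / T.
    spend_in_auction(t, bid) returns the round-t payment (0 on a loss); this
    bandit-feedback interface is hypothetical."""
    mu, spent, rho = 0.0, 0.0, budget / T
    for t in range(T):
        bid = min(values[t] / (1.0 + mu), budget - spent)  # never overspend
        cost = spend_in_auction(t, bid)
        spent += cost
        mu = min(max(mu + eta * (cost - rho), 0.0), mu_max)  # projected step
    return spent
```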
Motivated by applications such as voluntary carbon markets and educational testing, we consider a market for goods with varying but hidden levels of quality in the presence of a third-party certifier. The certifier can provide informative signals about the quality of products, and can charge for this service. Sellers choose both the quality of the product they produce and a certification. Prices are then determined in a competitive market. Under a single-crossing condition, we show that the levels of certification chosen by producers are uniquely determined at equilibrium. We then show how to reduce a revenue-maximizing certifier's problem to a monopolistic pricing problem with non-linear valuations, and design an FPTAS for computing the optimal slate of certificates and their prices. In general, both the welfare-optimal and revenue-optimal slate of certificates can be arbitrarily large.
We study a Bayesian persuasion problem where a sender wants to persuade a receiver to take a binary action, such as purchasing a product. The sender is informed about the (binary) state of the world, such as whether the quality of the product is high or low, but only has limited information about the receiver's beliefs and utilities. Motivated by customer surveys, user studies, and recent advances in generative AI, we allow the sender to learn more about the receiver by querying an oracle that simulates the receiver's behavior. After a fixed number of queries, the sender commits to a messaging policy and the receiver takes the action that maximizes her expected utility given the message she receives. We characterize the sender's optimal messaging policy given any distribution over receiver types. We then design a polynomial-time querying algorithm that optimizes the sender's expected utility in this Bayesian persuasion game. We also consider approximate oracles, more general query structures, and costly queries.
We present a general framework for designing approximately revenue-optimal mechanisms for multi-item additive auctions, which applies to both truthful and nontruthful auctions. Given a (not necessarily truthful) single-item auction format satisfying certain technical conditions, we run simultaneous item auctions augmented with a personalized entry fee for each bidder that must be paid before the auction can be accessed. These entry fees depend only on the prior distribution of bidder types and in particular are independent of realized bids. We bound the revenue of the resulting two-part tariff mechanism using a novel geometric technique that enables revenue guarantees for many common nontruthful auctions that previously had none. Our approach adapts and extends the duality framework of Cai, Devanur, and Weinberg [SIAM J. Comput., 50 (2021), pp. STOC16-160–STOC16-200] beyond truthful auctions. Our framework can be used with many common auction formats, such as simultaneous first-price, simultaneous second-price, and simultaneous all-pay auctions. Our results for first-price and all-pay are the first revenue guarantees of nontruthful mechanisms in multidimensional environments, addressing an open question in the literature [T. Roughgarden, V. Syrgkanis, and E. Tardos, J. Artificial Intelligence Res., 59 (2017), pp. 59–101]. If all-pay auctions are used, we prove that the resulting mechanism is also credible in the sense that the auctioneer cannot benefit by deviating from the stated mechanism after observing agent bids. This is the first static credible mechanism for multi-item additive auctions that achieves a constant factor of the optimal revenue. If second-price auctions are used, we obtain a truthful O(1)-approximate mechanism with fixed entry fees that are amenable to tuning via online learning techniques.
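A minimal sketch of the two-part tariff structure follows. The fee rule shown (a fixed fraction of each bidder's simulated expected utility in the item auctions) and the interfaces are illustrative assumptions, not the paper's construction; what matters is that the fees are computed from the prior alone, before any bids are seen.

```python
def entry_fees(prior_samplers, run_auction, fee_frac=0.5, trials=1000):
    """Compute personalized entry fees from the prior only: simulate the
    simultaneous item auctions on types drawn from the prior and charge
    bidder i a fraction of her average utility. prior_samplers[i]() draws a
    type; run_auction(types) returns per-bidder utilities (both hypothetical)."""
    n = len(prior_samplers)
    fees = []
    for i in range(n):
        total = 0.0
        for _ in range(trials):
            types = [s() for s in prior_samplers]
            total += run_auction(types)[i]
        fees.append(fee_frac * total / trials)
    return fees  # bidder i pays fees[i] up front, then the item auctions run
```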
We study the design of a decentralized two-sided matching market in which agents’ search is guided by the platform. There are finitely many agent types, each with (potentially random) preferences drawn from known type-specific distributions. Equipped with knowledge of these distributions, the platform guides the search process by determining the meeting rate between each pair of types from the two sides. Focusing on symmetric pairwise preferences in a continuum model, we first characterize the unique stationary equilibrium that arises given a feasible set of meeting rates. We then introduce the platform’s optimal directed search problem, which involves optimizing meeting rates to maximize equilibrium social welfare. We first show that incentive issues arising from congestion and cannibalization make the design problem fairly intricate. Nonetheless, we develop an efficiently computable search design whose corresponding equilibrium achieves at least 1/4 the social welfare of the optimal design. In fact, our construction always recovers at least 1/4 the first-best social welfare, where agents’ incentives are disregarded. Our search design is simple and easy to implement, as its corresponding bipartite graph consists of disjoint stars. Furthermore, our design implies the platform can substantially limit choice and yet induce an equilibrium with approximately optimal welfare. We offer alternative search designs with improved approximation factors for markets with certain special structures. Finally, we show that approximation is likely the best one can hope for by establishing that the problem of designing optimal directed search is NP-hard to even approximate beyond a certain constant factor.
The tendency for individuals to form social ties with others who are similar to themselves, known as homophily, is one of the most robust sociological principles. Since this phenomenon can lead to patterns of interactions that segregate people along different demographic dimensions, it can also lead to inequalities in access to information, resources, and opportunities. As we consider potential interventions that might alleviate the effects of segregation, we face the challenge that homophily constitutes a pervasive and organic force that is difficult to push back against. Designing effective interventions can therefore benefit from identifying counterbalancing social processes that might be harnessed to work in opposition to segregation. In this work, we show that triadic closure -- another common phenomenon that posits that individuals with a mutual connection are more likely to be connected to one another -- can be one such process. In doing so, we challenge a long-held belief that triadic closure and homophily work in tandem. By analyzing several fundamental network models using popular integration measures, we demonstrate the desegregating potential of triadic closure. We further empirically investigate this effect on real-world dynamic networks, surfacing observations that mirror our theoretical findings. We leverage these insights to discuss simple interventions that can help reduce segregation in settings that exhibit an interplay between triadic closure and homophily. We conclude with a discussion on qualitative implications for the design of interventions in settings where individuals arrive in an online fashion, and the designer can influence the initial set of connections.
Prophet inequalities compare the expected performance of an online algorithm for a stochastic optimization problem to the expected optimal solution in hindsight. They are a major alternative to classic worst-case competitive analysis, of particular importance in the design and analysis of simple (posted-price) incentive compatible mechanisms with provable approximation guarantees. A central open problem in this area concerns subadditive combinatorial auctions. Here $n$ agents with subadditive valuation functions compete for the assignment of $m$ items. The goal is to find an allocation of the items that maximizes the total value of the assignment. The question is whether there exists a prophet inequality for this problem that significantly beats the best known approximation factor of $O(\log m)$. We make major progress on this question by providing an $O(\log \log m)$ prophet inequality. Our proof goes through a novel primal-dual approach. It is also constructive, resulting in an online policy that takes the form of static and anonymous item prices that can be computed in polynomial time given appropriate query access to the valuations. As an application of our approach, we construct a simple and incentive compatible mechanism based on posted prices that achieves an $O(\log \log m)$ approximation to the optimal revenue for subadditive valuations under an item-independence assumption.
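The online policy itself is simple; all of the difficulty lies in computing prices that certify the $O(\log \log m)$ bound, which is not shown here. A sketch of the policy, assuming a hypothetical best_bundle query that returns an agent's utility-maximizing bundle at given prices:

```python
def posted_price_allocation(prices, agents):
    """Static, anonymous item pricing: agents arrive in any order and each
    buys a utility-maximizing bundle of the items still available, paying
    the sum of its item prices."""
    available = set(range(len(prices)))
    allocation, revenue = {}, 0.0
    for agent in agents:
        bundle = agent.best_bundle(available, prices)  # argmax v(S) - p(S)
        allocation[agent] = bundle
        revenue += sum(prices[j] for j in bundle)
        available -= set(bundle)
    return allocation, revenue
```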
Cloud computing customers often submit repeating jobs and computation pipelines on \emph{approximately} regular schedules, with arrival and running times that exhibit variance. This pattern, typical of training tasks in machine learning, allows customers to partially predict future job requirements. We develop a model of cloud computing platforms that receive statements of work (SoWs) in an online fashion. The SoWs describe future jobs whose arrival times and durations are probabilistic, and whose utility to the submitting agents declines with completion time. The arrival and duration distributions, as well as the utility functions, are considered private customer information and are reported by strategic agents to a scheduler that is optimizing for social welfare. We design pricing, scheduling, and eviction mechanisms that incentivize truthful reporting of SoWs. An important challenge is maintaining incentives despite the possibility of the platform becoming saturated. We introduce a framework to reduce scheduling under uncertainty to a relaxed scheduling problem without uncertainty. Using this framework, we tackle both adversarial and stochastic submissions of statements of work, and obtain logarithmic and constant competitive mechanisms, respectively.
We study the aggregate welfare and individual regret guarantees of dynamic \emph{pacing algorithms} in the context of repeated auctions with budgets. Such algorithms are commonly used as bidding agents in Internet advertising platforms. We show that when agents simultaneously apply a natural form of gradient-based pacing, the liquid welfare obtained over the course of the learning dynamics is at least half the optimal expected liquid welfare obtainable by any allocation rule. Crucially, this result holds \emph{without requiring convergence of the dynamics}, allowing us to circumvent known complexity-theoretic obstacles of finding equilibria. This result is also robust to the correlation structure between agent valuations and holds for any \emph{core auction}, a broad class of auctions that includes first-price, second-price, and generalized second-price auctions. For individual guarantees, we further show such pacing algorithms enjoy \emph{dynamic regret} bounds for individual value maximization, with respect to the sequence of budget-pacing bids, for any auction satisfying a monotone bang-for-buck property.
We study a communication game between a sender and receiver where the sender has access to a set of informative signals about a state of the world. The sender chooses one of her signals, called an ``anecdote'' and communicates it to the receiver. The receiver takes an action, yielding a utility for both players. Sender and receiver both care about the state of the world but are also influenced by a personal preference so that their ideal actions differ. We characterize perfect Bayesian equilibria when the sender cannot commit to a particular communication scheme. In this setting the sender faces ``persuasion temptation'': she is tempted to select a more biased anecdote to influence the receiver's action. Anecdotes are still informative to the receiver but persuasion comes at the cost of precision. This gives rise to ``informational homophily'' where the receiver prefers to listen to like-minded senders because they provide higher-precision signals. In particular, we show that a sender with access to many anecdotes will essentially send the minimum or maximum anecdote even though with high probability she has access to an anecdote close to the state of the world that would almost perfectly reveal it to the receiver. In contrast to the classic Crawford-Sobel model, full revelation is a knife-edge equilibrium and even small differences in personal preferences will induce highly polarized communication and a loss in utility for any equilibrium. We show that for fat-tailed anecdote distributions the receiver might even prefer to talk to poorly informed senders with aligned preferences rather than a knowledgeable expert whose preferences may differ from her own. We also show that under commitment differences in personal preferences no longer affect communication and the sender will generally report the most representative anecdote closest to the posterior mean for common distributions.
Standard ad auction formats do not immediately extend to settings where multiple size configurations and layouts are available to advertisers. In these settings, the sale of web advertising space increasingly resembles a combinatorial auction with complementarities, where truthful auctions such as the Vickrey–Clarke–Groves (VCG) auction can yield unacceptably low revenue. In “Fast Core Pricing for Rich Advertising Auctions,” Niazadeh, Hartline, Immorlica, Khani, and Lucier study and suggest core-selecting auctions, which boost revenue by setting payments so that no group of agents, including the auctioneer, can jointly improve their utilities by switching to a different outcome. Their main result is a combinatorial algorithm that finds an approximate bidder-optimal core point with an almost linear number of calls to the welfare-maximization oracle. This algorithm is faster than previously proposed heuristics in the literature and has theoretical guarantees. By accompanying the theoretical study with an experimental study based on Microsoft Bing Ad Auction data, the authors conclude that core pricing is implementable even for very time-sensitive practical use cases such as real-time online advertising and can yield more revenue than the VCG or generalized second price auction.
We study the design of a decentralized two-sided matching market in which agents' search is guided by the platform. Each agent is of one of finitely many types and has (potentially random) preferences drawn from known type-specific distributions. Equipped with such distributional knowledge, the platform guides the search process by determining the meeting rate between each pair of types from the two sides. Meanwhile, agents strategically accept or reject the potential partners whom they meet. Focusing on when agents have symmetric pairwise preferences in a continuum model, we first characterize the unique stationary equilibrium that arises given a feasible set of meeting rates. We then introduce the platform's optimal directed search problem, which involves optimizing meeting rates to maximize equilibrium social welfare. We show that incentive issues arising from congestion and cannibalization make the design problem fairly intricate. Nonetheless, we develop an efficiently computable solution whose corresponding equilibrium achieves at least 1/4 of the optimal social welfare. Our directed search design is simple and easy-to-implement, as its corresponding bipartite graph consists of disjoint stars. Furthermore, our solution implies that, with careful search design, the platform can substantially limit choice and yet induce an equilibrium with approximately optimal welfare. Finally, we show that approximation is likely the best we can hope for by establishing that the problem of designing optimal directed search is NP-hard to approximate beyond a certain constant factor.
A common assumption in auction theory is that the information available to the agents is given exogenously and that the auctioneer has full control over the market. In practice, agents might be able to acquire information about their competitors before the auction (by exerting some costly effort), and might be able to resell acquired items in an aftermarket. The auctioneer has no control over those aspects, yet their existence influences agents' strategic behavior and the overall equilibrium welfare can strictly decrease as a result.
We show that if an auction is smooth (e.g., first-price auction, all-pay auction), then the corresponding price of anarchy bound due to smoothness continues to hold in any environment with (a) information acquisition on opponents' valuations, and/or (b) an aftermarket satisfying two mild conditions (voluntary participation and weak budget balance). We also consider the special case with two ex ante symmetric bidders, where the first-price auction is known to be efficient in isolation. We show that information acquisition can lead to efficiency loss in this environment, but aftermarkets do not: any equilibrium of a first-price or all-pay auction combined with an aftermarket is still efficient.
We study mechanisms for selling a single item when buyers have private values for their outside options, which they forego by participating in the mechanism. This substantially changes the revenue maximization problem. For example, the seller can strictly benefit from selling lotteries already in the single-buyer setting. We bound the menu size and the sample complexity for the optimal single-buyer mechanism. We then show that posting a single price is in fact optimal under the assumption that either (1) the outside option value is independent of the item value, and the item value distribution has decreasing marginal revenue or monotone hazard rate; or (2) the outside option value is a concave function of the item value. Moreover, when there are multiple buyers, we show that sequential posted pricing guarantees a large fraction of the optimal revenue under similar conditions.
Assortment optimization refers to the problem of designing a slate of products to offer potential customers, such as stocking the shelves in a convenience store. The price of each product is fixed in advance, and a probabilistic choice function describes which product a customer will choose from any given subset. We introduce the combinatorial assortment problem, where each customer may select a bundle of products. We consider a choice model in which each consumer selects a utility-maximizing bundle subject to a private valuation function, and study the complexity of the resulting optimization problem. Our main result is an exact algorithm for additive k-demand valuations, under a model of vertical differentiation in which customers agree on the relative value of each pair of items but differ in their absolute willingness to pay. For valuations that are vertically differentiated but not necessarily additive k-demand, we show how to obtain constant approximations under a “well-priced” condition, where each product’s price is sufficiently high. We further show that even for a single customer with known valuation, any sub-polynomial approximation to the problem requires exponentially many demand queries when the valuation function is XOS, and that no FPTAS exists even when the valuation is succinctly representable.
We consider a model where an agent has a repeated decision to make and wishes to maximize their total payoff. Payoffs are influenced by an action taken by the agent, but also an unknown state of the world that evolves over time. Before choosing an action each round, the agent can purchase noisy samples about the state of the world. The agent has a budget to spend on these samples, and has flexibility in deciding how to spread that budget across rounds. We investigate the problem of choosing a sampling algorithm that optimizes total expected payoff. For example: is it better to buy samples steadily over time, or to buy samples in batches? We solve for the optimal policy, and show that it is a natural instantiation of the latter. Under a more general model that includes per-round fixed costs, we prove that a variation on this batching policy is a 2-approximation.
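The steady-versus-batches question can be made concrete with a small simulation. All modeling choices below (random-walk state, Gaussian sample noise, squared-error loss, the two example policies) are illustrative and not taken from the paper.

```python
import random

def simulate(policy, T=1000, budget=100, sigma_state=1.0, sigma_obs=3.0, seed=0):
    """Toy harness: a hidden state drifts each round; the agent may buy noisy
    samples, acts on its current estimate, and accumulates squared error."""
    rng = random.Random(seed)
    state, estimate, loss, spent = 0.0, 0.0, 0.0, 0
    for t in range(T):
        state += rng.gauss(0.0, sigma_state)
        k = policy(t, budget - spent)          # samples to buy this round
        if k > 0:
            obs = [state + rng.gauss(0.0, sigma_obs) for _ in range(k)]
            estimate = sum(obs) / k            # refresh estimate from the batch
            spent += k
        loss += (estimate - state) ** 2
    return loss

steady = lambda t, remaining: 1 if t % 10 == 0 and remaining > 0 else 0
batched = lambda t, remaining: min(5, remaining) if t % 50 == 0 else 0
print(simulate(steady), simulate(batched))     # both spend 100 samples over T rounds
```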
We study mechanisms for selling a single item when buyers have private costs for participating in the mechanism. An agent's participation cost can also be interpreted as an outside option value that she must forego to participate. This substantially changes the revenue maximization problem, which becomes non-convex in the presence of participation costs. For multiple buyers, we show how to construct a $(2+\epsilon)$-approximately revenue-optimal mechanism in polynomial time. Our approach makes use of a many-buyers-to-single-buyer reduction, and in the single-buyer case our mechanism improves to an FPTAS. We also bound the menu size and the sample complexity for the optimal single-buyer mechanism. Moreover, we show that posting a single price in the single-buyer case is in fact optimal under the assumption that either (1) the participation cost is independent of the value, and the value distribution has decreasing marginal revenue or monotone hazard rate; or (2) the participation cost is a concave function of the value. When there are multiple buyers, we show that sequential posted pricing guarantees a large fraction of the optimal revenue under similar conditions.
A prevalent assumption in auction theory is that the auctioneer has full control over the market and that the allocation she dictates is final. In practice, however, agents might be able to resell acquired items in an aftermarket. A prominent example is the market for carbon emission allowances. These allowances are commonly allocated by the government using uniform-price auctions, and firms can typically trade these allowances among themselves in an aftermarket that may not be fully under the auctioneer's control. While the uniform-price auction is approximately efficient in isolation, we show that speculation and resale in aftermarkets might result in a significant welfare loss. Motivated by this issue, we consider three approaches, each ensuring high equilibrium welfare in the combined market. The first approach is to adopt smooth auctions such as discriminatory auctions. This approach is robust to correlated valuations and to participants acquiring information about others' types. However, discriminatory auctions have several downsides, notably that of charging bidders different prices for identical items, resulting in fairness concerns that make the format unpopular. Two other approaches we suggest are either using posted-pricing mechanisms, or using uniform-price auctions with anonymous reserves. We show that when using balanced prices, both these approaches ensure high equilibrium welfare in the combined market. The latter also inherits many of the benefits from uniform-price auctions such as price discovery, and can be introduced with a minor modification to auctions currently in use to sell carbon emission allowances.
We survey the main results from [Dütting, Kesselheim, and Lucier 2020]: a simple posted-price mechanism for subadditive combinatorial auctions with m items that achieves an O(log log m) approximation to the optimal welfare, plus a variant with entry fees that approximates revenue. These are based on a novel subadditive prophet inequality.
Motivated by applications such as college admission and insurance rate determination, we propose an evaluation problem where the inputs are controlled by strategic individuals who can modify their features at a cost. A learner can only partially observe the features, and aims to classify individuals with respect to a quality score. The goal is to design an evaluation mechanism that maximizes the overall quality score, i.e., welfare, in the population, taking any strategic updating into account. We further study the algorithmic aspect of finding the welfare-maximizing evaluation mechanism under two specific settings in our model. When scores are linear and mechanisms use linear scoring rules on the observable features, we show that the optimal evaluation mechanism is an appropriate projection of the quality score. When mechanisms must use linear thresholds, we design a polynomial-time algorithm with a (1/4)-approximation guarantee when the underlying feature distribution is sufficiently smooth and admits an oracle for finding dense regions. We extend our results to settings where the prior distribution is unknown and must be learned from samples.
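For the linear setting, one natural reading of "projection" is least squares under the feature distribution: fit the best linear predictor of the quality score using only the observable coordinates. The sketch below shows that version only; the paper's appropriate projection also accounts for strategic feature updates, which this sketch omits.

```python
import numpy as np

def projected_scoring_rule(X, w, observable):
    """Project the quality score w.x onto the observable coordinates by least
    squares over feature samples X (one row per individual)."""
    y = X @ w                      # true quality scores
    Xs = X[:, observable]          # features the learner can see
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return beta                    # score individuals by beta . x_S

# Example: quality depends on 3 features, but only the first 2 are observable.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
print(projected_scoring_rule(X, np.array([1.0, 2.0, 0.5]), [0, 1]))
```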
We study black-box reductions from mechanism design to algorithm design for welfare maximization in settings of incomplete information. Given oracle access to an algorithm for an underlying optimization problem, the goal is to simulate an incentive compatible mechanism. The mechanism will be evaluated on its expected welfare, relative to the algorithm provided, and its complexity is measured by the time (and queries) needed to simulate the mechanism on any input. While it is known that black-box reductions are not possible in many prior-free settings, settings with priors appear more promising: there are known reductions for Bayesian incentive compatible (BIC) mechanism design for general classes of welfare maximization problems. This dichotomy raises the question: which mechanism design problems admit black-box reductions, and which do not? Our main result is that black-box mechanism design is impossible under two of the simplest settings not captured by known positive results. First, for the problem of allocating n goods to a single buyer whose valuation is additive and independent across the goods, subject to a downward-closed constraint on feasible allocations, we show that there is no polytime (in n) BIC black-box reduction for expected welfare maximization. Second, for the setting of multiple single-parameter agents—where polytime BIC reductions are known—we show that no polytime reductions exist when the incentive requirement is tightened to Max-In-Distributional-Range. In each case, we show that achieving a sub-polynomial approximation to the expected welfare requires exponentially many queries, even when the set of feasible allocations is known to be downward-closed.
We consider a robust version of the revenue maximization problem, where a single seller wishes to sell n items to a single unit-demand buyer. In this robust version, the seller knows the buyer's marginal value distribution for each item separately, but not the joint distribution, and prices the items to maximize revenue in the worst case over all compatible correlation structures. We devise a computationally efficient (polynomial in the support size of the marginals) algorithm that computes the worst-case joint distribution for any choice of item prices. And yet, in sharp contrast to the additive buyer case [Carroll, 2017], we show that it is NP-hard to approximate the optimal choice of prices to within any factor better than $n^{1/2-\epsilon}$. For the special case of marginal distributions that satisfy the monotone hazard rate property, we show how to guarantee a constant fraction of the optimal worst-case revenue using item pricing; this pricing equates revenue across all possible correlations and can be computed efficiently.
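For fixed prices, the worst-case joint distribution is the solution of a linear program over joint distributions with the given marginals, which is easy to state. The sketch below enumerates the full product support and so only scales to tiny instances; the paper's algorithm is polynomial in the support size of the marginals.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def worst_case_revenue(supports, marginals, prices):
    """Minimize expected revenue from a unit-demand buyer over all joint
    distributions consistent with the per-item marginals, at fixed prices."""
    profiles = list(itertools.product(*supports))

    def revenue(v):  # buyer takes the surplus-maximizing item, if any
        best = max(range(len(v)), key=lambda i: v[i] - prices[i])
        return prices[best] if v[best] - prices[best] >= 0 else 0.0

    c = np.array([revenue(v) for v in profiles])
    A_eq, b_eq = [], []
    for i, support in enumerate(supports):        # match every marginal
        for k, x in enumerate(support):
            A_eq.append([1.0 if v[i] == x else 0.0 for v in profiles])
            b_eq.append(marginals[i][k])
    return linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun

# Two items, each uniform on {1, 2}, priced at 1.5 and 2.0.
print(worst_case_revenue([(1, 2), (1, 2)], [(0.5, 0.5), (0.5, 0.5)], (1.5, 2.0)))
```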
We present a general framework for designing approximately revenue-optimal mechanisms for multi-item additive auctions. Our approach adapts the duality framework of Cai, Devanur and Weinberg (STOC 2016) and applies to both truthful and non-truthful auctions. Given a (not necessarily truthful) single-item auction format 'A' satisfying certain technical conditions, we run simultaneous item auctions augmented with a personalized entry fee for each bidder that must be paid before the auction can be accessed. These entry fees depend only on the prior distribution of bidder types, and in particular are independent of realized bids. We bound the revenue of the resulting two-part tariff mechanism using a novel geometric technique that enables revenue guarantees for many common non-truthful auctions that previously had none. Our framework can be used with many common auction formats, such as simultaneous first-price, simultaneous second-price, and simultaneous all-pay auctions. Our results for first-price and all-pay are the first revenue guarantees of non-truthful mechanisms in multi-dimensional environments, addressing an open question in the literature. If all-pay auctions are used, we prove that the resulting mechanism is also credible in the sense that the auctioneer cannot benefit by deviating from the stated mechanism after observing agent bids. This is the first static credible mechanism for multi-item additive auctions that achieves a constant factor of the optimal revenue. If second-price auctions are used, we obtain a truthful O(1)-approximate mechanism with fixed entry fees that are amenable to tuning via online learning techniques.
Motivated by applications such as college admission and insurance rate determination, we study a classification problem where the inputs are controlled by strategic individuals who can modify their features at a cost. A learner can only partially observe the features, and aims to classify individuals with respect to a quality score. The goal is to design a classification mechanism that maximizes the overall quality score in the population, taking any strategic updating into account. When scores are linear and mechanisms can assign their own scores to agents, we show that the optimal classifier is an appropriate projection of the quality score. For the more restrictive task of binary classification via linear thresholds, we construct a (1/4)-approximation to the optimal classifier when the underlying feature distribution is sufficiently smooth and admits an oracle for finding dense regions. We extend our results to settings where the prior distribution is unknown and must be learned from samples.
We consider a monopolist seller with n heterogeneous items, facing a single buyer. The buyer has a value for each item drawn independently according to (non-identical) distributions, and her value for a set of items is additive. The seller aims to maximize his revenue. We suggest using the a priori better of two simple pricing methods: selling the items separately, each at its optimal price, and bundling together, in which the entire set of items is sold as one bundle at its optimal price. We show that for any distribution, this mechanism achieves a constant-factor approximation to the optimal revenue. Beyond its simplicity, this is the first computationally tractable mechanism to obtain a constant-factor approximation for this multi-parameter problem. We additionally discuss extensions to multiple buyers and to valuations that are correlated across items.
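A sample-based sketch of the two candidate mechanisms (our illustration; the paper analyzes exact optimal prices): estimate the best posted price for each item separately and for the grand bundle from samples, then keep the better of the two revenues.

    import random

    def best_price_revenue(samples):
        # candidate prices: the observed values; revenue(p) = p * Pr[value >= p]
        n = len(samples)
        return max(p * sum(v >= p for v in samples) / n for p in set(samples))

    def better_of_srev_brev(value_samples):
        """value_samples: independent draws of the buyer's value vector."""
        m = len(value_samples[0])
        srev = sum(best_price_revenue([v[i] for v in value_samples]) for i in range(m))
        brev = best_price_revenue([sum(v) for v in value_samples])
        return max(srev, brev)

    # toy usage: two items, values uniform on [0,1] and [0,2]
    samples = [(random.random(), 2 * random.random()) for _ in range(5000)]
    print(better_of_srev_brev(samples))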
We study equilibria in two-buyer sequential second-price (or first-price) auctions for identical goods. Buyers have weakly decreasing incremental values, and we make a behavioural no-overbidding assumption: the buyers do not bid above their incremental values. Structurally, we show equilibria are intrinsically linked to a greedy bidding strategy. We then prove three results. First, any equilibrium consists of three phases: a competitive phase, a competition reduction phase and a monopsony phase. In particular, there is a time after which one buyer exhibits monopsonistic behaviours. Second, the declining price anomaly holds: prices weakly decrease over time at any equilibrium in the no-overbidding game, a fact previously known for equilibria with overbidding. Third, the price of anarchy of the sequential auction is exactly $1 - 1/e$.
In many practical applications, heuristic or approximation algorithms are used to efficiently solve the task at hand. However, their solutions frequently do not satisfy natural monotonicity properties of optimal solutions. In this work we develop algorithms that are able to restore monotonicity in the parameters of interest. Specifically, given oracle access to a (possibly non-monotone) multi-dimensional real-valued function $f$, we provide an algorithm that restores monotonicity while degrading the expected value of the function by at most $\varepsilon$. The number of queries required is at most logarithmic in $1/\varepsilon$ and exponential in the number of parameters. We also give a lower bound showing that this exponential dependence is necessary. Finally, we obtain improved query complexity bounds for restoring the weaker property of $k$-marginal monotonicity. Under this property, every $k$-dimensional projection of the function $f$ is required to be monotone. The query complexity we obtain only scales exponentially with $k$.
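For intuition, a naive dense-grid version of monotonization (not the paper's query-efficient algorithm) takes running maxima along each axis, producing the monotone closure $\hat{f}(x) = \max_{y \le x} f(y)$, the pointwise-smallest monotone function dominating $f$:

    import numpy as np

    def monotone_closure(grid_values):
        """Pointwise-smallest monotone function dominating f on a grid:
        a running maximum along each axis yields g[x] = max over y <= x of f[y]."""
        g = np.array(grid_values, dtype=float)
        for axis in range(g.ndim):
            g = np.maximum.accumulate(g, axis=axis)
        return g

    f = np.array([[0.0, 1.0], [1.0, 0.5]])  # a non-monotone dip at (1, 1)
    print(monotone_closure(f))              # [[0. 1.] [1. 1.]]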
We identify the first static credible mechanism for multi-item additive auctions that achieves a constant factor of the optimal revenue. This is one instance of a more general framework for designing two-part tariff auctions, adapting the duality framework of Cai et al. [CDW16]. Given a (not necessarily incentive compatible) auction format $A$ satisfying certain technical conditions, our framework augments the auction with a personalized entry fee for each bidder, which must be paid before the auction can be accessed. These entry fees depend only on the prior distribution of bidder types, and in particular are independent of realized bids. Our framework can be used with many common auction formats, such as simultaneous first-price, simultaneous second-price, and simultaneous all-pay auctions. If all-pay auctions are used, we prove that the resulting mechanism is credible in the sense that the auctioneer cannot benefit by deviating from the stated mechanism after observing agent bids. If second-price auctions are used, we obtain a truthful $O(1)$-approximate mechanism with fixed entry fees that are amenable to tuning via online learning techniques. Our results for first price and all-pay are the first revenue guarantees of non-truthful mechanisms in multi-dimensional environments, resolving an open question in the literature [RST17].
We present a general framework for stochastic online maximization problems with combinatorial feasibility constraints. The framework establishes prophet inequalities by constructing price-based online approximation algorithms, a natural extension of threshold algorithms for settings beyond binary selection. Our analysis takes the form of an extension theorem: we derive sufficient conditions on prices when all weights are known in advance, then prove that the resulting approximation guarantees extend directly to stochastic settings. Our framework unifies and simplifies much of the existing literature on prophet inequalities and posted price mechanisms and is used to derive new and improved results for combinatorial markets (with and without complements), multidimensional matroids, and sparse packing problems. Finally, we highlight a surprising connection between the smoothness framework for bounding the price of anarchy of mechanisms and our framework, and show that many smooth mechanisms can be recast as posted price mechanisms with comparable performance guarantees.
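The simplest instance of such price-based arguments is the single-item prophet inequality: posting the static price $p = \frac{1}{2}\mathbb{E}[\max_i X_i]$ and selling to the first arrival whose value exceeds $p$ guarantees at least half of the prophet's expected value. A quick Monte Carlo check (our sketch):

    import random

    def fixed_price_gambler(dists, trials=20000):
        """Ratio of the fixed-price rule (p = E[max]/2) to the prophet, by simulation."""
        est_max = sum(max(d() for d in dists) for _ in range(trials)) / trials
        p = est_max / 2
        gambler, prophet = 0.0, 0.0
        for _ in range(trials):
            xs = [d() for d in dists]
            prophet += max(xs)
            gambler += next((x for x in xs if x >= p), 0.0)  # first value clearing the price
        return gambler / prophet  # roughly >= 0.5, up to sampling noise

    print(fixed_price_gambler([random.random,
                               lambda: 2 * random.random(),
                               lambda: random.expovariate(1.0)]))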
We initiate the study of efficient mechanism design with guaranteed good properties even when players participate in multiple mechanisms simultaneously or sequentially. We define the class of smooth mechanisms, related to smooth games defined by Roughgarden, that can be thought of as mechanisms that generate approximately market clearing prices. We show that smooth mechanisms result in high-quality outcomes both at equilibrium and under learning dynamics in the full information setting, as well as at Bayesian equilibrium with uncertainty about participants. Our main result is that smooth mechanisms compose well: smoothness locally at each mechanism implies global efficiency.
We consider a general class of Bayesian Games where each player's utility depends on his type (possibly multidimensional) and on the strategy profile, and where players' types are distributed independently. We show that if the full information version of such a game is smooth for every fixed type profile, then the Price of Anarchy bound implied by the smoothness property carries over to the Bayes-Nash Price of Anarchy. We show how some proofs from the literature (item bidding auctions, greedy auctions) can be cast as smoothness proofs or be simplified using smoothness. For first price item bidding with fractionally subadditive bidders we substantially improve the existing bound \cite{Hassidim2011a} from 4 to $\frac{e}{e-1}\approx 1.58$. This also shows an interesting separation between first and second price item bidding, since second price item bidding has PoA at least 2 even under complete information. For a larger class of Bayesian Games where the strategy space of a player also changes with his type, we show that a slightly stronger definition of smoothness also implies a Bayes-Nash PoA bound. We show that weighted congestion games satisfy this stronger definition of smoothness. This allows us to show that the inefficiency bounds of weighted congestion games known in the literature carry over to incomplete-information versions where the weights of the players are private information. We also show how an incomplete-information version of a natural class of monotone valid utility games, called effort market games, is universally $(1,1)$-smooth. Hence, incomplete-information versions of effort market games where the abilities of the players and their budgets are private information have Bayes-Nash PoA at most 2.
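For reference, the smoothness template at work here, stated in our notation for welfare-maximization games: a game is $(\lambda,\mu)$-smooth if for every type profile there exists a strategy profile $a^*$ such that, for every strategy profile $a$, \[ \sum_i u_i(a_i^*, a_{-i}) \;\geq\; \lambda\, W(a^*) - \mu\, W(a), \] where $W$ denotes social welfare (the sum of utilities). At any Nash equilibrium $a$ we have $W(a) = \sum_i u_i(a) \geq \sum_i u_i(a_i^*, a_{-i}) \geq \lambda W(a^*) - \mu W(a)$, so every equilibrium achieves at least a $\frac{\lambda}{1+\mu}$ fraction of the optimal welfare; the result above extends the same bound to Bayes-Nash equilibria of the incomplete-information game.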
Simultaneous item auctions are simple and practical procedures for allocating items to bidders with potentially complex preferences. In a simultaneous auction, every bidder submits independent bids on all items simultaneously. The allocation and prices are then resolved for each item separately, based solely on the bids submitted on that item. We study the efficiency of Bayes-Nash equilibrium (BNE) outcomes of simultaneous first- and second-price auctions when bidders have complement-free (a.k.a. subadditive) valuations. While it is known that the social welfare of every pure Nash equilibrium (NE) constitutes a constant fraction of the optimal social welfare, a pure NE rarely exists, and moreover, the full information assumption is often unrealistic. Therefore, quantifying the welfare loss in Bayes-Nash equilibria is of particular interest. Previous work established a logarithmic bound on the ratio between the social welfare of a BNE and the expected optimal social welfare in both first-price auctions (Hassidim et al., 2011) and second-price auctions (Bhawalkar and Roughgarden, 2011), leaving a large gap between a constant and a logarithmic ratio. We introduce a new proof technique and use it to resolve both of these gaps in a unified way. Specifically, we show that the expected social welfare of any BNE is at least 1/2 of the optimal social welfare in the case of first-price auctions, and at least 1/4 in the case of second-price auctions.
Algorithmic pricing is the computational problem that sellers (e.g., in supermarkets) face when trying to set prices for their items to maximize their profit in the presence of a known demand. Guruswami et al. (SODA, 2005) proposed this problem and gave logarithmic approximations (in the number of consumers) for the unit-demand and single-parameter cases where there is a specific set of consumers and their valuations for bundles are known precisely. Subsequently several versions of the problem have been shown to have poly-logarithmic inapproximability. This problem has direct ties to the important open question of better understanding the Bayesian optimal mechanism in multi-parameter agent settings; however, for this purpose approximation factors logarithmic in the number of agents are inadequate. It is therefore of vital interest to consider special cases where constant approximations are possible. We consider the unit-demand variant of this pricing problem. Here a consumer has a valuation for each different item and their value for a set of items is simply the maximum value they have for any item in the set. Instead of considering a set of consumers with precisely known preferences, like the prior algorithmic pricing literature, we assume that the preferences of the consumers are drawn from a distribution. This is the standard assumption in economics; furthermore, the setting of a specific set of customers with specific preferences, which is employed in all of the prior work in algorithmic pricing, is a special case of this general Bayesian pricing problem, where there is a discrete Bayesian distribution for preferences specified by picking one consumer uniformly from the given set of consumers. Notice that the distribution over the valuations for the individual items that this generates is obviously correlated. Our work complements these existing works by considering the case where the consumer's valuations for the different items are independent random variables. Our main result is a constant approximation algorithm for this problem that makes use of an interesting connection between this problem and the concept of virtual valuations from the single-parameter Bayesian optimal mechanism design literature.
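Concretely, the virtual valuation of a bidder whose value $v$ is drawn from a distribution with cdf $F$ and density $f$ is \[ \varphi(v) = v - \frac{1 - F(v)}{f(v)}, \] and by Myerson's lemma the expected revenue of any truthful mechanism equals its expected virtual surplus. As a worked example (ours): for $v \sim U[0,1]$, $\varphi(v) = 2v - 1$, so the revenue-optimal posted price is $\varphi^{-1}(0) = 1/2$.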
Consider a gambler who observes a sequence of independent, non-negative random numbers and is allowed to stop the sequence at any time, claiming a reward equal to the most recent observation. The famous prophet inequality of Krengel, Sucheston, and Garling asserts that a gambler who knows the distribution of each random variable can achieve at least half as much reward, in expectation, as a "prophet" who knows the sampled values of each random variable and can choose the largest one. We generalize this result to the setting in which the gambler and the prophet are allowed to make more than one selection, subject to a matroid constraint. We show that the gambler can still achieve at least half as much reward as the prophet; this result is the best possible, since it is known that the ratio cannot be improved even in the original prophet inequality, which corresponds to the special case of rank-one matroids. Generalizing the result still further, we show that under an intersection of $p$ matroid constraints, the prophet's reward exceeds the gambler's by a factor of at most $O(p)$, and this factor is also tight.
Let $X_i \geq 0$ be independent, $i = 1, \ldots, n$, and $X^\ast_n = \max(X_1, \ldots, X_n)$. Let $t(c)$ (resp. $s(c)$) be the threshold stopping rule for $X_1, \ldots, X_n$ defined by $t(c) = \min\{i : X_i \geq c\}$ (resp. $s(c) = \min\{i : X_i > c\}$), and $= n$ if no such $i$ exists. Let $m$ be a median of the distribution of $X^\ast_n$. It is shown that for every $n$ and $\underline{X}$, either $E X^\ast_n \leq 2 E X_{t(m)}$ or $E X^\ast_n \leq 2 E X_{s(m)}$. This improves previously known results, [1], [4]. Some results for i.i.d. $X_i$ are also included.
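A quick numerical sanity check of this median rule (our sketch; with continuous distributions the $t(m)$ and $s(m)$ variants coincide almost surely):

    import random, statistics

    def check_median_rule(dists, trials=20000):
        maxima = [max(d() for d in dists) for _ in range(trials)]
        m = statistics.median(maxima)
        stop_sum = 0.0
        for _ in range(trials):
            xs = [d() for d in dists]
            # t(m): stop at the first X_i >= m, otherwise take X_n
            stop_sum += next((x for x in xs if x >= m), xs[-1])
        e_max = sum(maxima) / trials
        return e_max, 2 * stop_sum / trials  # theorem: first <= second (up to noise)

    print(check_median_rule([random.random, random.random,
                             lambda: random.expovariate(1.0)]))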
We provide simple and approximately revenue-optimal mechanisms in the multi-item multi-bidder settings. We unify and improve all previous results, as well as generalize the results to broader cases. In particular, we prove that the better of the following two simple, deterministic and Dominant Strategy Incentive Compatible mechanisms, a sequential posted price mechanism or an anonymous sequential posted price mechanism with entry fee, achieves a constant fraction of the optimal revenue among all randomized, Bayesian Incentive Compatible mechanisms, when buyers' valuations are XOS over independent items. If the buyers' valuations are subadditive over independent items, the approximation factor degrades to $O(\log m)$, where $m$ is the number of items. We obtain our results by first extending the Cai-Devanur-Weinberg duality framework to derive an effective benchmark of the optimal revenue for subadditive bidders, and then analyzing this upper bound with new techniques.
We provide a reduction from revenue maximization to welfare maximization in multidimensional Bayesian auctions with arbitrary (possibly combinatorial) feasibility constraints and independent bidders with arbitrary (possibly combinatorial) demand constraints, appropriately extending Myerson's single-dimensional result [21] to this setting. We also show that every feasible Bayesian auction, including in particular the revenue-optimal one, can be implemented as a distribution over virtual VCG allocation rules. A virtual VCG allocation rule has the following simple form: every bidder's type $t_i$ is transformed into a virtual type $f_i(t_i)$, via a bidder-specific function. Then, the allocation maximizing virtual welfare is chosen. Using this characterization, we show how to find and run the revenue-optimal auction given only black-box access to an implementation of the VCG allocation rule. We generalize this result to arbitrarily correlated bidders, introducing the notion of a second-order VCG allocation rule. Our results are computationally efficient for all multidimensional settings where the bidders are additive, or can be efficiently mapped to be additive, albeit the feasibility and demand constraints may still remain arbitrary combinatorial. In this case, our mechanisms run in time polynomial in the number of items and the total number of bidder types, but not type profiles: that is, polynomial in the number of items, the number of bidders, and the cardinality of the support of each bidder's value distribution. For generic correlated distributions, this is the natural description complexity of the problem. The runtime can be further improved to polynomial in only the number of items and the number of bidders in item-symmetric settings by making use of techniques from [15].
In many natural settings agents participate in multiple different auctions that are not simultaneous. In such auctions, future opportunities affect strategic considerations of the players. The goal of this paper is to develop a quantitative understanding of outcomes of such sequential auctions. In earlier work (Paes Leme et al. 2012) we initiated the study of the price of anarchy in sequential auctions. We considered sequential first price auctions in the full information model, where players are aware of all future opportunities, as well as the valuation of all players. In this paper, we study efficiency in sequential auctions in the Bayesian environment, relaxing the informational assumption on the players. We focus on two environments, both studied in the full information model in Paes Leme et al. 2012, matching markets and matroid auctions. In the full information environment, a sequential first price cut auction for matroid settings is efficient. In Bayesian environments this is no longer the case, as we show using a simple example with three players. Our main result is a bound of 3 on the price of anarchy in both matroid auctions and matching markets. To bound the price of anarchy we need to consider possible deviations at an equilibrium. In a sequential Bayesian environment the effect of deviations is more complex than in one-shot games; early bids allow others to infer information about the player's value. We create effective deviations despite the presence of this difficulty by introducing a bluffing technique of independent interest.
The principal problem in algorithmic mechanism design is in merging the incentive constraints imposed by selfish behavior with the algorithmic constraints imposed by computational intractability. This field is motivated by the observation that the preeminent approach for designing incentive compatible mechanisms, namely that of Vickrey, Clarke, and Groves, and the central approach for circumventing computational obstacles, that of approximation algorithms, are fundamentally incompatible: natural applications of the VCG approach to an approximation algorithm fail to yield an incentive compatible mechanism. We consider relaxing the desideratum of (ex post) incentive compatibility (IC) to Bayesian incentive compatibility (BIC), where truthtelling is a Bayes-Nash equilibrium (the standard notion of incentive compatibility in economics). For welfare maximization in single-parameter agent settings, we give a general black-box reduction that turns any approximation algorithm into a Bayesian incentive compatible mechanism with essentially the same approximation factor.
We investigate the power of randomness in the context of a fundamental Bayesian optimal mechanism design problem: a single seller aims to maximize expected revenue by allocating multiple kinds of resources to "unit-demand" agents with preferences drawn from a known distribution. When the agents' preferences are single-dimensional, Myerson's seminal work [14] shows that randomness offers no benefit: the optimal mechanism is always deterministic. In the multi-dimensional case, where each agent's preferences are given by different values for each of the available services, Briest et al. [6] recently showed that the gap between the expected revenue obtained by an optimal randomized mechanism and an optimal deterministic mechanism can be unbounded even when a single agent is offered only 4 services. However, this large gap is attained through unnatural instances where values of the agent for different services are correlated in a specific way. We show that when the agent's values involve no correlation or a specific kind of positive correlation, the benefit of randomness is only a small constant factor (4 and 8 respectively). Our model of positively correlated values (that we call the common base value model) is a natural model for unit-demand agents and items that are substitutes. Our results extend to multiple agent settings as well.
In the design and analysis of revenue-maximizing auctions, auction performance is typically measured with respect to a prior distribution over inputs. The most obvious source for such a distribution is past data. The goal of this paper is to understand how much data is necessary and sufficient to guarantee near-optimal expected revenue.
We present a general framework for approximately reducing the mechanism design problem for multiple agents to single agent subproblems in the context of Bayesian combinatorial auctions. Our framework can be applied to any setting which roughly satisfies the following assumptions: (i) agents' types are distributed independently (not necessarily identically), (ii) objective function is additively separable over the agents, and (iii) there are no interagent constraints except for the supply constraints (i.e., that the total allocation of each item should not exceed the supply). Our framework is general in the sense that it makes no direct assumption about agents' valuations, type distributions, or single agent constraints (e.g., budget, incentive compatibility, etc.). We present two generic multiagent mechanisms which use single agent mechanisms as black boxes. If an $\alpha$-approximate single agent mechanism is available for each agent, and assuming no agent ever demands more than $\frac{1}{k}$ of all units of each item, our generic multiagent mechanisms are $\gamma_{k}\alpha$-approximations of the optimal multiagent mechanism, where $\gamma_{k}$ is a constant which is at least $1-\frac{1}{\sqrt{k+3}}$. As a byproduct of our construction, we present a generalization of prophet inequalities where both gambler and prophet are allowed to pick $k$ numbers each to receive a reward equal to their sum. Finally, we use our framework to obtain multiagent mechanisms with improved approximation factor for several settings from the literature.
For Bayesian combinatorial auctions, we present a general framework for approximately reducing the mechanism design problem for multiple buyers to the mechanism design problem for each individual buyer. Our framework can be applied to any setting which roughly satisfies the following assumptions: (i) the buyers' types must be distributed independently (not necessarily identically), (ii) the objective function must be linearly separable over the set of buyers, and (iii) the supply constraints must be the only constraints involving more than one buyer. Our framework is general in the sense that it makes no explicit assumption about any of the following: (i) the buyers' valuations (e.g., submodular, additive, etc.), (ii) the distribution of types for each buyer, and (iii) the other constraints involving individual buyers (e.g., budget constraints, etc.). We present two generic $n$-buyer mechanisms that use $1$-buyer mechanisms as black boxes. Assuming that we have an $\alpha$-approximate $1$-buyer mechanism for each buyer and assuming that no buyer ever needs more than $1/k$ of all copies of each item for some integer $k \geq 1$, then our generic $n$-buyer mechanisms are $\gamma_k \cdot \alpha$-approximations of the optimal $n$-buyer mechanism, in which $\gamma_k$ is a constant which is at least $1 - 1/\sqrt{k+3}$. Observe that $\gamma_k$ is at least $1/2$ (for $k = 1$) and approaches $1$ as $k$ increases. As a byproduct of our construction, we improve a generalization of prophet inequalities. Furthermore, as applications of our main theorem, we improve several results from the literature.
We show that every universally truthful randomized mechanism for combinatorial auctions with submodular valuations that provides an approximation ratio of $m^{1/2-\epsilon}$ must use exponentially many value queries, where $m$ is the number of items. In contrast, ignoring incentives, there exist constant ratio approximation algorithms for this problem. Our approach is based on a novel direct hardness technique that completely skips the notoriously hard step of characterizing truthful mechanisms. The characterization step was the main obstacle for proving impossibility results in algorithmic mechanism design so far. We demonstrate two additional applications of our new technique: (1) an impossibility result for universally-truthful polynomial time flexible combinatorial public projects and (2) an impossibility result for truthful-in-expectation mechanisms for exact combinatorial public projects. The latter is the first result that bounds the power of polynomial-time truthful in expectation mechanisms in any setting.
We study the revenue maximization problem of a seller with n heterogeneous items for sale to a single buyer whose valuation function for sets of items is unknown and drawn from some distribution $D$. We show that if $D$ is a distribution over subadditive valuations with independent items, then the better of pricing each item separately or pricing only the grand bundle achieves a constant-factor approximation to the revenue of the optimal mechanism. This includes buyers who are $k$-demand, additive up to a matroid constraint, or additive up to constraints of any downward-closed set system (and whose values for the individual items are sampled independently), as well as buyers who are fractionally subadditive with item multipliers drawn independently. Our proof makes use of the core-tail decomposition framework developed in prior work showing similar results for the significantly simpler class of additive buyers. In the second part of the article, we develop a connection between approximately optimal simple mechanisms and approximate revenue monotonicity with respect to buyers' valuations. Revenue non-monotonicity is the phenomenon that sometimes strictly increasing buyers' values for every set can strictly decrease the revenue of the optimal mechanism. Using our main result, we derive a bound on how bad this degradation can be (and dub such a bound a proof of approximate revenue monotonicity); we further show that better bounds on approximate monotonicity imply a better analysis of our simple mechanisms.
In many settings agents participate in multiple different auctions that are not necessarily implemented simultaneously. Future opportunities affect strategic considerations of the players in each auction, introducing externalities. Motivated by this consideration, we study a setting of a market of buyers and sellers, where each seller holds one item, bidders have combinatorial valuations, and sellers hold item auctions sequentially. Our results are qualitatively different from those of simultaneous auctions, proving that simultaneity is a crucial aspect of previous work. We prove that if sellers hold sequential first price auctions then for unit-demand bidders (matching market) every subgame perfect equilibrium achieves at least half of the optimal social welfare, while for submodular bidders or when second price auctions are used, the social welfare can be arbitrarily worse than the optimal. We also show that a first price sequential auction for buying or selling a base of a matroid is always efficient, and implements the VCG outcome. An important tool in our analysis is studying first and second price auctions with externalities (bidders have valuations for each possible winner outcome), which can be of independent interest. We show that a Pure Nash Equilibrium always exists in a first price auction with externalities.
We consider a monopolist seller with n heterogeneous items, facing a single buyer. The buyer has a value for each item drawn independently according to (non-identical) distributions, and his value for a set of items is additive. The seller aims to maximize his revenue. It is known that an optimal mechanism in this setting may be quite complex, requiring randomization [19] and menus of infinite size [15]. Hart and Nisan [17] have initiated a study of two very simple pricing schemes for this setting: item pricing, in which each item is priced at its monopoly reserve; and bundle pricing, in which the entire set of items is priced and sold as one bundle. Hart and Nisan [17] have shown that neither scheme can guarantee more than a vanishingly small fraction of the optimal revenue. In sharp contrast, we show that for any distributions, the better of item and bundle pricing is a constant-factor approximation to the optimal revenue. We further discuss extensions to multiple buyers and to valuations that are correlated across items.
Machine learning relies on the assumption that unseen test instances of a classification problem follow the same distribution as observed training data. However, this principle can break down when machine learning is used to make important decisions about the welfare (employment, education, health) of strategic individuals. Knowing information about the classifier, such individuals may manipulate their attributes in order to obtain a better classification outcome. As a result of this behavior -- often referred to as gaming -- the performance of the classifier may deteriorate sharply. Indeed, gaming is a well-known obstacle for using machine learning methods in practice; in financial policy-making, the problem is widely known as Goodhart's law. In this paper, we formalize the problem, and pursue algorithms for learning classifiers that are robust to gaming.
We study generalizations of the ``Prophet Inequality'' and ``Secretary Problem'', where the algorithm is restricted to an arbitrary downward-closed set system. For 0/1 values, we give $O(\log n)$-competitive algorithms for both problems. This is close to the $\Omega(\log n / \log\log n)$ lower bound due to Babaioff, Immorlica, and Kleinberg. For general values, our results translate to $O(\log n \cdot \log r)$-competitive algorithms, where $r$ is the cardinality of the largest feasible set. This resolves (up to the $O(\log\log n \cdot \log r)$ factor) an open question posed to us by Bobby Kleinberg.
The intuition that profit is optimized by maximizing marginal revenue is a guiding principle in microeconomics. In the classical auction theory for agents with quasi-linear utility and single-dimensional preferences, BR89 show that the optimal auction of M81 is in fact optimizing marginal revenue. In particular, Myerson's virtual values are exactly the derivative of an appropriate revenue curve. This paper considers mechanism design in environments where the agents have multi-dimensional and non-linear preferences. Understanding good auctions for these environments is considered to be the main challenge in Bayesian optimal mechanism design. In these environments maximizing marginal revenue may not be optimal, and furthermore, there is sometimes no direct way to implement the marginal revenue maximization mechanism. Our contributions are threefold: we characterize the settings for which marginal revenue maximization is optimal (by identifying an important condition that we call revenue linearity), we give simple procedures for implementing marginal revenue maximization in general, and we show that marginal revenue maximization is approximately optimal. Our approximation factor smoothly degrades in a term that quantifies how far the environment is from an ideal one (i.e., where marginal revenue maximization is optimal). Because the marginal revenue mechanism is optimal for well-studied single-dimensional agents, our generalization immediately extends many approximation results for single-dimensional agents to more general preferences. Finally, one of the biggest open questions in Bayesian algorithmic mechanism design is in developing methodologies that are not brute-force in the size of the agent type space (usually exponential in the dimension for multi-dimensional agents). Our methods identify a subproblem that, e.g., for unit-demand agents with values drawn from product distributions, enables approximation mechanisms that are polynomial in the dimension.
For Bayesian combinatorial auctions, we present a general framework for approximately reducing the mechanism design problem for multiple buyers to single-buyer sub-problems. Our framework can be applied to any setting which roughly satisfies the following assumptions: (i) buyers' types must be distributed independently (not necessarily identically), (ii) the objective function must be linearly separable over the buyers, and (iii) except for the supply constraints, there should be no other inter-buyer constraints. Our framework is general in the sense that it makes no explicit assumption about buyers' valuations, type distributions, and single-buyer constraints (e.g., budget, incentive compatibility, etc.).
We present two generic multi-buyer mechanisms which use single-buyer mechanisms as black boxes; if an $\alpha$-approximate single-buyer mechanism can be constructed for each buyer, and if no buyer requires more than $\frac{1}{k}$ of all units of each item, then our generic multi-buyer mechanisms are $\gamma_k\alpha$-approximations of the optimal multi-buyer mechanism, where $\gamma_k$ is a constant which is at least $1-\frac{1}{\sqrt{k+3}}$. Observe that $\gamma_k$ is at least $1/2$ (for $k=1$) and approaches $1$ as $k \to \infty$. As a byproduct of our construction, we present a generalization of prophet inequalities. Furthermore, as applications of our framework, we present multi-buyer mechanisms with improved approximation factor for several settings from the literature.
In this paper, we present the first approximation algorithms for the problem of designing revenue-optimal Bayesian incentive compatible auctions when there are multiple (heterogeneous) items and when bidders have arbitrary demand and budget constraints (and additive valuations). Our mechanisms are surprisingly simple: we show that a sequential all-pay mechanism is a 4-approximation to the revenue of the optimal ex-interim truthful mechanism with a discrete type space for each bidder, where her valuations for different items can be correlated. We also show that a sequential posted price mechanism is an $O(1)$-approximation to the revenue of the optimal ex-post truthful mechanism when the type space of each bidder is a product distribution that satisfies the standard hazard rate condition. We further show a logarithmic approximation when the hazard rate condition is removed, and complete the picture by showing that achieving a sub-logarithmic approximation, even for regular distributions and one bidder, requires pricing bundles of items. Our results are based on formulating novel LP relaxations for these problems, and developing generic rounding schemes from first principles.
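Schematically, the outer loop of a sequential posted price mechanism is very simple (a sketch only; price_of and demand_of are hypothetical hooks, and all of the paper's work lies in choosing the prices via LP relaxation and rounding):

    def sequential_posted_price(bidders, items, price_of, demand_of):
        """Visit bidders in a fixed order; each takes her utility-maximizing,
        budget-feasible bundle of the remaining items at the posted prices.
        price_of(bidder, item) and demand_of(bidder, remaining, price_of)
        are hypothetical hooks supplied by the pricing rule."""
        remaining, revenue = set(items), 0.0
        for bidder in bidders:
            bundle = demand_of(bidder, frozenset(remaining), price_of)
            remaining -= set(bundle)
            revenue += sum(price_of(bidder, item) for item in bundle)
        return revenue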
The optimal allocation of resources in complex environments—like allocation of dynamic wireless spectrum, cloud computing services, and Internet advertising—is computationally challenging even given the true preferences of the participants. In the theory and practice of optimization in complex environments, a wide variety of special and general purpose algorithms have been developed; these algorithms produce outcomes that are satisfactory but not generally optimal or incentive compatible. This paper develops a very simple approach for converting any, potentially non-optimal, algorithm for optimization given the true participant preferences, into a Bayesian incentive compatible mechanism that weakly improves social welfare and revenue. (JEL D82, H82, L82)
We present an incentive-compatible polynomial-time approximation scheme for multi-unit auctions with general k-minded player valuations. The mechanism fully optimizes over an appropriately chosen sub-range of possible allocations and then uses VCG payments over this sub-range. We show that obtaining a fully polynomial-time incentive-compatible approximation scheme, at least using VCG payments, is NP-hard. For the case of valuations given by black boxes, we give a polynomial-time incentive-compatible 2-approximation mechanism and show that no better is possible, at least using VCG payments.
We study simple and approximately optimal auctions for agents with a particular form of risk-averse preferences. We show that, for symmetric agents, the optimal revenue (given a prior distribution over the agent preferences) can be approximated by the first-price auction (which is prior independent), and, for asymmetric agents, the optimal revenue can be approximated by an auction with simple form. These results are based on two technical methods. The first is for upper-bounding the revenue from a risk-averse agent. The second gives a payment identity for mechanisms with pay-your-bid semantics.
We study dynamic matching in an infinite-horizon stochastic market. While all agents are potentially compatible with each other, some are hard-to-match and others are easy-to-match. Agents prefer to be matched as soon as possible and matches are formed either bilaterally or indirectly through chains. We adopt an asymptotic approach and compute tight bounds on the limit of waiting time of agents under myopic policies that differ in matching technology and prioritization. We find that the market composition is a key factor in the desired matching technology and prioritization level. When hard-to-match agents arrive less frequently than easy-to-match ones, (i) bilateral matching is almost as efficient as chains (waiting times scale similarly under both, though chains always outperform bilateral matching by a constant factor), and (ii) assigning priorities to hard-to-match agents improves their waiting times. When hard-to-match agents arrive more frequently, chains are much more efficient than bilateral matching and prioritization has no impact. We further conduct comparative statics on arrival rates. Somewhat surprisingly, we find that in a heterogeneous market and under bilateral matching, increasing the arrival rate has a nonmonotone effect on waiting times, due to the fact that, under some market compositions, there is an adverse effect of competition. Our comparative statics shed light on the impact of merging markets and attracting altruistic agents (that initiate chains) or easy-to-match agents. This work uncovers fundamental differences between heterogeneous and homogeneous dynamic markets, and potentially helps policy makers to generate insights on the operations of matching markets such as kidney exchange programs.
We study a combinatorial market design problem, where a collection of indivisible objects is to be priced and sold to potential buyers subject to equilibrium constraints. The classic solution concept for such problems is Walrasian Equilibrium (WE), which provides a simple and transparent pricing structure that achieves optimal social welfare. The main weakness of the WE notion is that it exists only in very restrictive cases. To overcome this limitation, we introduce the notion of a Combinatorial Walrasian equilibrium (CWE), a natural relaxation of WE. The difference between a CWE and a (non-combinatorial) WE is that the seller can package the items into indivisible bundles prior to sale, and the market does not necessarily clear.
We show that every valuation profile admits a CWE that obtains at least half of the optimal (unconstrained) social welfare. Moreover, we devise a poly-time algorithm that, given an arbitrary allocation X, computes a CWE that achieves at least half of the welfare of X. Thus, the economic problem of finding a CWE with high social welfare reduces to the algorithmic problem of social-welfare approximation. In addition, we show that every valuation profile admits a CWE that extracts a logarithmic fraction of the optimal welfare as revenue. Finally, these results are complemented by strong lower bounds when the seller is restricted to using item prices only, which motivates the use of bundles. The strength of our results derives partly from their generality - our results hold for arbitrary valuations that may exhibit complex combinations of substitutes and complements.
We present pricing mechanisms for several online resource allocation problems which obtain tight or nearly tight approximations to social welfare. In our settings, buyers arrive online and purchase bundles of items; buyers' values for the bundles are drawn from known distributions. This problem is closely related to the so-called prophet inequality of Krengel and Sucheston [23] and its extensions in recent literature. Motivated by applications to cloud economics, we consider two kinds of buyer preferences. In the first, items correspond to different units of time at which a resource is available; the items are arranged in a total order and buyers desire intervals of items. The second corresponds to bandwidth allocation over a tree network; the items are edges in the network and buyers desire paths. Because buyers' preferences have complementarities in the settings we consider, recent constant-factor approximations via item prices do not apply, and indeed strong negative results are known. We develop static, anonymous bundle pricing mechanisms. For the interval preferences setting, we show that static, anonymous bundle pricings achieve a sublogarithmic competitive ratio, which is optimal (within constant factors) over the class of all online allocation algorithms, truthful or not. For the path preferences setting, we obtain a nearly-tight logarithmic competitive ratio. Both of these results exhibit an exponential improvement over item pricings for these settings. Our results extend to settings where the seller has multiple copies of each item, with the competitive ratio decreasing linearly with supply. Such a gradual tradeoff between supply and the competitive ratio for welfare was previously known only for the single item prophet inequality.
Hill and Kertz studied the prophet inequality for i.i.d. distributions [The Annals of Probability 1982]. They proved a bound of $1 - 1/e$ on the approximation factor of their algorithm. They conjectured that the best approximation factor for arbitrarily large $n$ is $\frac{1}{1+1/e} \approx 0.731$. This conjecture remained open for over 30 years prior to this paper. In this paper we present a threshold-based algorithm for the prophet inequality with $n$ i.i.d. distributions. Using a nontrivial and novel approach, we show that our algorithm is a 0.738-approximation algorithm. By beating the bound of $\frac{1}{1+1/e}$, this refutes the conjecture of Hill and Kertz. Moreover, we generalize our results to non-uniform distributions and discuss applications in mechanism design.
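To see what a threshold schedule looks like in simulation, here is a small experiment (ours; the two schedules below are illustrative and are not the schedule achieving 0.738):

    import random

    def schedule_ratio(thresholds, trials=20000):
        """Empirical ratio of a threshold schedule to the prophet, i.i.d. U[0,1] values."""
        n = len(thresholds)
        got, best = 0.0, 0.0
        for _ in range(trials):
            xs = [random.random() for _ in range(n)]
            best += max(xs)
            got += next((x for x, t in zip(xs, thresholds) if x >= t), xs[-1])
        return got / best

    n = 50
    median_rule = [0.5 ** (1 / n)] * n                         # one static threshold: median of the maximum
    decreasing = [(1 - (i + 1) / n) ** 0.5 for i in range(n)]  # an arbitrary decreasing schedule
    print(schedule_ratio(median_rule), schedule_ratio(decreasing))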
We study an online linear classification problem in which the data is generated by strategic agents who manipulate their features in an effort to change the classification outcome. In rounds, the learner deploys a classifier, then an adversarially chosen agent arrives and possibly manipulates her features to optimally respond to the learner's choice of classifier. The learner has no knowledge of the agents' utility functions or "real" features, which may vary widely across agents. Instead, the learner is only able to observe their "revealed preferences", i.e., the manipulated feature vectors they provide. For a broad family of agent cost functions, we give a computationally efficient learning algorithm that is able to obtain diminishing "Stackelberg regret" --- a form of policy regret that guarantees that the learner is realizing loss nearly as small as that of the best classifier in hindsight, even allowing for the fact that agents would have best-responded differently to the optimal classifier.
Traditionally, the Bayesian optimal auction design problem has been considered either when the bidder values are i.i.d., or when each bidder is individually identifiable via her value distribution. The latter is a reasonable approach when the bidders can be classified into a few categories, but there are many instances where the classification of bidders is a continuum. For example, the classification of the bidders may be based on their annual income, their propensity to buy an item based on past behavior, or in the case of ad auctions, the click-through rate of their ads. We introduce an alternate model that captures this aspect, where bidders are a priori identical, but can be distinguished based (only) on some side information the auctioneer obtains at the time of the auction. We extend the sample complexity approach of Dhangwatnotai et al. and Cole and Roughgarden to this model and obtain almost matching upper and lower bounds. As an aside, we obtain a revenue monotonicity lemma which may be of independent interest. We also show how to use Empirical Risk Minimization techniques to improve the sample complexity bound of Cole and Roughgarden for the non-identical but independent value distribution case.
We consider the problem of maximizing revenue for a monopolist offering multiple items to multiple heterogeneous buyers. We develop a simple mechanism that obtains a constant-factor approximation under the assumption that the buyers' values are additive subject to a matroid feasibility constraint and independent across items. Importantly, different buyers in our setting can have different constraints on the sets of items they desire. Our mechanism is a sequential variant of two-part tariffs. Prior to our work, simple approximation mechanisms for such multi-buyer problems were known only for the special cases of all unit-demand or all additive value buyers.
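The following is a minimal sketch of how a sequential two-part tariff operates: each arriving buyer may pay an entry fee and then purchase any remaining items at posted prices. The concrete fees, prices, and additive buyer behavior are illustrative assumptions, not the paper's construction (which handles matroid-constrained buyers and carefully chosen fees).

```python
def sequential_two_part_tariff(buyers, entry_fees, item_prices):
    """Buyers arrive in order; buyer i may pay entry_fees[i] and then buy
    any remaining items at the posted item_prices. Each buyer is a function
    mapping (remaining items, prices) to its chosen bundle and that bundle's
    value; a buyer participates only if her utility is nonnegative."""
    remaining = set(item_prices)
    revenue = 0.0
    for buyer, fee in zip(buyers, entry_fees):
        bundle, value = buyer(remaining, item_prices)
        price = sum(item_prices[j] for j in bundle)
        if bundle and value - price - fee >= 0:
            revenue += fee + price
            remaining -= bundle
    return revenue

def additive_buyer(values):
    """Simplest case: an additive buyer takes every remaining item whose
    value exceeds its posted price."""
    def demand(remaining, prices):
        bundle = {j for j in remaining if values.get(j, 0.0) > prices[j]}
        return bundle, sum(values[j] for j in bundle)
    return demand

buyers = [additive_buyer({"a": 2.0, "b": 1.0}), additive_buyer({"b": 3.0})]
print(sequential_two_part_tariff(buyers, [0.5, 0.5], {"a": 1.0, "b": 1.5}))
```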
Complements between goods -- where one good takes on added value in the presence of another -- have been a thorn in the side of algorithmic mechanism designers. On the one hand, complements are common in the standard motivating applications for combinatorial auctions, like spectrum license auctions. On the other, welfare maximization in the presence of complements is notoriously difficult, and this intractability has stymied theoretical progress in the area. For example, there are no known positive results for combinatorial auctions in which bidder valuations are multi-parameter and non-complement-free, other than the relatively weak results known for general valuations.
For revenue and welfare maximization in single-dimensional Bayesian settings, Chawla et al. (STOC10) recently showed that sequential posted-price mechanisms (SPMs), though simple in form, can perform surprisingly well compared to the optimal mechanisms. In this paper, we give a theoretical explanation of this fact, based on a connection to the notion of correlation gap. Loosely speaking, for auction environments with matroid constraints, we can relate the performance of a mechanism to the expectation of a monotone submodular function over a random set. This random set corresponds to the winner set for the optimal mechanism, which is highly correlated, and to a certain demand set for SPMs, which is independent. The notion of correlation gap of Agrawal et al. (SODA10) quantifies how much we "lose" in the expectation of the function by ignoring correlation in the random set, and hence bounds our loss in using a certain SPM instead of the optimal mechanism. Furthermore, the correlation gap of a monotone submodular function is known to be small, and it follows that such an SPM can approximate the optimal mechanism to within a good constant factor. Exploiting this connection, we give a tight analysis of a greedy-based SPM of Chawla et al. for several environments. In particular, we show that it gives an $e/(e-1)$-approximation for matroid environments, asymptotically a $1/(1-1/\sqrt{2\pi k})$-approximation for the important sub-case of $k$-unit auctions, and a $(p+1)$-approximation for environments with $p$-independent set system constraints.
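As a small illustration of the SPM form analyzed here, consider the $k$-unit sub-case: post one price and serve buyers greedily until supply runs out. The quantile-based price below is an illustrative assumption rather than the greedy SPM of Chawla et al.

```python
import random

def k_unit_spm(values, price, k):
    """Sequential posted-price mechanism for k identical units: offer the
    same price to each arriving buyer; a buyer accepts iff value >= price."""
    winners = []
    for i, v in enumerate(values):
        if v >= price:
            winners.append(i)
            if len(winners) == k:
                break
    return winners

# Assumed instance: 100 buyers with iid U[0,1] values and k = 10 units;
# the price is set so that about k buyers demand a unit in expectation.
n, k = 100, 10
values = [random.random() for _ in range(n)]
print(k_unit_spm(values, price=1 - k / n, k=k))
```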
We study simple and approximately optimal auctions for agents with a particular form of risk-averse preferences. We show that, for symmetric agents, the optimal revenue (given a prior distribution over the agent preferences) can be approximated by the first-price auction (which is prior independent), and, for asymmetric agents, the optimal revenue can be approximated by an auction with simple form. These results are based on two technical methods. The first is for upper-bounding the revenue from a risk-averse agent. The second gives a payment identity for mechanisms with pay-your-bid semantics.
We provide a computationally efficient black-box reduction from mechanism design to algorithm design in very general settings. Specifically, we give an approximation-preserving reduction from truthfully maximizing any objective under arbitrary feasibility constraints with arbitrary bidder types to (not necessarily truthfully) maximizing the same objective plus virtual welfare (under the same feasibility constraints). Our reduction is based on a fundamentally new approach: we describe a mechanism's behavior indirectly only in terms of the expected value it awards bidders for certain behavior, and never directly access the allocation rule at all. Applying our new approach to revenue, we exhibit settings where our reduction holds both ways. That is, we also provide an approximation-sensitive reduction from (non-truthfully) maximizing virtual welfare to (truthfully) maximizing revenue, and therefore the two problems are computationally equivalent. With this equivalence in hand, we show that both problems are NP-hard to approximate within any polynomial factor, even for a single monotone submodular bidder. We further demonstrate the applicability of our reduction by providing a truthful mechanism maximizing fractional max-min fairness.
We introduce a new hierarchy over monotone set functions, that we refer to as MPH (Maximum over Positive Hyper-graphs). Levels of the hierarchy correspond to the degree of complementarity in a given function. The highest level of the hierarchy, MPH-m (where m is the total number of items) captures all monotone functions. The lowest level, MPH-1, captures all monotone submodular functions, and more generally, the class of functions known as XOS. Every monotone function that has a positive hypergraph representation of rank κ (in the sense defined by Abraham, Babaioff, Dughmi and Roughgarden [EC 2012]) is in MPH-κ. Every monotone function that has supermodular degree κ (in the sense defined by Feige and Izsak [ITCS 2013]) is in MPH-(κ+1). In both cases, the converse direction does not hold, even in an approximate sense. We present additional results that demonstrate the expressive power of MPH-κ.
One can obtain good approximation ratios for some natural optimization problems, provided that the functions involved lie in low levels of the MPH hierarchy. We present two such applications. One shows that the maximum welfare problem can be approximated within a ratio of κ + 1 if all players hold valuation functions in MPH-κ. The other is an upper bound of 2κ on the price of anarchy of simultaneous first-price auctions.
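To ground the lowest level of the hierarchy, the following sketch evaluates an MPH-1 (equivalently, XOS) valuation, represented as a maximum over additive clauses; a level-κ function would instead allow each clause to place positive weights on hyperedges of up to κ items. The clause data is a toy assumption.

```python
def xos_value(clauses, bundle):
    """MPH-1 / XOS valuation: the value of a bundle S is the maximum, over
    additive clauses, of the clause's total weight on S. At level MPH-k a
    clause could place positive weights on hyperedges of up to k items."""
    return max(sum(w for item, w in clause.items() if item in bundle)
               for clause in clauses)

# Toy valuation over items {a, b, c}, given by two additive clauses.
clauses = [{"a": 3.0, "b": 1.0}, {"b": 2.0, "c": 2.0}]
print(xos_value(clauses, {"a", "b"}))  # 4.0 (first clause wins)
print(xos_value(clauses, {"b", "c"}))  # 4.0 (second clause wins)
```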
We provide a unified view of many recent developments in Bayesian mechanism design, including the black-box reductions of Cai et al., simple auctions for additive buyers, and posted-price mechanisms for unit-demand buyers. Additionally, we show that viewing these three previously disjoint lines of work through the same lens leads to new developments as well. First, we provide a duality framework for Bayesian mechanism design, which naturally accommodates multiple agents and arbitrary objectives/feasibility constraints. Using this, we prove that either a posted-price mechanism or the VCG auction with per-bidder entry fees achieves a constant factor of the optimal Bayesian IC revenue whenever buyers are unit-demand or additive, unifying previous breakthroughs of Chawla et al. and Yao, and improving both approximation ratios (from 33.75 to 24 and from 69 to 8). Finally, we show that this view also leads to improved structural characterizations in the Cai et al. framework.
We present an analysis framework for bounding the price of anarchy (POA) in games with many players, a regime that includes many of the games most pertinent to computer science applications. We use this framework to demonstrate that, in many of the models in which the POA has been studied, the POA in large games is much smaller than the worst-case bound. Our framework also differentiates between mechanisms with similar worst-case performance, such as simultaneous uniform-price auctions and greedy combinatorial auctions, thereby providing new insights about which mechanisms are likely to perform well in realistic settings.