We study the following vertex-weighted online bipartite matching problem: G(U, V, E) is a bipartite graph. The vertices in U have weights and are known ahead of time, while the vertices in V arrive online in an arbitrary order and have to be matched upon arrival. The goal is to maximize the sum of weights of the matched vertices in U. When all the weights are equal, this reduces to the classic online bipartite matching problem for which Karp, Vazirani and Vazirani gave an optimal (1 − 1/e)-competitive algorithm in their seminal work [10]. Our main result is an optimal (1 − 1/e)-competitive randomized algorithm for general vertex weights. We use random perturbations of weights by appropriately chosen multiplicative factors. Our solution constitutes the first known generalization of the algorithm in [10] in this model and provides new insights into the role of randomization in online allocation problems. It also effectively solves the problem of online budgeted allocations [14] in the case when an agent makes the same bid for any desired item, even if the bid is comparable to his budget, complementing the results of [14, 3] which apply when the bids are much smaller than the budgets.
Online Vertex-Weighted Bipartite Matching and Single-bid Budgeted Allocations. Gagan Aggarwal, Gagan Goel, Chinmay Karande, and Aranyak Mehta. In Proceedings of the 2011 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 1253-1264. DOI: https://doi.org/10.1137/1.9781611973082.95
In this paper, we present the first approximation algorithms for the problem of designing revenue optimal Bayesian incentive compatible auctions when there are multiple (heterogeneous) items and when bidders have arbitrary demand and budget constraints (and additive valuations). Our mechanisms are surprisingly simple: We show that a sequential all-pay mechanism is a 4-approximation to the revenue of the optimal ex-interim truthful mechanism with a discrete type space for each bidder, where her valuations for different items can be correlated. We also show that a sequential posted price mechanism is an O(1)-approximation to the revenue of the optimal ex-post truthful mechanism when the type space of each bidder is a product distribution that satisfies the standard hazard rate condition. We further show a logarithmic approximation when the hazard rate condition is removed, and complete the picture by showing that achieving a sub-logarithmic approximation, even for regular distributions and one bidder, requires pricing bundles of items. Our results are based on formulating novel LP relaxations for these problems, and developing generic rounding schemes from first principles.
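To make the mechanism format concrete, here is a minimal sketch of a generic sequential posted-price mechanism for additive bidders with demand and budget constraints. The bidder order and the item prices (the `prices` argument below) are assumed inputs; in the paper they would be derived from the LP relaxation, which this sketch does not attempt to reproduce.

```python
# Sketch of a generic sequential posted-price mechanism for additive bidders
# with demand and budget constraints. The bidder order and the `prices` vector
# are assumed inputs; deriving them (e.g., from an LP relaxation) is the
# substance of the actual result and is not reproduced here.

def sequential_posted_price(bidders, items, prices):
    """bidders: list of dicts with keys 'values' (item -> value),
    'demand' (max number of items), and 'budget' (max total payment)."""
    available = set(items)
    allocation, payments = {}, {}
    for i, bidder in enumerate(bidders):
        # Offer the remaining items at their posted prices; the bidder takes
        # items with positive surplus, best surplus first, while respecting
        # her demand and budget. (A greedy stand-in for exact utility
        # maximization, which under a budget would be a knapsack problem.)
        candidates = sorted(
            (item for item in available
             if bidder['values'].get(item, 0.0) > prices[item]),
            key=lambda item: bidder['values'][item] - prices[item],
            reverse=True)
        taken, spent = [], 0.0
        for item in candidates:
            if len(taken) >= bidder['demand']:
                break
            if spent + prices[item] > bidder['budget']:
                continue
            taken.append(item)
            spent += prices[item]
        available -= set(taken)
        allocation[i], payments[i] = taken, spent
    return allocation, payments
```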
In this paper we consider a mechanism design problem in the context of large-scale crowdsourcing markets such as Amazon's Mechanical Turk, ClickWorker, and CrowdFlower. In these markets, there is a requester who wants to hire workers to accomplish some tasks. Each worker is assumed to give some utility to the requester on getting hired. Moreover, each worker has a minimum cost that he wants to get paid for getting hired. This minimum cost is assumed to be private information of the workers. The question then is: if the requester has a limited budget, how do we design a direct revelation mechanism that picks the right set of workers to hire in order to maximize the requester's utility? We note that although previous work (Singer (2010), Chen et al. (2011)) has studied this problem, a crucial difference in which we deviate from earlier work is the notion of large-scale markets that we introduce in our model. Without the large market assumption, it is known that no mechanism can achieve a competitive ratio better than 0.414 and 0.5 for deterministic and randomized mechanisms respectively (while the best known deterministic and randomized mechanisms achieve approximation ratios of 0.292 and 0.33, respectively). In this paper, we design a budget-feasible mechanism for large markets that achieves a competitive ratio of 1 - 1/e ≃ 0.63. Our mechanism can be seen as a generalization of an alternate way to look at the proportional share mechanism, which is used in all the previous works so far on this problem. Interestingly, we can also show that our mechanism is optimal by showing that no truthful mechanism can achieve a factor better than 1 - 1/e, thus fully resolving this setting. Finally we consider the more general case of submodular utility functions and give new and improved mechanisms for the case when the market is large.
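Since the abstract builds on the proportional share mechanism from earlier work, a short sketch of that allocation rule may help fix ideas. This is my paraphrase of the prior rule (Singer 2010) for additive utilities, not the 1 - 1/e large-market mechanism of this paper, and the threshold payments that make the rule budget-feasible and truthful are omitted.

```python
# Sketch of the proportional-share allocation rule from earlier budget-feasible
# mechanism design work (Singer 2010), for additive worker utilities. This is a
# paraphrase of the prior rule that the large-market mechanism generalizes, not
# the 1 - 1/e mechanism of the paper itself; payments are omitted.

def proportional_share_allocation(workers, budget):
    """workers: list of (reported_cost, utility) pairs. Returns the indices of
    the workers the rule would hire."""
    # Consider workers in increasing order of cost per unit of utility.
    order = sorted(range(len(workers)),
                   key=lambda i: workers[i][0] / workers[i][1])
    selected, total_utility = [], 0.0
    for i in order:
        cost, utility = workers[i]
        # Admit worker i only if her cost stays within her proportional share
        # of the budget, relative to the utility of the selected set including
        # herself.
        if cost <= budget * utility / (total_utility + utility):
            selected.append(i)
            total_utility += utility
    return selected
```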
We revisit the classic problem of fair division from a mechanism design perspective and provide an elegant truthful mechanism that yields surprisingly good approximation guarantees for the widely used solution of Proportional Fairness. This solution, which is closely related to Nash bargaining and the competitive equilibrium, is known to be not implementable in a truthful fashion, which has been its main drawback. To alleviate this issue, we propose a new mechanism, which we call the Partial Allocation mechanism, that discards a carefully chosen fraction of the allocated resources in order to incentivize the agents to be truthful in reporting their valuations. This mechanism introduces a way to implement interesting truthful outcomes in settings where monetary payments are not an option.
A central issue in applying auction theory in practice is the problem of dealing with budget-constrained agents. A desirable goal in practice is to design incentive compatible, individually rational, and Pareto optimal auctions while respecting the budget constraints. Achieving this goal is particularly challenging in the presence of nontrivial combinatorial constraints over the set of feasible allocations. Toward this goal and motivated by AdWords auctions, we present an auction for polymatroidal environments satisfying the above properties. Our auction employs a novel clinching technique with a clean geometric description and only needs an oracle access to the submodular function defining the polymatroid. As a result, this auction not only simplifies and generalizes all previous results, it applies to several new applications including AdWords Auctions, bandwidth markets, and video on demand. In particular, our characterization of the AdWords auction as polymatroidal constraints might be of independent interest. This allows us to design the first mechanism for Ad Auctions taking into account simultaneously budgets, multiple keywords and multiple slots.
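For readers unfamiliar with the term, a polymatroidal environment constrains the vector of per-agent allocations through a monotone submodular function; in standard (not paper-specific) notation, the feasible set is
\[
P_f = \Big\{\, x \in \mathbb{R}_{\ge 0}^{N} \;:\; \sum_{i \in S} x_i \le f(S) \ \text{ for all } S \subseteq N \,\Big\},
\]
where $N$ is the set of agents and $f$ is monotone, submodular, and normalized ($f(\emptyset)=0$). The auction only needs value-oracle access to $f$, i.e., the ability to query $f(S)$ for any set $S$ of agents.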
A central issue in applying auction theory in practice is the problem of dealing with budget-constrained agents. A desirable goal in practice is to design incentive compatible, individually rational, and Pareto optimal auctions while respecting the budget constraints. Achieving this goal is particularly challenging in the presence of nontrivial combinatorial constraints over the set of feasible allocations. Toward this goal and motivated by AdWords auctions, we present an auction for polymatroidal environments satisfying these properties. Our auction employs a novel clinching technique with a clean geometric description and only needs an oracle access to the submodular function defining the polymatroid. As a result, this auction not only simplifies and generalizes all previous results, it applies to several new applications including AdWords Auctions, bandwidth markets, and video on demand. In particular, our characterization of the AdWords auction as polymatroidal constraints might be of independent interest. This allows us to design the first mechanism for Ad Auctions taking into account simultaneously budgets, multiple keywords and multiple slots. We show that it is impossible to extend this result to generic polyhedral constraints. This also implies an impossibility result for multi-unit auctions with decreasing marginal utilities in the presence of budget constraints.
Motivated by an application in kidney exchange, we study the following query-commit problem: we are given the set of vertices of a non-bipartite graph G. The set of edges in this graph is not known ahead of time. We can query any pair of vertices to determine if they are adjacent. If the queried edge exists, we are committed to match the two endpoints. Our objective is to maximize the size of the matching. This restriction in the amount of information available to the algorithm constrains us to implement myopic, greedy-like algorithms. A simple deterministic greedy algorithm achieves a factor 1/2, which is tight for deterministic algorithms. An important open question in this direction is to give a randomized greedy algorithm that has a significantly better approximation factor. This question was first asked almost 20 years ago by Dyer and Frieze [9], who showed that a natural randomized strategy of picking edges uniformly at random doesn't help and has an approximation factor of 1/2 + o(1). They left it as an open question to devise a better randomized greedy algorithm. In subsequent work, Aronson, Dyer, Frieze, and Suen [2] gave a different randomized greedy algorithm and showed that it attains a factor 0.5 + epsilon where epsilon is 0.0000025. In this paper we propose and analyze a new randomized greedy algorithm for finding a large matching in a general graph and use it to solve the query-commit problem mentioned above. We show that our algorithm attains a factor of at least 0.56, a significant improvement over 0.50000025. We also show that no randomized algorithm can have an approximation factor better than 0.7916 for the query-commit problem. For another large and interesting class of randomized algorithms that we call vertex-iterative algorithms, we show that no vertex-iterative algorithm can have an approximation factor better than 0.75.
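As a baseline, the deterministic 1/2-approximate greedy for the query-commit model, together with the uniformly-random-query-order variant analyzed by Dyer and Frieze, can be sketched as follows; the 0.56-factor algorithm of the paper uses a more structured randomization that is not reproduced here.

```python
import random

# Baseline greedy algorithms for the query-commit matching problem: any vertex
# pair may be queried, and if the edge exists the pair must be matched. The
# deterministic scan is the classic 1/2-approximate greedy; shuffling the query
# order uniformly at random is the strategy shown by Dyer and Frieze to gain
# only o(1) over 1/2. The paper's 0.56-factor algorithm is not reproduced here.

def greedy_query_commit(vertices, edge_exists, randomize=False):
    """vertices: list of vertex ids. edge_exists(u, v) -> bool queries the
    hidden graph. Returns the committed matching as a list of pairs."""
    pairs = [(u, v) for i, u in enumerate(vertices) for v in vertices[i + 1:]]
    if randomize:
        random.shuffle(pairs)              # uniformly random query order
    matched, matching = set(), []
    for u, v in pairs:
        if u in matched or v in matched:   # never query a pair we cannot commit
            continue
        if edge_exists(u, v):              # the commit rule: match immediately
            matched.update((u, v))
            matching.append((u, v))
    return matching
```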
Constraints on agents' ability to pay play a major role in auction design for any setting where the magnitude of financial transactions is sufficiently large. Those constraints have been traditionally modeled in mechanism design as hard budgets, i.e., the mechanism is not allowed to charge agents more than a certain amount. Yet, real auction systems (such as Google AdWords) allow more sophisticated constraints on agents' ability to pay, such as average budgets. In this work, we investigate the design of Pareto optimal and incentive compatible auctions for agents with constrained quasi-linear utilities, which capture more realistic models of liquidity constraints that the agents may have. Our result applies to a very general class of allocation constraints known as polymatroidal environments, encompassing many settings of interest such as multi-unit auctions, matching markets, video-on-demand and advertisement systems.
Auctions for perishable goods such as internet ad inventory need to make real-time allocation and pricing decisions as the supply of the good arrives in an online manner, without knowing the entire supply in advance. These allocation and pricing decisions get complicated when buyers have some global constraints. In this work, we consider a multi-unit model where buyers have global {\em budget} constraints, and the supply arrives in an online manner. Our main contribution is to show that for this setting there is an individually-rational, incentive-compatible and Pareto-optimal auction that allocates these units and calculates prices on the fly, without knowledge of the total supply. We do so by showing that the Adaptive Clinching Auction satisfies a {\em supply-monotonicity} property. We also analyze and discuss, using examples, how the insights gained by the allocation and payment rule can be applied to design better ad allocation heuristics in practice. Finally, while our main technical result concerns multi-unit supply, we propose a formal model of online supply that captures scenarios beyond multi-unit supply and has applications to sponsored search. We conjecture that our results for multi-unit auctions can be extended to these more general models.
One of the major drawbacks of the celebrated VCG auction is its low (or zero) revenue even when the agents have high value for the goods and a competitive outcome would have generated a significant revenue. A competitive outcome is one for which it is impossible for the seller and a subset of buyers to 'block' the auction by defecting and negotiating an outcome with higher payoffs for themselves. This corresponds to the well-known concept of core in cooperative game theory.
In this paper we consider a mechanism design problem in the context of large-scale crowdsourcing markets such as Amazon's Mechanical Turk, ClickWorker, CrowdFlower. In these markets, there is a requester who wants to hire workers to accomplish some tasks. Each worker is assumed to give some utility to the requester. Moreover, each worker has a minimum cost that he wants to get paid for getting hired. This minimum cost is assumed to be private information of the workers. The question then is: if the requester has a limited budget, how do we design a direct revelation mechanism that picks the right set of workers to hire in order to maximize the requester's utility? We note that although the previous work has studied this problem, a crucial difference in which we deviate from earlier work is the notion of large-scale markets that we introduce in our model. Without the large market assumption, it is known that no mechanism can achieve an approximation factor better than 0.414 and 0.5 for deterministic and randomized mechanisms respectively (while the best known deterministic and randomized mechanisms achieve approximation ratios of 0.292 and 0.33, respectively). In this paper, we design a budget-feasible mechanism for large markets that achieves an approximation factor of 1 - 1/e (i.e. almost 0.63). Our mechanism can be seen as a generalization of an alternate way to look at the proportional share mechanism which is used in all the previous works so far on this problem. Interestingly, we also show that our mechanism is optimal by showing that no truthful mechanism can achieve a factor better than 1 - 1/e, thus fully resolving this setting. Finally we consider the more general case of submodular utility functions and give new and improved mechanisms for the case when the markets are large.
Submodular functions are an important class of functions in combinatorial optimization which satisfy the natural property of decreasing marginal costs. The study of these functions has led to strong structural properties with applications in many areas. Recently, there has been significant interest in extending the theory of algorithms for optimizing combinatorial problems (such as the network design problem of computing a spanning tree) over submodular functions. Unfortunately, the lower bounds under the general class of submodular functions are known to be very high for many of the classical problems. In this paper, we introduce and study an important subclass of submodular functions, which we call discounted price functions. These functions are succinctly representable and generalize linear cost functions. We study the following fundamental combinatorial optimization problems: Edge Cover, Spanning Tree, Perfect Matching and Shortest Path, and obtain tight upper and lower bounds for these problems. The main technical contribution of this paper is designing novel adaptive greedy algorithms for the above problems. These algorithms greedily build the solution while rectifying mistakes made in the previous steps.
In this paper, we present the first approximation algorithms for the problem of designing revenue optimal Bayesian incentive compatible auctions when there are multiple (heterogeneous) items and when bidders can have arbitrary demand and budget constraints. Our mechanisms are surprisingly simple: We show that a sequential all-pay mechanism is a 4-approximation to the revenue of the optimal ex-interim truthful mechanism with a discrete correlated type space for each bidder. We also show that a sequential posted price mechanism is an O(1)-approximation to the revenue of the optimal ex-post truthful mechanism when the type space of each bidder is a product distribution that satisfies the standard hazard rate condition. We further show a logarithmic approximation when the hazard rate condition is removed, and complete the picture by showing that achieving a sub-logarithmic approximation, even for regular distributions and one bidder, requires pricing bundles of items. Our results are based on formulating novel LP relaxations for these problems, and developing generic rounding schemes from first principles. We believe this approach will be useful in other Bayesian mechanism design contexts.
We compare the expected efficiency of revenue maximizing (or {\em optimal}) mechanisms with that of efficiency maximizing ones. We show that the efficiency of the revenue maximizing mechanism for selling a single item with k + log_{e/(e-1)} k + 1 bidders is at least as much as the efficiency of the efficiency maximizing mechanism with k bidders, when bidder valuations are drawn i.i.d. from a Monotone Hazard Rate distribution. Surprisingly, we also show that this bound is tight within a small additive constant of 5.7. In other words, Theta(log k) extra bidders suffice for the revenue maximizing mechanism to match the efficiency of the efficiency maximizing mechanism, while o(log k) do not. This is in contrast to the result of Bulow and Klemperer comparing the revenue of the two mechanisms, where only one extra bidder suffices. More precisely, they show that the revenue of the efficiency maximizing mechanism with k+1 bidders is no less than the revenue of the revenue maximizing mechanism with k bidders. We extend our result to the case of selling t identical items and show that 2.2 log k + t·Theta(log log k) extra bidders suffice for the revenue maximizing mechanism to match the efficiency of the efficiency maximizing mechanism. In order to prove our results, we develop a classification of Monotone Hazard Rate (MHR) distributions and identify a family of MHR distributions such that, for each class in our classification, there is a member of this family that is pointwise lower than every distribution in that class. This lets us prove interesting structural theorems about distributions with Monotone Hazard Rate.
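For intuition on the Theta(log k) overhead above, note that the base-$\tfrac{e}{e-1}$ logarithm is just a constant multiple of the natural logarithm:
\[
\log_{e/(e-1)} k \;=\; \frac{\ln k}{\ln\!\big(\tfrac{e}{e-1}\big)} \;=\; \frac{\ln k}{1-\ln(e-1)} \;\approx\; 2.18\,\ln k,
\]
so $k + \log_{e/(e-1)} k + 1$ bidders exceed $k$ by $\Theta(\log k)$, in contrast to the single extra bidder in the Bulow-Klemperer revenue comparison.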
In this survey, we summarize recent developments in research fueled by the growing adoption of automated bidding strategies in online advertising. We explore the challenges and opportunities that have arisen as markets embrace this autobidding, and cover a range of topics in this area, including bidding algorithms, equilibrium analysis and efficiency of common auction formats, and optimal auction design.
Constraints on agents' ability to pay play a major role in auction design for any setting where the magnitude of financial transactions is sufficiently large. Those constraints have been traditionally modeled in mechanism design as \emph{hard budgets}, i.e., the mechanism is not allowed to charge agents more than a certain amount. Yet, real auction systems (such as Google AdWords) allow more sophisticated constraints on agents' ability to pay, such as \emph{average budgets}. In this work, we investigate the design of Pareto optimal and incentive compatible auctions for agents with \emph{constrained quasi-linear utilities}, which capture more realistic models of liquidity constraints that the agents may have. Our result applies to a very general class of allocation constraints known as polymatroidal environments, encompassing many settings of interest such as multi-unit auctions, matching markets, video-on-demand and advertisement systems. Our design is based on Ausubel's \emph{clinching framework}. Incentive compatibility and feasibility with respect to ability-to-pay constraints are direct consequences of the clinching framework. Pareto-optimality, on the other hand, is considerably more challenging, since the no-trade condition that characterizes it depends not only on whether agents have their budgets exhausted or not, but also on the prices at which the goods are allocated. In order to get a handle on those prices, we introduce the novel concepts of dropping prices and saturation. These concepts lead to our main structural result, which is a characterization of the tight sets in the clinching auction outcome and its relation to dropping prices.
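To make the contrast concrete (the notation below is mine, not the paper's, so treat it as an illustrative reading): with allocation $x_i$ and payment $p_i$, a hard budget $B_i$ versus an average budget $a_i$ can be written as
\[
\text{hard budget: } p_i \le B_i,
\qquad
\text{average budget: } p_i \le a_i\, x_i,
\]
i.e., a cap on the total payment versus a cap on the payment per unit received; constrained quasi-linear utilities keep the quasi-linear form $v_i x_i - p_i$ while allowing ability-to-pay constraints of this more general kind.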
We study the following vertex-weighted online bipartite matching problem: $G(U, V, E)$ is a bipartite graph. The vertices in $U$ have weights and are known ahead of time, while the vertices in $V$ arrive online in an arbitrary order and have to be matched upon arrival. The goal is to maximize the sum of weights of the matched vertices in $U$. When all the weights are equal, this reduces to the classic \emph{online bipartite matching} problem for which Karp, Vazirani and Vazirani gave an optimal $\left(1-\frac{1}{e}\right)$-competitive algorithm in their seminal work~\cite{KVV90}. Our main result is an optimal $\left(1-\frac{1}{e}\right)$-competitive randomized algorithm for general vertex weights. We use \emph{random perturbations} of weights by appropriately chosen multiplicative factors. Our solution constitutes the first known generalization of the algorithm in~\cite{KVV90} in this model and provides new insights into the role of randomization in online allocation problems. It also effectively solves the problem of \emph{online budgeted allocations} \cite{MSVV05} in the case when an agent makes the same bid for any desired item, even if the bid is comparable to his budget, complementing the results of \cite{MSVV05, BJN07} which apply when the bids are much smaller than the budgets.
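A minimal sketch of the perturbed-greedy rule described above is given below. The specific multiplicative perturbation $1 - e^{x-1}$, with $x$ drawn uniformly from $[0,1]$ per offline vertex, is my recollection of the paper's choice and should be treated as an assumption rather than a quotation.

```python
import math
import random

# Sketch of a perturbed-greedy rule for vertex-weighted online bipartite
# matching: each offline vertex u draws x_u uniformly from [0, 1] once, its
# weight is scaled by a multiplicative factor, and each arriving online vertex
# is matched to its unmatched neighbor with the largest perturbed weight. The
# factor 1 - exp(x - 1) is an assumed choice (my recollection of the paper),
# not a quotation.

def perturbed_greedy(offline_weights, online_arrivals):
    """offline_weights: dict u -> w_u > 0. online_arrivals: iterable of
    (v, neighbors) pairs, where neighbors are offline vertices adjacent to v."""
    perturbed = {u: w * (1.0 - math.exp(random.random() - 1.0))
                 for u, w in offline_weights.items()}
    matched, total_weight = {}, 0.0
    for v, neighbors in online_arrivals:
        free = [u for u in neighbors if u not in matched]
        if not free:
            continue                                  # v stays unmatched
        u = max(free, key=lambda w: perturbed[w])     # greedy on perturbed weights
        matched[u] = v
        total_weight += offline_weights[u]            # objective uses true weights
    return matched, total_weight
```

When all weights are equal, the rule depends only on the ranking induced by the random draws, recovering the RANKING algorithm of Karp, Vazirani and Vazirani as a special case.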
How does one allocate a collection of resources to a set of strategic agents in a fair and efficient manner without using money? In many scenarios it is not feasible to use money to compensate agents for otherwise unsatisfactory outcomes. This paper studies this question, looking at both fairness and efficiency measures. We employ the proportionally fair solution, which is a well-known fairness concept for money-free settings. But although finding a proportionally fair solution is computationally tractable, it cannot be implemented in a truthful fashion. Consequently, we seek approximate solutions. We give several truthful mechanisms which achieve proportional fairness in an approximate sense. We use a strong notion of approximation, requiring the mechanism to give each agent a good approximation of its proportionally fair utility. In particular, one of our mechanisms provides a better and better approximation factor as the minimum demand for every good increases. A motivating example is provided by the massive privatization auction in the Czech Republic in the early 90s. With regard to efficiency, prior work has shown a lower bound of 0.5 on the approximation factor of any swap-dictatorial mechanism approximating a social welfare measure, even for the case of two agents and multiple goods. We surpass this lower bound by designing a non-swap-dictatorial mechanism for this case. Interestingly, the new mechanism builds on the notion of proportional fairness.
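For reference, the proportionally fair solution used in these abstracts is the allocation that maximizes the product of the agents' utilities (equivalently, the sum of their logarithms), the Eisenberg-Gale / Nash bargaining objective:
\[
x^{*} \in \arg\max_{x \in \mathcal{F}} \; \sum_{i} \log u_i(x_i),
\]
where $\mathcal{F}$ is the set of feasible divisions of the goods; equivalently, for every other feasible $x$ the aggregate proportional change satisfies $\sum_i \frac{u_i(x_i) - u_i(x^{*}_i)}{u_i(x^{*}_i)} \le 0$.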
We study the problem of designing mechanisms to allocate a heterogeneous set of divisible goods among a set of agents in a fair manner. We consider the well known solution concept of proportional fairness that has found applications in many real-world scenarios. Although finding a proportionally fair solution is computationally tractable, it cannot be implemented in a truthful manner. To overcome this, in this paper we give mechanisms which are truthful and achieve proportional fairness in an approximate manner. We use a strong notion of approximation, requiring the mechanism to give each agent a good approximation of its proportionally fair utility. A motivating example is provided by the massive privatization auction in the Czech Republic in the early 90s.
Online advertising is the main source of revenue for many Internet firms. A central component of online advertising is the underlying mechanism that selects and prices the winning ads for a given ad slot. In this paper we study designing a mechanism for the Combinatorial Auction with Identical Items (CAII), in which we are interested in selling $k$ identical items to a group of bidders each demanding a certain number of items between $1$ and $k$. CAII generalizes important online advertising scenarios such as image-text and video-pod auctions [GK14]. In an image-text auction we want to fill an advertising slot on a publisher's web page with either $k$ text-ads or a single image-ad, and in a video-pod auction we want to fill an advertising break of $k$ seconds with video-ads of possibly different durations. Our goal is to design truthful mechanisms that satisfy Revenue Monotonicity (RM). RM is a natural constraint which states that the revenue of a mechanism should not decrease if the number of participants increases or if a participant increases her bid. [GK14] showed that no deterministic RM mechanism can attain a Price of Revenue Monotonicity (PoRM) of less than $\ln(k)$ for CAII, i.e., no deterministic mechanism can attain more than a $\frac{1}{\ln(k)}$ fraction of the maximum social welfare. [GK14] also design a mechanism with PoRM of $O(\ln^2(k))$ for CAII. In this paper, we seek to overcome the impossibility result of [GK14] for deterministic mechanisms by using the power of randomization. We show that by using randomization, one can attain a constant PoRM. In particular, we design a randomized RM mechanism with PoRM of $3$ for CAII.
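Spelled out, the Price of Revenue Monotonicity of a (randomized) RM mechanism $M$ is its worst-case welfare gap (my paraphrase of the definition in [GK14], using notation not taken from the paper):
\[
\mathrm{PoRM}(M) \;=\; \sup_{I} \; \frac{\mathrm{OPT}(I)}{\mathbb{E}\!\left[\mathrm{SW}_M(I)\right]},
\]
where $\mathrm{OPT}(I)$ is the maximum social welfare on instance $I$ and $\mathrm{SW}_M(I)$ is the welfare achieved by the mechanism; a PoRM of $3$ therefore guarantees at least one third of the optimal welfare in expectation on every instance.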
A central issue in applying auction theory in practice is the problem of dealing with budget-constrained agents. A desirable goal in practice is to design incentive compatible, individually rational, and Pareto optimal auctions while respecting the budget constraints. Achieving this goal is particularly challenging in the presence of nontrivial combinatorial constraints over the set of feasible allocations. Toward this goal and motivated by AdWords auctions, we present an auction for {\em polymatroidal} environments satisfying the above properties. Our auction employs a novel clinching technique with a clean geometric description and only needs an oracle access to the submodular function defining the polymatroid. As a result, this auction not only simplifies and generalizes all previous results, it applies to several new applications including AdWords Auctions, bandwidth markets, and video on demand. In particular, our characterization of the AdWords auction as polymatroidal constraints might be of independent interest. This allows us to design the first mechanism for Ad Auctions taking into account simultaneously budgets, multiple keywords and multiple slots. We show that it is impossible to extend this result to generic polyhedral constraints. This also implies an impossibility result for multi-unit auctions with decreasing marginal utilities in the presence of budget constraints.
We revisit the classic problem of fair division from a mechanism design perspective, using {\em Proportional Fairness} as a benchmark. In particular, we aim to allocate a collection of divisible items to a set of agents while incentivizing the agents to be truthful in reporting their valuations. For the very large class of homogeneous valuations, we design a truthful mechanism that provides {\em every agent} with at least a $1/e\approx 0.368$ fraction of her Proportionally Fair valuation. To complement this result, we show that no truthful mechanism can guarantee more than a $0.5$ fraction, even for the restricted class of additive linear valuations. We also propose another mechanism for additive linear valuations that works really well when every item is highly demanded. To guarantee truthfulness, our mechanisms discard a carefully chosen fraction of the allocated resources; we conclude by uncovering interesting connections between our mechanisms and known mechanisms that use money instead.
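A compact way to state the Partial Allocation mechanism's discarding rule, reconstructed from my reading of the mechanism (so treat the notation as illustrative): let $x^{*}$ be the proportionally fair allocation over all agents and $x^{-i}$ the proportionally fair allocation computed with agent $i$ removed; agent $i$ then keeps the fraction
\[
f_i \;=\; \frac{\prod_{j \ne i} v_j\!\left(x^{*}_j\right)}{\prod_{j \ne i} v_j\!\left(x^{-i}_j\right)} \;\in\; (0, 1]
\]
of her bundle $x^{*}_i$. In other words, she pays, in discarded resources, exactly the multiplicative externality she imposes on the other agents' Nash welfare, which is what removes the incentive to misreport.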
One of the major drawbacks of the celebrated VCG auction is its low (or zero) revenue even when the agents have high value for the goods and a {\em competitive} outcome could have generated a significant revenue. A competitive outcome is one for which it is impossible for the seller and a subset of buyers to `block' the auction by defecting and negotiating an outcome with higher payoffs for themselves. This corresponds to the well-known concept of {\em core} in cooperative game theory. In particular, VCG revenue is known to be not competitive when the goods being sold have complementarities. A bottleneck here is an impossibility result showing that there is no auction that simultaneously achieves competitive prices (a core outcome) and incentive-compatibility. In this paper we try to overcome the above impossibility result by asking the following natural question: is it possible to design an incentive-compatible auction whose revenue is comparable (even if less) to a competitive outcome? Towards this, we define a notion of {\em core-competitive} auctions. We say that an incentive-compatible auction is $\alpha$-core-competitive if its revenue is at least a $1/\alpha$ fraction of the minimum revenue of a core outcome. We study the Text-and-Image setting. In this setting, there is an ad slot which can be filled with either a single image ad or $k$ text ads. We design an $O(\ln \ln k)$ core-competitive randomized auction and an $O(\sqrt{\ln(k)})$ core-competitive deterministic auction for the Text-and-Image setting. We also show that both factors are tight.
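Written as a formula, an incentive-compatible auction $M$ is $\alpha$-core-competitive when
\[
\mathrm{Rev}(M) \;\ge\; \frac{1}{\alpha}\cdot \min_{o \,\in\, \mathrm{Core}} \mathrm{Rev}(o),
\]
i.e., its revenue is within a factor of $\alpha$ of the smallest revenue generated by any core (competitive) outcome; the results above give $\alpha = O(\ln\ln k)$ with randomization and $\alpha = O(\sqrt{\ln k})$ deterministically for the Text-and-Image setting, and both factors are tight.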
In this paper, we present the first approximation algorithms for the problem of designing revenue optimal Bayesian incentive compatible auctions when there are multiple (heterogeneous) items and when bidders can … In this paper, we present the first approximation algorithms for the problem of designing revenue optimal Bayesian incentive compatible auctions when there are multiple (heterogeneous) items and when bidders can have arbitrary demand and budget constraints. Our mechanisms are surprisingly simple: We show that a sequential all-pay mechanism is a 4 approximation to the revenue of the optimal ex-interim truthful mechanism with discrete correlated type space for each bidder. We also show that a sequential posted price mechanism is a O(1) approximation to the revenue of the optimal ex-post truthful mechanism when the type space of each bidder is a product distribution that satisfies the standard hazard rate condition. We further show a logarithmic approximation when the hazard rate condition is removed, and complete the picture by showing that achieving a sub-logarithmic approximation, even for regular distributions and one bidder, requires pricing bundles of items. Our results are based on formulating novel LP relaxations for these problems, and developing generic rounding schemes from first principles. We believe this approach will be useful in other Bayesian mechanism design contexts.
In this paper we consider a mechanism design problem in the context of large-scale crowdsourcing markets such as Amazon's Mechanical Turk, ClickWorker, CrowdFlower. In these markets, there is a requester … In this paper we consider a mechanism design problem in the context of large-scale crowdsourcing markets such as Amazon's Mechanical Turk, ClickWorker, CrowdFlower. In these markets, there is a requester who wants to hire workers to accomplish some tasks. Each worker is assumed to give some utility to the requester. Moreover each worker has a minimum cost that he wants to get paid for getting hired. This minimum cost is assumed to be private information of the workers. The question then is - if the requester has a limited budget, how to design a direct revelation mechanism that picks the right set of workers to hire in order to maximize the requester's utility. We note that although the previous work has studied this problem, a crucial difference in which we deviate from earlier work is the notion of large-scale markets that we introduce in our model. Without the large market assumption, it is known that no mechanism can achieve an approximation factor better than 0.414 and 0.5 for deterministic and randomized mechanisms respectively (while the best known deterministic and randomized mechanisms achieve an approximation ratio of 0.292 and 0.33 respectively). In this paper, we design a budget-feasible mechanism for large markets that achieves an approximation factor of 1-1/e (i.e. almost 0.63). Our mechanism can be seen as a generalization of an alternate way to look at the proportional share mechanism which is used in all the previous works so far on this problem. Interestingly, we also show that our mechanism is optimal by showing that no truthful mechanism can achieve a factor better than 1-1/e; thus, fully resolving this setting. Finally we consider the more general case of submodular utility functions and give new and improved mechanisms for the case when the markets are large.
Constraints on agent's ability to pay play a major role in auction design for any setting where the magnitude of financial transactions is sufficiently large. Those constraints have been traditionally … Constraints on agent's ability to pay play a major role in auction design for any setting where the magnitude of financial transactions is sufficiently large. Those constraints have been traditionally modeled in mechanism design as \emph{hard budget}, i.e., mechanism is not allowed to charge agents more than a certain amount. Yet, real auction systems (such as Google AdWords) allow more sophisticated constraints on agents' ability to pay, such as \emph{average budgets}. In this work, we investigate the design of Pareto optimal and incentive compatible auctions for agents with \emph{constrained quasi-linear utilities}, which captures more realistic models of liquidity constraints that the agents may have. Our result applies to a very general class of allocation constraints known as polymatroidal environments, encompassing many settings of interest such as multi-unit auctions, matching markets, video-on-demand and advertisement systems. Our design is based Ausubel's \emph{clinching framework}. Incentive compatibility and feasibility with respect to ability-to-pay constraints are direct consequences of the clinching framework. Pareto-optimality, on the other hand, is considerably more challenging, since the no-trade condition that characterizes it depends not only on whether agents have their budgets exhausted or not, but also on prices {at} which the goods are allocated. In order to get a handle on those prices, we introduce novel concepts of dropping prices and saturation. These concepts lead to our main structural result which is a characterization of the tight sets in the clinching auction outcome and its relation to dropping prices.
A central issue in applying auction theory in practice is the problem of dealing with budget-constrained agents. A desirable goal in practice is to design incentive compatible, individually rational, and … A central issue in applying auction theory in practice is the problem of dealing with budget-constrained agents. A desirable goal in practice is to design incentive compatible, individually rational, and Pareto optimal auctions while respecting the budget constraints. Achieving this goal is particularly challenging in the presence of nontrivial combinatorial constraints over the set of feasible allocations. Toward this goal and motivated by AdWords auctions, we present an auction for {\em polymatroidal} environments satisfying the above properties. Our auction employs a novel clinching technique with a clean geometric description and only needs an oracle access to the submodular function defining the polymatroid. As a result, this auction not only simplifies and generalizes all previous results, it applies to several new applications including AdWords Auctions, bandwidth markets, and video on demand. In particular, our characterization of the AdWords auction as polymatroidal constraints might be of independent interest. This allows us to design the first mechanism for Ad Auctions taking into account simultaneously budgets, multiple keywords and multiple slots. We show that it is impossible to extend this result to generic polyhedral constraints. This also implies an impossibility result for multi-unit auctions with decreasing marginal utilities in the presence of budget constraints.
In this survey, we summarize recent developments in research fueled by the growing adoption of automated bidding strategies in online advertising. We explore the challenges and opportunities that have arisen … In this survey, we summarize recent developments in research fueled by the growing adoption of automated bidding strategies in online advertising. We explore the challenges and opportunities that have arisen as markets embrace this autobidding and cover a range of topics in this area, including bidding algorithms, equilibrium analysis and efficiency of common auction formats, and optimal auction design.
In this survey, we summarize recent developments in research fueled by the growing adoption of automated bidding strategies in online advertising. We explore the challenges and opportunities that have arisen … In this survey, we summarize recent developments in research fueled by the growing adoption of automated bidding strategies in online advertising. We explore the challenges and opportunities that have arisen as markets embrace this autobidding and cover a range of topics in this area, including bidding algorithms, equilibrium analysis and efficiency of common auction formats, and optimal auction design.
In this survey, we summarize recent developments in research fueled by the growing adoption of automated bidding strategies in online advertising. We explore the challenges and opportunities that have arisen … In this survey, we summarize recent developments in research fueled by the growing adoption of automated bidding strategies in online advertising. We explore the challenges and opportunities that have arisen as markets embrace this autobidding and cover a range of topics in this area, including bidding algorithms, equilibrium analysis and efficiency of common auction formats, and optimal auction design.
Online advertising is the main source of revenue for many Internet firms. A central component of online advertising is the underlying mechanism that selects and prices the winning ads for … Online advertising is the main source of revenue for many Internet firms. A central component of online advertising is the underlying mechanism that selects and prices the winning ads for a given ad slot. In this paper we study designing a mechanism for the Combinatorial Auction with Identical Items (CAII) in which we are interested in selling $k$ identical items to a group of bidders each demanding a certain number of items between $1$ and $k$. CAII generalizes important online advertising scenarios such as image-text and video-pod auctions [GK14]. In image-text auction we want to fill an advertising slot on a publisher's web page with either $k$ text-ads or a single image-ad and in video-pod auction we want to fill an advertising break of $k$ seconds with video-ads of possibly different durations. Our goal is to design truthful mechanisms that satisfy Revenue Monotonicity (RM). RM is a natural constraint which states that the revenue of a mechanism should not decrease if the number of participants increases or if a participant increases her bid. [GK14] showed that no deterministic RM mechanism can attain PoRM of less than $\ln(k)$ for CAII, i.e., no deterministic mechanism can attain more than $\frac{1}{\ln(k)}$ fraction of the maximum social welfare. [GK14] also design a mechanism with PoRM of $O(\ln^2(k))$ for CAII. In this paper, we seek to overcome the impossibility result of [GK14] for deterministic mechanisms by using the power of randomization. We show that by using randomization, one can attain a constant PoRM. In particular, we design a randomized RM mechanism with PoRM of $3$ for CAII.
A central issue in applying auction theory in practice is the problem of dealing with budget-constrained agents. A desirable goal in practice is to design incentive compatible, individually rational, and … A central issue in applying auction theory in practice is the problem of dealing with budget-constrained agents. A desirable goal in practice is to design incentive compatible, individually rational, and Pareto optimal auctions while respecting the budget constraints. Achieving this goal is particularly challenging in the presence of nontrivial combinatorial constraints over the set of feasible allocations. Toward this goal and motivated by AdWords auctions, we present an auction for polymatroidal environments satisfying these properties. Our auction employs a novel clinching technique with a clean geometric description and only needs an oracle access to the submodular function defining the polymatroid. As a result, this auction not only simplifies and generalizes all previous results, it applies to several new applications including AdWords Auctions, bandwidth markets, and video on demand. In particular, our characterization of the AdWords auction as polymatroidal constraints might be of independent interest. This allows us to design the first mechanism for Ad Auctions taking into account simultaneously budgets, multiple keywords and multiple slots. We show that it is impossible to extend this result to generic polyhedral constraints. This also implies an impossibility result for multiunit auctions with decreasing marginal utilities in the presence of budget constraints.
One of the major drawbacks of the celebrated VCG auction is its low (or zero) revenue even when the agents have high value for the goods and a {\em competitive} outcome could have generated significant revenue. A competitive outcome is one for which it is impossible for the seller and a subset of buyers to `block' the auction by defecting and negotiating an outcome with higher payoffs for themselves. This corresponds to the well-known concept of the {\em core} in cooperative game theory. In particular, VCG revenue is known not to be competitive when the goods being sold have complementarities. A bottleneck here is an impossibility result showing that there is no auction that simultaneously achieves competitive prices (a core outcome) and incentive-compatibility. In this paper we try to overcome this impossibility result by asking the following natural question: is it possible to design an incentive-compatible auction whose revenue is comparable (even if less) to a competitive outcome? Towards this, we define a notion of {\em core-competitive} auctions. We say that an incentive-compatible auction is $\alpha$-core-competitive if its revenue is at least a $1/\alpha$ fraction of the minimum revenue of a core outcome. We study the Text-and-Image setting, in which there is an ad slot that can be filled with either a single image ad or $k$ text ads. We design an $O(\ln \ln k)$-core-competitive randomized auction and an $O(\sqrt{\ln(k)})$-core-competitive deterministic auction for the Text-and-Image setting. We also show that both factors are tight.
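The benchmark above can be written down directly. Writing $\mathrm{Core}(v)$ for the set of core outcomes under reported valuations $v$ and $\mathrm{Rev}$ for revenue (notation ours), an incentive-compatible auction $A$ is $\alpha$-core-competitive when

$$\mathrm{Rev}_A(v) \;\ge\; \frac{1}{\alpha}\,\min_{o \in \mathrm{Core}(v)} \mathrm{Rev}(o) \qquad \text{for all } v.$$

The results above instantiate this with $\alpha = O(\ln\ln k)$ for the randomized auction and $\alpha = O(\sqrt{\ln k})$ for the deterministic one.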
In this paper we consider a mechanism design problem in the context of large-scale crowdsourcing markets such as Amazon's Mechanical Turk, ClickWorker, and CrowdFlower. In these markets, there is a requester who wants to hire workers to accomplish some tasks. Each worker is assumed to give some utility to the requester on getting hired. Moreover, each worker has a minimum cost that he wants to get paid for getting hired. This minimum cost is assumed to be private information of the workers. The question then is: if the requester has a limited budget, how should one design a direct revelation mechanism that picks the right set of workers to hire in order to maximize the requester's utility? We note that although previous work (Singer (2010), Chen et al. (2011)) has studied this problem, a crucial difference in which we deviate from earlier work is the notion of large-scale markets that we introduce in our model. Without the large market assumption, it is known that no mechanism can achieve a competitive ratio better than 0.414 and 0.5 for deterministic and randomized mechanisms respectively (while the best known deterministic and randomized mechanisms achieve approximation ratios of 0.292 and 0.33 respectively). In this paper, we design a budget-feasible mechanism for large markets that achieves a competitive ratio of 1 - 1/e ≃ 0.63. Our mechanism can be seen as a generalization of an alternate way to look at the proportional share mechanism, which is used in all the previous works so far on this problem. Interestingly, we can also show that our mechanism is optimal by showing that no truthful mechanism can achieve a factor better than 1 - 1/e, thus fully resolving this setting. Finally, we consider the more general case of submodular utility functions and give new and improved mechanisms for the case when the market is large.
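The two constraints driving this model are simple to state. With budget $B$, hired set $S$, payments $p_i$, private costs $c_i$, and utilities $u_i$ (our notation), budget feasibility requires $\sum_{i \in S} p_i \le B$, and the large-market assumption is, in spirit, $\max_i c_i / B \to 0$, i.e., no single worker's cost is a significant fraction of the requester's budget. The exact form of the condition in the paper may differ, but this is the regime in which the $1 - 1/e$ guarantee should be read.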
Constraints on agents' ability to pay play a major role in auction design for any setting where the magnitude of financial transactions is sufficiently large. Those constraints have traditionally been modeled in mechanism design as \emph{hard budgets}, i.e., the mechanism is not allowed to charge agents more than a certain amount. Yet, real auction systems (such as Google AdWords) allow more sophisticated constraints on agents' ability to pay, such as \emph{average budgets}. In this work, we investigate the design of Pareto optimal and incentive compatible auctions for agents with \emph{constrained quasi-linear utilities}, which captures more realistic models of liquidity constraints that the agents may have. Our result applies to a very general class of allocation constraints known as polymatroidal environments, encompassing many settings of interest such as multi-unit auctions, matching markets, video-on-demand and advertisement systems. Our design is based on Ausubel's \emph{clinching framework}. Incentive compatibility and feasibility with respect to ability-to-pay constraints are direct consequences of the clinching framework. Pareto-optimality, on the other hand, is considerably more challenging, since the no-trade condition that characterizes it depends not only on whether agents have their budgets exhausted or not, but also on the prices at which the goods are allocated. In order to get a handle on those prices, we introduce the novel concepts of dropping prices and saturation. These concepts lead to our main structural result: a characterization of the tight sets in the clinching auction outcome and their relation to dropping prices.
We revisit the classic problem of fair division from a mechanism design perspective and provide an elegant truthful mechanism that yields surprisingly good approximation guarantees for the widely used solution of Proportional Fairness. This solution, which is closely related to Nash bargaining and the competitive equilibrium, is known not to be implementable in a truthful fashion, which has been its main drawback. To alleviate this issue, we propose a new mechanism, which we call the Partial Allocation mechanism, that discards a carefully chosen fraction of the allocated resources in order to incentivize the agents to be truthful in reporting their valuations. This mechanism introduces a way to implement interesting truthful outcomes in settings where monetary payments are not an option.
Motivated by an application in kidney exchange, we study the following query-commit problem: we are given the set of vertices of a non-bipartite graph G. The set of edges in this graph is not known ahead of time. We can query any pair of vertices to determine if they are adjacent. If the queried edge exists, we are committed to match the two endpoints. Our objective is to maximize the size of the matching. This restriction in the amount of information available to the algorithm constrains us to implement myopic, greedy-like algorithms. A simple deterministic greedy algorithm achieves a factor of 1/2, which is tight for deterministic algorithms. An important open question in this direction is to give a randomized greedy algorithm that has a significantly better approximation factor. This question was first asked almost 20 years ago by Dyer and Frieze [9], who showed that the natural randomized strategy of picking edges uniformly at random doesn't help and has an approximation factor of 1/2 + o(1). They left it as an open question to devise a better randomized greedy algorithm. In subsequent work, Aronson, Dyer, Frieze, and Suen [2] gave a different randomized greedy algorithm and showed that it attains a factor of 0.5 + epsilon, where epsilon is 0.0000025. In this paper we propose and analyze a new randomized greedy algorithm for finding a large matching in a general graph and use it to solve the query-commit problem mentioned above. We show that our algorithm attains a factor of at least 0.56, a significant improvement over 0.50000025. We also show that no randomized algorithm can have an approximation factor better than 0.7916 for the query-commit problem. For another large and interesting class of randomized algorithms that we call vertex-iterative algorithms, we show that no vertex-iterative algorithm can have an approximation factor better than 0.75.
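As a point of reference for the query-commit model, here is a minimal sketch (in Python, with hypothetical names) of the baseline the abstract attributes to Dyer and Frieze: repeatedly query a uniformly random unqueried pair whose endpoints are both still unmatched, and commit whenever the queried edge exists. This is the 1/2 + o(1) strategy, not the 0.56-factor algorithm of the paper.

import random

def random_greedy_query_commit(vertices, has_edge):
    """Baseline query-commit matching: query pairs in a uniformly random
    order, skipping pairs with an already-matched endpoint; commit to any
    queried edge that exists. `has_edge(u, v)` is an oracle for the
    (unknown) edge set."""
    unmatched = set(vertices)
    matching = []
    # All unordered pairs that could ever be queried.
    candidates = [(u, v) for i, u in enumerate(vertices)
                  for v in vertices[i + 1:]]
    random.shuffle(candidates)          # uniform random query order
    for u, v in candidates:
        if u in unmatched and v in unmatched and has_edge(u, v):
            matching.append((u, v))     # committed on a positive answer
            unmatched.discard(u)
            unmatched.discard(v)
    return matching

# Toy usage on a 4-cycle given as an adjacency oracle (hypothetical data).
edges = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1)]}
print(random_greedy_query_commit([1, 2, 3, 4],
                                 lambda u, v: frozenset((u, v)) in edges))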
Auctions for perishable goods such as internet ad inventory need to make real-time allocation and pricing decisions as the supply of the good arrives in an online manner, without knowing the entire supply in advance. These allocation and pricing decisions get complicated when buyers have some global constraints. In this work, we consider a multi-unit model where buyers have global {\em budget} constraints, and the supply arrives in an online manner. Our main contribution is to show that for this setting there is an individually-rational, incentive-compatible and Pareto-optimal auction that allocates these units and calculates prices on the fly, without knowledge of the total supply. We do so by showing that the Adaptive Clinching Auction satisfies a {\em supply-monotonicity} property. We also analyze and discuss, using examples, how the insights gained by the allocation and payment rule can be applied to design better ad allocation heuristics in practice. Finally, while our main technical result concerns multi-unit supply, we propose a formal model of online supply that captures scenarios beyond multi-unit supply and has applications to sponsored search. We conjecture that our results for multi-unit auctions can be extended to these more general models.
How does one allocate a collection of resources to a set of strategic agents in a fair and efficient manner without using money? In many scenarios it is not feasible to use money to compensate agents for otherwise unsatisfactory outcomes. This paper studies this question, looking at both fairness and efficiency measures. We employ the proportionally fair solution, which is a well-known fairness concept for money-free settings. Although finding a proportionally fair solution is computationally tractable, it cannot be implemented in a truthful fashion. Consequently, we seek approximate solutions. We give several truthful mechanisms which achieve proportional fairness in an approximate sense. We use a strong notion of approximation, requiring the mechanism to give each agent a good approximation of its proportionally fair utility. In particular, one of our mechanisms provides a better and better approximation factor as the minimum demand for every good increases. A motivating example is provided by the massive privatization auction in the Czech Republic in the early 1990s. With regard to efficiency, prior work has shown a lower bound of 0.5 on the approximation factor of any swap-dictatorial mechanism approximating a social welfare measure, even for the case of two agents and multiple goods. We surpass this lower bound by designing a non-swap-dictatorial mechanism for this case. Interestingly, the new mechanism builds on the notion of proportional fairness.
We study the problem of designing mechanisms to allocate a heterogeneous set of divisible goods among a set of agents in a fair manner. We consider the well-known solution concept of proportional fairness, which has found applications in many real-world scenarios. Although finding a proportionally fair solution is computationally tractable, it cannot be implemented in a truthful manner. To overcome this, we give mechanisms which are truthful and achieve proportional fairness in an approximate manner. We use a strong notion of approximation, requiring the mechanism to give each agent a good approximation of its proportionally fair utility. A motivating example is provided by the massive privatization auction in the Czech Republic in the early 1990s.
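The proportional-fairness benchmark used in this and the surrounding abstracts has a standard convex-programming description (notation ours, for divisible goods with, say, additive valuations $v_i(x_i) = \sum_j v_{ij} x_{ij}$): the proportionally fair allocation maximizes the sum of logarithms of utilities,

$$\max_{x \ge 0} \; \sum_i \log v_i(x_i) \qquad \text{subject to} \qquad \sum_i x_{ij} \le 1 \;\; \text{for every good } j,$$

which is why it is computationally tractable yet, as these abstracts note, not implementable truthfully.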
We revisit the classic problem of fair division from a mechanism design perspective, using {\em Proportional Fairness} as a benchmark. In particular, we aim to allocate a collection of divisible items to a set of agents while incentivizing the agents to be truthful in reporting their valuations. For the very large class of homogeneous valuations, we design a truthful mechanism that provides {\em every agent} with at least a $1/e \approx 0.368$ fraction of her Proportionally Fair valuation. To complement this result, we show that no truthful mechanism can guarantee more than a $0.5$ fraction, even for the restricted class of additive linear valuations. We also propose another mechanism for additive linear valuations that works really well when every item is highly demanded. To guarantee truthfulness, our mechanisms discard a carefully chosen fraction of the allocated resources; we conclude by uncovering interesting connections between our mechanisms and known mechanisms that use money instead.
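One concrete way to instantiate the "discard a carefully chosen fraction" idea (a sketch in our notation, not necessarily the exact rule used in these papers): let $x^*$ be the proportionally fair allocation for all agents and $x^{-i}$ the one computed with agent $i$ removed, and give agent $i$ only the fraction

$$f_i \;=\; \prod_{j \ne i} \frac{v_j(x^*_j)}{v_j(x^{-i}_j)} \;\le\; 1$$

of her bundle $x^*_i$. Since $x^{-i}$ maximizes the others' Nash product, $f_i \le 1$; a scaling factor of this VCG-like flavor charges each agent (in discarded resources rather than money) for the externality she imposes on the others, which is the kind of structure that makes truthfulness possible without payments.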
In this paper, we present the first approximation algorithms for the problem of designing revenue-optimal Bayesian incentive compatible auctions when there are multiple (heterogeneous) items and when bidders have arbitrary demand and budget constraints (and additive valuations). Our mechanisms are surprisingly simple: We show that a sequential all-pay mechanism is a 4-approximation to the revenue of the optimal ex-interim truthful mechanism with a discrete type space for each bidder, where her valuations for different items can be correlated. We also show that a sequential posted-price mechanism is an O(1)-approximation to the revenue of the optimal ex-post truthful mechanism when the type space of each bidder is a product distribution that satisfies the standard hazard rate condition. We further show a logarithmic approximation when the hazard rate condition is removed, and complete the picture by showing that achieving a sub-logarithmic approximation, even for regular distributions and one bidder, requires pricing bundles of items. Our results are based on formulating novel LP relaxations for these problems, and developing generic rounding schemes from first principles.
Submodular functions are an important class of functions in combinatorial optimization which satisfy the natural property of decreasing marginal costs. The study of these functions has led to strong structural properties with applications in many areas. Recently, there has been significant interest in extending the theory of algorithms for optimizing combinatorial problems (such as the network design problem of computing a spanning tree) over submodular functions. Unfortunately, the lower bounds under the general class of submodular functions are known to be very high for many of the classical problems. In this paper, we introduce and study an important subclass of submodular functions, which we call discounted price functions. These functions are succinctly representable and generalize linear cost functions. We study the following fundamental combinatorial optimization problems: Edge Cover, Spanning Tree, Perfect Matching and Shortest Path, and obtain tight upper and lower bounds for these problems. The main technical contribution of this paper is designing novel adaptive greedy algorithms for the above problems. These algorithms greedily build the solution while rectifying mistakes made in the previous steps.
We compare the expected efficiency of revenue-maximizing (or {\em optimal}) mechanisms with that of efficiency-maximizing ones. We show that the efficiency of the revenue-maximizing mechanism for selling a single item with $k + \log_{e/(e-1)} k + 1$ bidders is at least as much as the efficiency of the efficiency-maximizing mechanism with $k$ bidders, when bidder valuations are drawn i.i.d. from a Monotone Hazard Rate distribution. Surprisingly, we also show that this bound is tight within a small additive constant of 5.7. In other words, $\Theta(\log k)$ extra bidders suffice for the revenue-maximizing mechanism to match the efficiency of the efficiency-maximizing mechanism, while $o(\log k)$ do not. This is in contrast to the result of Bulow and Klemperer comparing the revenue of the two mechanisms, where only one extra bidder suffices. More precisely, they show that the revenue of the efficiency-maximizing mechanism with $k+1$ bidders is no less than the revenue of the revenue-maximizing mechanism with $k$ bidders. We extend our result to the case of selling $t$ identical items and show that $2.2 \log k + t\,\Theta(\log \log k)$ extra bidders suffice for the revenue-maximizing mechanism to match the efficiency of the efficiency-maximizing mechanism. In order to prove our results, we give a classification of Monotone Hazard Rate (MHR) distributions and identify a family of MHR distributions such that, for each class in our classification, there is a member of this family that is pointwise lower than every distribution in that class. This lets us prove interesting structural theorems about distributions with Monotone Hazard Rate.
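To get a feel for the magnitude of the single-item bound, note that $\log_{e/(e-1)} k = \ln k / \ln\frac{e}{e-1} \approx \ln k / 0.459$; for example, with $k = 100$ the result asks for roughly $100 + 4.605/0.459 + 1 \approx 111$ bidders, i.e., about ten extra bidders, in line with the $\Theta(\log k)$ statement and in contrast to the single extra bidder in the Bulow-Klemperer revenue comparison.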
We consider prior-free auctions for revenue and welfare maximization when agents have a common budget. The abstract environments we consider are ones where there is a downward-closed and symmetric feasibility constraint on the probabilities of service of the agents. These environments include position auctions where slots with decreasing click-through rates are auctioned to advertisers. We generalize the envy-free benchmark from Hartline and Yan [2011] to settings with budgets and characterize the optimal envy-free outcomes for both welfare and revenue. We give prior-free mechanisms that approximate these benchmarks. A building block in our mechanism is a clinching auction for position auction environments. This auction is a generalization of the multi-unit clinching auction of Dobzinski et al. [2008] and a special case of the polyhedral clinching auction of Goel et al. [2012]. For welfare maximization, we show that this clinching auction is a good approximation to the envy-free optimal welfare for position auction environments. For profit maximization, we generalize the random sampling profit extraction auction from Fiat et al. [2002] for digital goods to give a 10.0-approximation to the envy-free optimal revenue in symmetric, downward-closed environments. Even without budgets, this revenue maximization question is of interest, and we obtain an improved approximation bound of 7.5 (from 30.4 by Ha and Hartline [2012]).
The theorem of R. Rado (12) to which I refer by the name 'Rado's theorem for matroids' gives necessary and sufficient conditions for a family of subsets of a finite set Y to have a transversal independent in a given matroid on Y. This theorem is of fundamental importance in both transversal theory and matroid theory (see, for example, (11)). In (3) J. Edmonds introduced and studied 'polymatroids' as a sort of continuous analogue of a matroid. I start this paper with a brief introduction to polymatroids, emphasizing the role of the 'ground-set rank function'. The main result is an analogue for polymatroids of Rado's theorem for matroids, which I call not unnaturally 'Rado's theorem for polymatroids'.
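For reference, the matroid version alluded to above states that a family $(A_1,\dots,A_n)$ of subsets of $Y$ has a transversal that is independent in a matroid on $Y$ with rank function $r$ if and only if

$$r\!\left(\bigcup_{i \in I} A_i\right) \;\ge\; |I| \qquad \text{for every } I \subseteq \{1,\dots,n\};$$

the polymatroid analogue developed in the paper replaces $r$ by the polymatroid's ground-set rank function (we state only the matroid form here).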
We study Bayesian mechanism design problems in settings where agents have budgets. Specifically, an agent's utility for an outcome is given by his value for the outcome minus any payment he makes to the mechanism, as long as the payment is below his budget, and is negative infinity otherwise. This discontinuity in the utility function presents a significant challenge in the design of good mechanisms, and classical mechanisms fail to work in settings with budgets. The goal of this paper is to develop general reductions from budget-constrained Bayesian mechanism design to unconstrained Bayesian mechanism design with small loss in performance. We consider this question in the context of the two most well-studied objectives in mechanism design---social welfare and revenue---and present constant-factor approximations in a number of settings. Some of our results extend to settings where budgets are private and agents need to be incentivized to reveal them truthfully.
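The discontinuity referred to above is literal. With value $v_i$, payment $p_i$, and budget $B_i$ (notation ours), the agent's utility is

$$u_i \;=\; \begin{cases} v_i - p_i & \text{if } p_i \le B_i,\\ -\infty & \text{otherwise,}\end{cases}$$

so mechanisms tailored to quasi-linear utilities (VCG and its relatives) may prescribe infeasible payments, which is what the reductions above are designed to avoid.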
In a sponsored search auction, the advertisement slots on a search result page are generally ordered by click-through rate. Bidders have a valuation, which is usually assumed to be linear in the click-through rate, and a budget constraint, and receive at most one slot per search result page (round). We study multi-round sponsored search auctions, where the different rounds are linked through the budget constraints of the bidders and the valuation of a bidder for all rounds is the sum of the valuations for the individual rounds. All mechanisms published so far either study one-round sponsored search auctions or the setting where every round has only one slot and all slots have the same click-through rate, which is identical to a multi-item auction. This paper contains the following three results: (1) We give the first mechanism for the multi-round sponsored search problem where different slots have different click-through rates. Our mechanism is incentive compatible in expectation, individually rational in expectation, Pareto optimal in expectation, and also ex-post Pareto optimal for each realized outcome. (2) Additionally, we study the combinatorial setting, where each bidder is only interested in a subset of the rounds. We give a deterministic, incentive compatible, individually rational, and Pareto optimal mechanism for the setting where all slots have the same click-through rate. (3) We present an impossibility result for auctions where bidders have diminishing marginal valuations. Specifically, we show that even for the multi-unit (one slot per round) setting there is no incentive compatible, individually rational, and Pareto optimal mechanism for private diminishing marginal valuations and public budgets.
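In this model (notation ours), a bidder $i$ assigned slot $s$ in round $t$ derives value $\alpha_s\, v_{i,t}$, where $\alpha_s$ is the slot's click-through rate; her total value is $\sum_t \alpha_{s_i(t)}\, v_{i,t}$, and the rounds are coupled only through the budget constraint $\sum_t p_{i,t} \le B_i$ on her payments.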
We consider a market with a set of unit demand buyers and a set of heterogeneous goods. We assume that the utility of each buyer i from a good j can be any arbitrary but continuous and decreasing function of the price of j and is not necessarily quasi-linear (note that we can model any smooth budget constraint in this form). We give a constructive proof for the existence of Walrasian equilibria and show that the set of Walrasian equilibria forms a complete lattice. Furthermore we show that the mechanism that uses the smallest Walrasian equilibrium for allocation/prices is incentive compatible. Note that since utilities are not quasi-linear in prices, VCG mechanisms do not work. Our main contribution is an inductive characterization of the prices/allocations of Walrasian equilibria. Our constructive proof for the existence is also based on the same induction and does not rely on any fixed point theorem. As such, it provides new insight into the structure of the equilibria. We reveal striking similarities between the prices at the lowest Walrasian equilibrium and payments in VCG mechanisms.
We study the online stochastic bipartite matching problem, in a form motivated by display ad allocation on the Internet. In the online, but adversarial case, the celebrated result of Karp, Vazirani and Vazirani gives an approximation ratio of 1 − 1/e ≈ 0.632, a very familiar bound that holds for many online problems; further, the bound is tight in this case. In the online, stochastic case when nodes are drawn repeatedly from a known distribution, the greedy algorithm matches this approximation ratio, but still, no algorithm is known that beats the 1 − 1/e bound. Our main result is a 0.67-approximation online algorithm for stochastic bipartite matching, breaking this 1 − 1/e barrier. Furthermore, we show that no online algorithm can produce a 1 − ε approximation for an arbitrarily small ε for this problem. Our algorithms are based on computing an optimal offline solution to the expected instance, and using this solution as a guideline in the process of online allocation. We employ a novel application of the idea of the power of two choices from load balancing: we compute two disjoint solutions to the expected instance, and use both of them in the online algorithm in a prescribed preference order. To identify these two disjoint solutions, we solve a max flow problem in a boosted flow graph, and then carefully decompose this maximum flow to two edge-disjoint (near-)matchings. In addition to guiding the online decision making, these two offline solutions are used to characterize an upper bound for the optimum in any scenario. This is done by identifying a cut whose value we can bound under the arrival distribution. At the end, we discuss extensions of our results to more general bipartite allocations that are important in a display ad application.
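As a rough illustration of the online phase described in the abstract above, here is a minimal Python sketch that assumes the two edge-disjoint offline (near-)matchings have already been extracted from the boosted flow graph; the names primary, secondary and the dictionary-based interface are illustrative and not taken from the paper.

```python
import random

def online_match(arrivals, primary, secondary, offline_nodes):
    """Greedy online phase guided by two precomputed disjoint (near-)matchings.

    arrivals      : iterable of arriving node types (drawn i.i.d. in the stochastic model)
    primary       : dict type -> preferred offline neighbor (first offline solution)
    secondary     : dict type -> backup offline neighbor (second offline solution)
    offline_nodes : offline nodes initially available for matching
    """
    available = set(offline_nodes)
    matching = []
    for v in arrivals:
        # Try the first offline solution, then the second ("power of two choices").
        for guide in (primary, secondary):
            u = guide.get(v)
            if u is not None and u in available:
                available.remove(u)
                matching.append((u, v))
                break
    return matching

# Tiny illustrative run with made-up data.
primary = {"a": 1, "b": 2}
secondary = {"a": 2, "b": 3}
arrivals = [random.choice(["a", "b"]) for _ in range(4)]
print(online_match(arrivals, primary, secondary, {1, 2, 3}))
```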
Under the pari-mutuel system of betting on horse races the final track's odds are in some sense a consensus of the 'subjective odds' of the individual bettors weighted by the amounts of their bets. The properties which this consensus must possess are formulated, and it is proved that there always exists a unique set of odds having the required properties.
Constraints on agents' ability to pay play a major role in auction design for any setting where the magnitude of financial transactions is sufficiently large. Those constraints have been traditionally modeled in mechanism design as hard budgets, i.e., the mechanism is not allowed to charge agents more than a certain amount. Yet, real auction systems (such as Google AdWords) allow more sophisticated constraints on agents' ability to pay, such as average budgets. In this work, we investigate the design of Pareto optimal and incentive compatible auctions for agents with constrained quasi-linear utilities, which capture more realistic models of liquidity constraints that the agents may have. Our result applies to a very general class of allocation constraints known as polymatroidal environments, encompassing many settings of interest such as multi-unit auctions, matching markets, video-on-demand, and advertisement systems.
Algorithmic pricing is the computational problem that sellers (e.g., in supermarkets) face when trying to set prices for their items to maximize their profit in the presence of a known demand. Guruswami et al. (SODA, 2005) proposed this problem and gave logarithmic approximations (in the number of consumers) for the unit-demand and single-parameter cases where there is a specific set of consumers and their valuations for bundles are known precisely. Subsequently several versions of the problem have been shown to have poly-logarithmic inapproximability. This problem has direct ties to the important open question of better understanding the Bayesian optimal mechanism in multi-parameter agent settings; however, for this purpose approximation factors logarithmic in the number of agents are inadequate. It is therefore of vital interest to consider special cases where constant approximations are possible. We consider the unit-demand variant of this pricing problem. Here a consumer has a valuation for each different item and their value for a set of items is simply the maximum value they have for any item in the set. Instead of considering a set of consumers with precisely known preferences, like the prior algorithmic pricing literature, we assume that the preferences of the consumers are drawn from a distribution. This is the standard assumption in economics; furthermore, the setting of a specific set of customers with specific preferences, which is employed in all of the prior work in algorithmic pricing, is a special case of this general Bayesian pricing problem, where there is a discrete Bayesian distribution for preferences specified by picking one consumer uniformly from the given set of consumers. Notice that the distribution over the valuations for the individual items that this generates is obviously correlated. Our work complements these existing works by considering the case where the consumer's valuations for the different items are independent random variables. Our main result is a constant approximation algorithm for this problem that makes use of an interesting connection between this problem and the concept of virtual valuations from the single-parameter Bayesian optimal mechanism design literature.
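The "virtual valuations" referred to above are the standard single-parameter notion; as a reminder (not a reconstruction of the paper's algorithm), the virtual value of a value v drawn from distribution F with density f is phi(v) = v - (1 - F(v)) / f(v). A tiny sketch:

```python
def virtual_value(v, cdf, pdf):
    """Standard single-parameter virtual valuation: phi(v) = v - (1 - F(v)) / f(v)."""
    return v - (1.0 - cdf(v)) / pdf(v)

# Example: values uniform on [0, 1], where F(v) = v and f(v) = 1,
# so phi(v) = 2v - 1 and the monopoly reserve solves phi(v) = 0 at v = 1/2.
uniform_cdf = lambda v: v
uniform_pdf = lambda v: 1.0
print(virtual_value(0.75, uniform_cdf, uniform_pdf))  # 0.5
```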
We consider the following randomized algorithm for finding a matching M in an arbitrary graph G = (V, E). Repeatedly, choose a random vertex u, then a random neighbour v of u. Add edge {u, v} to M and delete vertices u, v from G along with any vertices that become isolated. Our main result is that there exists a positive constant ϵ such that the expected ratio of the size of the matching produced to the size of the largest matching in G is at least 0.5 + ϵ. We obtain stronger results for sparse graphs and trees and consider extensions to hypergraphs.
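The procedure is fully specified in the abstract above, so a direct Python sketch is possible; the adjacency-dictionary representation is an assumption made here purely for illustration.

```python
import random

def randomized_greedy_matching(adj):
    """Randomized greedy matching, following the procedure described above.

    adj : dict mapping each vertex to the set of its neighbours
          (assumed symmetric, i.e. an undirected graph).
    """
    # Work on a copy; drop vertices that are isolated to begin with.
    adj = {u: set(nbrs) for u, nbrs in adj.items() if nbrs}
    matching = []

    def delete(v):
        # Remove v and any neighbour that becomes isolated as a result.
        for w in adj.pop(v, set()):
            adj[w].discard(v)
            if not adj[w]:
                del adj[w]

    while adj:
        u = random.choice(list(adj))        # random remaining vertex
        v = random.choice(list(adj[u]))     # random neighbour of u
        matching.append((u, v))
        delete(u)
        delete(v)
    return matching

# Example: a path on four vertices.
print(randomized_greedy_matching({1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}))
```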
We consider the classical mathematical economics problem of Bayesian optimal mechanism design, where a principal aims to optimize expected revenue when allocating resources to self-interested agents with preferences drawn from a known distribution. In single-parameter settings (i.e., where each agent's preference is given by a single private value for being served and zero for not being served) this problem is solved [Myerson '81]. Unfortunately, these single-parameter optimal mechanisms are impractical and rarely employed [Ausubel and Milgrom '06], and furthermore the underlying economic theory fails to generalize to the important, relevant, and unsolved multi-dimensional setting (i.e., where each agent's preference is given by multiple values for each of the multiple services available) [Manelli and Vincent '07]. In contrast to the theory of optimal mechanisms we develop a theory of sequential posted price mechanisms, where agents in sequence are offered take-it-or-leave-it prices. These mechanisms are approximately optimal in single-dimensional settings, and avoid many of the properties that make optimal mechanisms impractical. Furthermore, these mechanisms generalize naturally to give the first known approximations to the elusive optimal multi-dimensional mechanism design problem. In particular, we solve multi-dimensional multi-unit auction problems and generalizations to matroid feasibility constraints. The constant approximations we obtain range from 1.5 to 8. For all but one case, our posted price sequences can be computed in polynomial time.
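A schematic of the sequential posted-price idea described above, in Python; the take-it-or-leave-it prices are treated as given inputs (computing approximately optimal prices from the priors is the substance of the paper and is not reproduced here), and a simple unit-supply cap stands in for the paper's more general matroid feasibility constraints.

```python
def sequential_posted_price(agents, prices, supply):
    """Offer take-it-or-leave-it prices to agents in sequence.

    agents : list of (name, value) pairs in the order they are approached
    prices : dict name -> posted price for that agent
    supply : number of units available (a simple cap, used here for illustration)
    """
    allocation, revenue = [], 0.0
    for name, value in agents:
        if supply == 0:
            break
        p = prices[name]
        if value >= p:          # the agent accepts iff her value meets the posted price
            allocation.append(name)
            revenue += p
            supply -= 1
    return allocation, revenue

print(sequential_posted_price([("a", 0.9), ("b", 0.4), ("c", 0.7)],
                              {"a": 0.5, "b": 0.5, "c": 0.5}, supply=2))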
Proceedings of the 2011 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA). On the Approximability of Budget Feasible Mechanisms. Ning Chen, Nick Gravin, and Pinyan Lu. pp. 685-699. DOI: https://doi.org/10.1137/1.9781611973082.54. Abstract: Budget feasible mechanisms, recently initiated by Singer (FOCS 2010), extend algorithmic mechanism design problems to a realistic setting with a budget constraint. We consider the problem of designing truthful budget feasible mechanisms for monotone submodular functions: We give a randomized mechanism with an approximation ratio of 7.91 (improving on the previous best-known result 233.83), and a deterministic mechanism with an approximation ratio of 8.34. We also study the knapsack problem, which is a special submodular function, give a 2 + √2 approximation deterministic mechanism (improving on the previous best-known result 5), and a 3 approximation randomized mechanism. We provide similar results for an extended knapsack problem with heterogeneous items, where items are divided into groups and one can pick at most one item from each group. Finally we show a lower bound of 1 + √2 for the approximation ratio of deterministic mechanisms and 2 for randomized mechanisms for knapsack, as well as the general monotone submodular functions. Our lower bounds are unconditional, and do not rely on any computational or complexity assumptions. Published: 2011. ISBN: 978-0-89871-993-2. eISBN: 978-1-61197-308-2.
In settings where players have limited access to liquidity, represented in the form of budget constraints, efficiency maximization has proven to be a challenging goal. In particular, the social welfare cannot be approximated by a better factor than the number of players. Therefore, the literature has mainly resorted to Pareto-efficiency as a way to achieve efficiency in such settings. While successful in some important scenarios, in many settings it is known either that there is exactly one incentive-compatible auction that always outputs a Pareto-efficient solution, or that no truthful mechanism can always guarantee a Pareto-efficient outcome. Traditionally, impossibility results can be avoided by considering approximations. However, Pareto-efficiency is a binary property (it is either satisfied or not), which does not allow for approximations. In this paper we propose a new notion of efficiency, called liquid welfare. This is the maximum amount of revenue an omniscient seller would be able to extract from a certain instance. We explain the intuition behind this objective function and show that it can be 2-approximated by two different auctions. Moreover, we show that no truthful algorithm can guarantee an approximation factor better than 4/3 with respect to the liquid welfare, and provide a truthful auction that attains this bound in a special case. Importantly, the liquid welfare benchmark also overcomes impossibilities for some settings. While it is impossible to design Pareto-efficient auctions for multi-unit auctions where players have decreasing marginal values, we give a deterministic O(log n)-approximation for the liquid welfare in this setting.
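For concreteness, the liquid welfare of an allocation is standardly computed as each player's value for her allocation capped at her budget, summed over players; a one-line sketch with illustrative inputs:

```python
def liquid_welfare(values, budgets):
    """Liquid welfare: sum over players of min(budget_i, value_i of the allocation)."""
    return sum(min(b, v) for v, b in zip(values, budgets))

# Two players: the one whose value exceeds her budget contributes only the budget.
print(liquid_welfare(values=[10.0, 3.0], budgets=[4.0, 5.0]))  # 4.0 + 3.0 = 7.0
```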
We introduce several generalizations of classical computer science problems obtained by replacing simpler objective functions with general submodular functions. The new problems include submodular load balancing, which generalizes load balancing or minimum-makespan scheduling, submodular sparsest cut and submodular balanced cut, which generalize their respective graph cut problems, as well as submodular function minimization with a cardinality lower bound. We establish upper and lower bounds for the approximability of these problems with a polynomial number of queries to a function-value oracle. The approximation guarantees for most of our algorithms are of the order of sqrt(n/ln n). We show that this is the inherent difficulty of the problems by proving matching lower bounds. We also give an improved lower bound for the problem of approximately learning a monotone submodular function. In addition, we present an algorithm for approximately learning submodular functions with special structure, whose guarantee is close to the lower bound. Although quite restrictive, the class of functions with this structure includes the ones that are used for lower bounds both by us and in previous work. This demonstrates that if there are significantly stronger lower bounds for this problem, they rely on more general submodular functions.
Community sensing, fusing information from populations of privately-held sensors, presents a great opportunity to create efficient and cost-effective sensing applications. Yet, reasonable privacy concerns often limit the access to such data streams. How should systems valuate and negotiate access to private information, for example in return for monetary incentives? How should they optimally choose the participants from a large population of strategic users with privacy concerns, and compensate them for information shared? In this paper, we address these questions and present a novel mechanism, SeqTGreedy, for budgeted recruitment of participants in community sensing. We first show that privacy tradeoffs in community sensing can be cast as an adaptive submodular optimization problem. We then design a budget feasible, incentive compatible (truthful) mechanism for adaptive submodular maximization, which achieves near-optimal utility for a large class of sensing applications. This mechanism is general, and of independent interest. We demonstrate the effectiveness of our approach in a case study of air quality monitoring, using data collected from the Mechanical Turk platform. Compared to the state of the art, our approach achieves up to 30% reduction in cost in order to achieve a desired level of utility.
The Generalized Second Price (GSP) auction is the primary auction used for monetizing the use of the Internet. It is well-known that truthtelling is not a dominant strategy in this auction and that inefficient equilibria can arise. In this paper we study the space of equilibria in GSP, and quantify the efficiency loss that can arise in equilibria under a wide range of sources of uncertainty, as well as in the full information setting. The traditional Bayesian game models uncertainty in the valuations (types) of the participants. The Generalized Second Price (GSP) auction gives rise to a further form of uncertainty: the selection of quality factors resulting in uncertainty about the behavior of the underlying ad allocation algorithm. The bounds we obtain apply to both forms of uncertainty, and are robust in the sense that they apply under various perturbations of the solution concept, extending to models with information asymmetries and bounded rationality in the form of learning strategies. We present a constant bound (2.927) on the factor of the efficiency loss (price of anarchy) of the corresponding game for the Bayesian model of partial information about other participants and about ad quality factors. For the full information setting, we prove a surprisingly low upper bound of 1.282 on the price of anarchy over pure Nash equilibria, nearly matching a lower bound of 1.259 for the case of three advertisers. Further, we do not require that the system reaches equilibrium, and give similarly low bounds also on the quality degradation for any no-regret learning outcome. Our conclusion is that the number of advertisers in the auction has almost no impact on the price of anarchy, and that the efficiency of GSP is very robust with respect to the belief and rationality assumptions imposed on the participants.
We study the design of truthful mechanisms for set systems, i.e., scenarios where a customer needs to hire a team of agents to perform a complex task. In this setting, frugality [Archer & Tardos '02] provides a measure to evaluate the "cost of truthfulness", that is, the overpayment of a truthful mechanism relative to the "fair" payment. We propose a uniform scheme for designing frugal truthful mechanisms for general set systems. Our scheme is based on scaling the agents' bids using the eigenvector of a matrix that encodes the interdependencies between the agents. We demonstrate that the r-out-of-k-system mechanism and the √-mechanism for buying a path in a graph [Karlin et al. '05] can be viewed as instantiations of our scheme. We then apply our scheme to two other classes of set systems, namely, vertex cover systems and k-path systems, in which a customer needs to purchase k edge-disjoint source-sink paths. For both settings, we bound the frugality of our mechanism in terms of the largest eigenvalue of the respective interdependency matrix. We show that our mechanism is optimal for a large subclass of vertex cover systems satisfying a simple local sparsity condition. For k-path systems, while our mechanism is within a factor of k + 1 from optimal, we show that it is, in fact, optimal, when one uses a modified definition of frugality proposed in [Elkind et al. '07]. Our lower bound argument combines spectral techniques and Young's inequality, and is applicable to all set systems. As both r-out-of-k systems and single path systems can be viewed as special cases of k-path systems, our result improves the lower bounds of [Karlin et al. '05] and answers several open questions proposed in that paper.
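A sketch of the scaling idea described above: compute the dominant eigenvector of an interdependency matrix by power iteration and use its entries as per-agent bid multipliers. The matrix below is a placeholder; the paper's actual construction of the matrix and the full mechanism (winner selection and payments) are not reproduced here.

```python
def dominant_eigenvector(M, iters=200):
    """Power iteration for the dominant eigenvector of a nonnegative square matrix M."""
    n = len(M)
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = max(abs(v) for v in y) or 1.0
        x = [v / norm for v in y]
    return x

# Illustrative use: scale each agent's bid by the corresponding eigenvector entry
# before comparing bids (the core idea only, not the full mechanism).
M = [[0.0, 1.0, 2.0],
     [1.0, 0.0, 1.0],
     [2.0, 1.0, 0.0]]
multipliers = dominant_eigenvector(M)
bids = [3.0, 5.0, 4.0]
scaled = [b * m for b, m in zip(bids, multipliers)]
print(multipliers, scaled)
```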
A Novel Class of Robust and Fast Algorithms for Online Allocation Problems. A central problem in operations research is allocating limited resources sequentially to maximize cumulative rewards. Applications abound and include network revenue management and internet advertising among many others. Existing data-driven algorithms are tailored for convex settings with either adversarial or stochastic inputs. Many modern applications of online allocation problems, however, are nonconvex. Furthermore, algorithms for adversarial inputs may be too conservative in practice, whereas algorithms for stochastic inputs can perform poorly when the model is misspecified. In the paper “The Best of Many Worlds: Dual Mirror Descent for Online Allocation Problems,” Balseiro, Lu, and Mirrokni present a novel class of algorithms for nonconvex online allocation problems that attain good performance simultaneously in stochastic and adversarial input models and also in various nonstationary settings. The resulting algorithms are simple, fast, and robust to noise and corruption in the observations, in contrast to existing methods from the literature.
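A compact sketch in the spirit of the dual-descent algorithms summarized above, specialized to a single resource, linear rewards, and a plain projected-subgradient update on the dual variable (the paper's mirror-descent framework and guarantees are more general; all names here are illustrative):

```python
def dual_online_allocation(requests, total_budget, horizon, step=0.1):
    """Process requests online, pricing the resource with a dual variable mu.

    requests : iterable of (reward, cost) pairs arriving one at a time
    total_budget, horizon : budget and number of rounds, giving the target rate rho
    """
    rho = total_budget / horizon       # target per-round consumption
    mu, remaining = 0.0, total_budget
    accepted = []
    for reward, cost in requests:
        # Accept iff the reward beats the current resource price and budget allows.
        take = reward - mu * cost > 0 and cost <= remaining
        if take:
            remaining -= cost
            accepted.append((reward, cost))
        spent = cost if take else 0.0
        # Dual (sub)gradient step: raise the price when spending above the target rate.
        mu = max(0.0, mu + step * (spent - rho))
    return accepted

print(dual_online_allocation([(1.0, 0.5), (0.2, 0.4), (0.9, 0.5), (0.8, 0.6)],
                             total_budget=1.0, horizon=4))
```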
Budget constraints are ubiquitous in online advertisement auctions. To manage these constraints and smooth out the expenditure across auctions, the bidders (or the platform on behalf of them) often employ pacing: each bidder is assigned a multiplier between 0 and 1, and her bid on each item is multiplicatively scaled down by the multiplier. This naturally gives rise to a game in which each bidder strategically selects a multiplier. The appropriate notion of equilibrium in this game is the pacing equilibrium.
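The pacing rule itself is simple to state in code; a minimal sketch with an illustrative multiplier:

```python
def paced_bid(value, alpha):
    """Multiplicative pacing: submit alpha * value, with alpha in [0, 1]."""
    assert 0.0 <= alpha <= 1.0
    return alpha * value

# A bidder with pacing multiplier 0.6 bids 0.6 * v on every item.
print([paced_bid(v, 0.6) for v in [10.0, 4.0, 7.5]])
```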
In classic auction theory, reserve prices are known to be effective for improving revenue for the auctioneer against quasi-linear utility maximizing bidders. The introduction of reserve prices, however, usually does not help improve the total welfare of the auctioneer and the bidders. In this paper, we focus on value maximizing bidders with return on spend constraints -- a paradigm that has drawn considerable attention recently as more advertisers adopt auto-bidding algorithms in advertising platforms -- and show that the introduction of reserve prices has a novel impact on the market. Namely, by choosing reserve prices appropriately the auctioneer can improve not only the total revenue but also the total welfare. Our results also demonstrate that reserve prices are robust to bidder types, i.e., reserve prices work well for different bidder types, such as value maximizers and utility maximizers, without using bidder type information. We generalize these results for a variety of auction mechanisms such as VCG, GSP, and first-price auctions. Moreover, we show how to combine these results with additive boosts to improve the welfare of the outcomes of the auction further. Finally, we complement our theoretical observations with an empirical study confirming the effectiveness of these ideas using data from online advertising auctions.
Budgets play a significant role in ad markets that implement sequential auctions such as those hosted by internet companies. In “Multiplicative Pacing Equilibria in Auction Markets,” the authors look at pacing in an ad marketplace using the lens of game theory. The goal is understanding how bids must be shaded to maximize advertiser welfare at equilibrium. Motivated by the real-world auction mechanism, they construct a game where advertisers in the auctions choose a multiplicative factor not larger than 1 to possibly reduce their bids and best respond to the other advertisers. The article studies the theoretical properties of the game such as existence and uniqueness of equilibria, offers an exact algorithm to compute them, connects the game to well-known abstractions such as Fisher markets, and performs a computational study with real-world-inspired instances. The main insights are that the solutions to the studied game can be used to improve the outcomes achieved by a closer-to-reality dynamic pacing algorithm and that buyers do not have an incentive to misreport bids or budgets when there are enough participants in the auction.
Auto-bidding is an area of increasing importance in the domain of online advertising. We study the problem of designing auctions in an auto-bidding setting with the goal of maximizing welfare at system equilibrium. Previous results showed that the price of anarchy (PoA) under VCG is 2 and also that this is tight even with two bidders. This raises an interesting question as to whether VCG yields the best efficiency in this setting, or whether the PoA can be improved upon. We present a prior-free randomized auction in which the PoA is approx. 1.896 for the case of two bidders, proving that one can achieve an efficiency strictly better than that under VCG in this setting. We also provide a stark impossibility result for the problem in general as the number of bidders increases: we show that no (randomized) anonymous truthful auction can have a PoA strictly better than 2 asymptotically as the number of bidders per query increases. While it was shown in previous work that one can improve on the PoA of 2 if the auction is allowed to use the bidders' values for the queries in addition to the bidders' bids, we note that our randomized auction is prior-free and does not use such additional information; our impossibility result also applies to auctions without additional value information.
The principal problem in algorithmic mechanism design is in merging the incentive constraints imposed by selfish behavior with the algorithmic constraints imposed by computational intractability. This field is motivated by the observation that the preeminent approach for designing incentive compatible mechanisms, namely that of Vickrey, Clarke, and Groves, and the central approach for circumventing computational obstacles, that of approximation algorithms, are fundamentally incompatible: natural applications of the VCG approach to an approximation algorithm fail to yield an incentive compatible mechanism. We consider relaxing the desideratum of (ex post) incentive compatibility (IC) to Bayesian incentive compatibility (BIC), where truthtelling is a Bayes-Nash equilibrium (the standard notion of incentive compatibility in economics). For welfare maximization in single-parameter agent settings, we give a general black-box reduction that turns any approximation algorithm into a Bayesian incentive compatible mechanism with essentially the same approximation factor.
Budget feasible mechanism design studies procurement combinatorial auctions where the sellers have private costs to produce items, and the buyer (auctioneer) aims to maximize a social valuation function on subsets of items, under the budget constraint on the total payment. One of the most important questions in the field is "which valuation domains admit truthful budget feasible mechanisms with 'small' approximations (compared to the social optimum)?" Singer showed that additive and submodular functions have such constant approximations. Recently, Dobzinski, Papadimitriou, and Singer gave an O(log^2 n)-approximation mechanism for subadditive functions; they also remarked that: "A fundamental question is whether, regardless of computational constraints, a constant-factor budget feasible mechanism exists for subadditive functions." We address this question from two viewpoints: prior-free worst case analysis and Bayesian analysis. For the prior-free framework, we use an LP that describes the fractional cover of the valuation function; it is also connected to the concept of approximate core in cooperative game theory. We provide an O(I)-approximation mechanism for subadditive functions, via the worst case integrality gap I of this LP. This implies an O(log n)-approximation for subadditive valuations, an O(1)-approximation for XOS valuations, and for valuations with a constant I. XOS valuations are an important class of functions that lie between submodular and subadditive classes. We give another polynomial time O(log n/loglog n) sub-logarithmic approximation mechanism for subadditive valuations. For the Bayesian framework, we provide a constant approximation mechanism for all subadditive functions, using the above prior-free mechanism for XOS valuations as a subroutine. Our mechanism allows correlations in the distribution of private information and is universally truthful.
The internet advertising market is a multibillion dollar industry in which advertisers buy thousands of ad placements every day by repeatedly participating in auctions. An important and ubiquitous feature of these auctions is the presence of campaign budgets, which specify the maximum amount the advertisers are willing to pay over a specified time period. In this paper, we present a new model to study the equilibrium bidding strategies in standard auctions, a large class of auctions that includes first and second price auctions, for advertisers who satisfy budget constraints on average. Our model dispenses with the common yet unrealistic assumption that advertisers' values are independent and instead assumes a contextual model in which advertisers determine their values using a common feature vector. We show the existence of a natural value pacing-based Bayes-Nash equilibrium under very mild assumptions. Furthermore, we prove a revenue equivalence showing that all standard auctions yield the same revenue even in the presence of budget constraints. Leveraging this equivalence, we prove price of anarchy bounds for liquid welfare and structural properties of pacing-based equilibria that hold for all standard auctions. In recent years, the internet advertising market has adopted first price auctions as the preferred paradigm for selling advertising slots. Our work, thus, takes an important step toward understanding the implications of the shift to first price auctions in internet advertising markets by studying how the choice of the selling mechanism impacts revenues, welfare, and advertisers' bidding strategies. The online appendix is available at https://doi.org/10.1287/mnsc.2023.4719.
In online advertising markets, budget-constrained advertisers acquire ad placements through repeated bidding in auctions on various platforms. We present a strategy for bidding optimally in a set of auctions that may or may not be incentive-compatible in the presence of budget constraints. Our strategy maximizes the expected total utility across auctions while satisfying the advertiser's budget constraints in expectation. Additionally, we investigate the online setting where the advertiser must submit bids across platforms while learning about other bidders' bids over time. Our algorithm has O(T^{3/4}) regret under the full-information setting. Finally, we demonstrate that our algorithms have superior cumulative regret on both synthetic and real-world datasets of ad placement auctions, compared to existing adaptive pacing algorithms.
Digital advertising constitutes one of the main revenue sources for online platforms. In recent years, some advertisers have tended to adopt auto-bidding tools to facilitate advertising performance optimization, making the classical utility-maximizer model in auction theory fit poorly. Some recent studies proposed a new model, called value maximizer, for auto-bidding advertisers with return-on-investment (ROI) constraints. However, the model of either utility maximizer or value maximizer can only characterize a subset of the advertisers in real-world advertising platforms. In a mixed environment where utility maximizers and value maximizers coexist, truthful ad auction design is challenging since bidders could manipulate both their values and affiliated classes, leading to a multi-parameter mechanism design problem. In this work, we address this issue by proposing a payment rule which combines the corresponding ones in classical VCG and GSP mechanisms in a novel way. Based on this payment rule, we propose a truthful auction mechanism with an approximation ratio of 2 on social welfare, which is close to the lower bound of at least 5/4 that we also prove. The designed auction mechanism is a generalization of VCG for utility maximizers and GSP for value maximizers.
We study a game played between advertisers in an online ad platform. The platform sells ad impressions by first-price auction and provides autobidding algorithms that optimize bids on each advertiser's behalf, subject to advertiser constraints such as budgets. Crucially, these constraints are strategically chosen by the advertisers. The chosen constraints define an "inner" budget-pacing game for the autobidders. Advertiser payoffs in the constraint-choosing "metagame" are determined by the equilibrium reached by the autobidders. Advertiser preferences can be more general than what is implied by their constraints: we assume only that they have weakly decreasing marginal value for clicks and weakly increasing marginal disutility for spending money. Nevertheless, we show that at any pure Nash equilibrium of the metagame, the resulting allocation obtains at least half of the liquid welfare of any allocation and this bound is tight. We also obtain a 4-approximation for any mixed Nash equilibrium or Bayes-Nash equilibria. These results rely on the power to declare budgets: if advertisers can specify only a (linear) value per click or an ROI target but not a budget constraint, the approximation factor at equilibrium can be as bad as linear in the number of advertisers.
In “Optimal No-Regret Learning in Repeated First-Price Auctions,” Y. Han, T. Weissman, and Z. Zhou study online learning in repeated first-price auctions where a bidder, only observing the winning bid at the end of each auction, learns to adaptively bid to maximize her cumulative payoff. To achieve this goal, the bidder faces censored feedback: if she wins the bid, then she is not able to observe the highest bid of the other bidders, which is assumed to be drawn i.i.d. from an unknown distribution. In this paper, they develop the first learning algorithm that achieves a near-optimal regret bound, by exploiting two structural properties of first-price auctions, that is, the specific feedback structure and payoff function.
In the query-commit problem we are given a graph where edges have distinct probabilities of existing. It is possible to query the edges of the graph, and if the queried edge exists then its endpoints are irrevocably matched. The goal is to find a querying strategy which maximizes the expected size of the matching obtained. This stochastic matching setup is motivated by applications in kidney exchanges and online dating. In this paper we address the query-commit problem from both theoretical and experimental perspectives. First, we show that a simple class of edges can be queried without compromising the optimality of the strategy. This property is then used to obtain in polynomial time an optimal querying strategy when the input graph is sparse. Next we turn our attention to the kidney exchange application, focusing on instances modeled over real data from existing exchange programs. We prove that, as the number of nodes grows, almost every instance admits a strategy which matches almost all nodes. This result supports the intuition that more exchanges are possible on a larger pool of patients/donors and gives theoretical justification for unifying the existing exchange programs. Finally, we evaluate experimentally different querying strategies over kidney exchange instances. We show that even very simple heuristics perform fairly well, being within 1.5% of an optimal clairvoyant strategy that knows in advance the edges in the graph. In such a time-sensitive application, this result motivates the use of committing strategies.
In set-system auctions, there are several overlapping teams of agents, and a task that can be completed by any of these teams. The auctioneer's goal is to hire a team and pay as little as possible. Examples of this setting include shortest-path auctions and vertex-cover auctions. Recently, Karlin, Kempe and Tamir introduced a new definition of frugality ratio for this problem. Informally, the frugality ratio is the ratio of the total payment of a mechanism to a desired payment bound. The ratio captures the extent to which the mechanism overpays, relative to perceived fair cost in a truthful auction. In this paper, we propose a new truthful polynomial-time auction for the vertex cover problem and bound its frugality ratio. We show that the solution quality is within a constant factor of optimal and the frugality ratio is within a constant factor of the best possible worst-case bound; this is the first auction for this problem to have these properties. Moreover, we show how to transform any truthful auction into a frugal one while preserving the approximation ratio. Also, we consider two natural modifications of the definition of Karlin et al., and we analyse the properties of the resulting payment bounds, such as monotonicity, computational hardness, and robustness with respect to the draw-resolution rule. We study the relationships between the different payment bounds, both for general set systems and for specific set-system auctions, such as path auctions and vertex-cover auctions. We use these new definitions in the proof of our main result for vertex-cover auctions via a boot-strapping technique, which may be of independent interest.
There is only one technique for prior-free optimal mechanism design that generalizes beyond the structurally benevolent setting of digital goods. This technique uses random sampling to estimate the distribution of agent values and then employs the Bayesian optimal mechanism for this estimated distribution on the remaining players. Though quite general, even for digital goods, this random sampling auction has a complicated analysis and is known to be suboptimal. To overcome these issues we generalize the consensus and profit extraction techniques from Goldberg and Hartline [2003] to structurally rich environments that include, for example, single-minded combinatorial auctions.
We study truthful mechanisms for hiring a team of agents in three classes of set systems: Vertex Cover auctions, k-flow auctions, and cut auctions. For Vertex Cover auctions, the vertices are owned by selfish and rational agents, and the auctioneer wants to purchase a vertex cover from them. For k-flow auctions, the edges are owned by the agents, and the auctioneer wants to purchase k edge-disjoint s-t paths, for given s and t. In the same setting, for cut auctions, the auctioneer wants to purchase an s-t cut. Only the agents know their costs, and the auctioneer needs to select a feasible set and payments based on bids made by the agents. We present constant-competitive truthful mechanisms for all three set systems. That is, the maximum overpayment of the mechanism is within a constant factor of the maximum overpayment of any truthful mechanism, for every set system in the class. The mechanism for Vertex Cover is based on scaling each bid by a multiplier derived from the dominant eigenvector of a certain matrix. The mechanism for k-flows prunes the graph to be minimally (k + 1)-connected, and then applies the Vertex Cover mechanism. Similarly, the mechanism for cuts contracts the graph until all s-t paths have length exactly 2, and then applies the Vertex Cover mechanism.
Submodular function maximization is a central problem in combinatorial optimization, generalizing many important problems including Max Cut in directed/undirected graphs and in hypergraphs, certain constraint satisfaction problems, maximum entropy sampling, and maximum facility location problems. Unlike submodular minimization, submodular maximization is NP-hard. In this paper, we give the first constant-factor approximation algorithm for maximizing any non-negative submodular function subject to multiple matroid or knapsack constraints. We emphasize that our results are for non-monotone submodular functions. In particular, for any constant k, we present a 1/(k+2+1/k+ε)-approximation for the submodular maximization problem under k matroid constraints, and a (1/5 − ε)-approximation algorithm for this problem subject to k knapsack constraints (ε > 0 is any constant). We improve the approximation guarantee of our algorithm to 1/(k+1+1/(k−1)+ε) for k ≥ 2 partition matroid constraints. This idea also gives a 1/(k+ε)-approximation for maximizing a monotone submodular function subject to k ≥ 2 partition matroids, which improves over the previously best known guarantee of 1/(k+1).
We provide a reduction from revenue maximization to welfare maximization in multidimensional Bayesian auctions with arbitrary - possibly combinatorial - feasibility constraints and independent bidders with arbitrary - possibly combinatorial - demand constraints, appropriately extending Myerson's single-dimensional result [21] to this setting. We also show that every feasible Bayesian auction - including in particular the revenue-optimal one - can be implemented as a distribution over virtual VCG allocation rules. A virtual VCG allocation rule has the following simple form: every bidder's type ti is transformed into a virtual type fi(ti), via a bidder-specific function. Then, the allocation maximizing virtual welfare is chosen. Using this characterization, we show how to find and run the revenue-optimal auction given only black-box access to an implementation of the VCG allocation rule. We generalize this result to arbitrarily correlated bidders, introducing the notion of a second-order VCG allocation rule. Our results are computationally efficient for all multidimensional settings where the bidders are additive, or can be efficiently mapped to be additive, albeit the feasibility and demand constraints may still remain arbitrary combinatorial. In this case, our mechanisms run in time polynomial in the number of items and the total number of bidder types, but not type profiles. This is polynomial in the number of items, the number of bidders, and the cardinality of the support of each bidder's value distribution. For generic correlated distributions, this is the natural description complexity of the problem. The runtime can be further improved to polynomial in only the number of items and the number of bidders in item-symmetric settings by making use of techniques from [15].
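A small sketch of a virtual VCG allocation rule of the form quoted above, specialized to additive bidders and unit supply of each item so that maximizing virtual welfare decomposes item by item; the transformation functions fi below are placeholders, since the actual transformations are the output of the paper's reduction and are not reconstructed here.

```python
def virtual_vcg_allocation(types, transforms):
    """Allocate each item to maximize virtual welfare.

    types      : list (one per bidder) of dicts item -> value
    transforms : list of bidder-specific functions mapping a value to a virtual value
    For additive bidders and unit supply of each item, maximizing virtual welfare
    decomposes item by item.
    """
    items = set().union(*types) if types else set()
    allocation = {}
    for item in sorted(items):
        virtual = [(transforms[i](t.get(item, 0.0)), i) for i, t in enumerate(types)]
        best_value, best_bidder = max(virtual)
        if best_value > 0:                 # leave the item unallocated otherwise
            allocation[item] = best_bidder
    return allocation

# Placeholder transforms standing in for bidder-specific functions fi.
transforms = [lambda v: v - 1.0, lambda v: 0.8 * v - 0.5]
types = [{"x": 2.0, "y": 0.5}, {"x": 1.5, "y": 2.0}]
print(virtual_vcg_allocation(types, transforms))  # {'x': 0, 'y': 1}
```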