
All published works (9)

We study dynamic matching in exchange markets with easy- and hard-to-match agents. A greedy policy, which attempts to match agents upon arrival, ignores the positive externality that waiting agents provide by facilitating future matchings. We prove that the trade-off between a "thicker" market and faster matching vanishes in large markets; the greedy policy leads to shorter waiting times and more agents matched than any other policy. We empirically confirm these findings in data from the National Kidney Registry. Greedy matching achieves as many transplants as commonly used policies (1.8% more than monthly batching) and shorter waiting times (16 days faster than monthly batching).
We study matching policies in a dynamic exchange market with random compatibility, in which some agents are easier to match than others. In steady state this asymmetry creates an endogenous imbalance: hard-to-match agents wait for partners, while easy-to-match agents can match almost immediately upon arrival and leave the market quickly. A greedy policy, which attempts to match agents upon arrival, does not account for the positive externality waiting agents generate as they make it easier to match agents that arrive in the future, and may leave more agents unmatched than other policies. While this trade-off between a "thicker" market and quick matching is present in small markets, we show that it vanishes in large markets, where the greedy policy dominates any other dynamic matching policy: as the arrival rate increases, the greedy policy matches a (weakly) higher fraction of agents and leads to (weakly) lower average waiting time than any other policy. We show that in a large market greedy matching strictly outperforms batching policies (e.g., weekly matching) and a patient policy, which attempts to match agents only as they are about to depart the market. We test our large-market predictions with kidney exchange data from the National Kidney Registry (NKR). Numerical simulations show that, in line with our predictions, the greedy policy matches patient-donor pairs significantly faster (roughly 20-30 days) than other commonly used policies, while matching at most 1% fewer pairs than the patient policy.
We study dynamic matching in exchange markets with easy- and hard-to-match agents. A greedy policy, which attempts to match agents upon arrival, ignores the positive externality that waiting agents generate by facilitating future matchings. We prove that this trade-off between a "thicker" market and faster matching vanishes in large markets; a greedy policy leads to shorter waiting times and more agents matched than any other policy. We empirically confirm these findings in data from the National Kidney Registry. Greedy matching achieves as many transplants as commonly used policies (1.6% more than monthly batching) and shorter patient waiting times.
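As a concrete illustration of the greedy-versus-batching comparison in the abstracts above, here is a toy discrete-event simulation. It is a minimal sketch under simplified assumptions (one arrival per period, no departures, pairwise compatibility resampled on each attempt); all function and parameter names are illustrative, not the paper's model:

```python
import random

def simulate(policy, horizon=2000, p_easy=0.5, q=0.1, batch=30, seed=1):
    """Toy dynamic exchange market with easy- and hard-to-match agents.

    One agent arrives per period. An 'easy' agent is compatible with
    everyone; two 'hard' agents are compatible with probability q
    (resampled per attempt, for simplicity). policy is 'greedy'
    (try to match upon arrival) or 'batch' (clear every `batch` periods).
    Returns (fraction of agents matched, average waiting time).
    """
    rng = random.Random(seed)
    pool, waits = [], []  # pool holds waiting agents as (arrival_time, is_easy)

    def compatible(a, b):
        return a[1] or b[1] or rng.random() < q

    def clear(t):
        # repeatedly remove any compatible pair currently in the pool
        again = True
        while again:
            again = False
            for i in range(len(pool)):
                for j in range(i + 1, len(pool)):
                    if compatible(pool[i], pool[j]):
                        b = pool.pop(j)  # pop the larger index first
                        a = pool.pop(i)
                        waits.extend([t - a[0], t - b[0]])
                        again = True
                        break
                if again:
                    break

    for t in range(horizon):
        pool.append((t, rng.random() < p_easy))
        if policy == "greedy" or (policy == "batch" and t % batch == batch - 1):
            clear(t)

    return len(waits) / horizon, sum(waits) / max(len(waits), 1)
```

In line with the abstracts' prediction, the greedy run should show a much shorter average wait than the batched run at a comparable match rate, though the exact numbers depend entirely on the assumed parameters.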
We study the problem of computing maximin share guarantees, a recently introduced fairness notion. Given a set of $n$ agents and a set of goods, the maximin share of a single agent is the best that she can guarantee to herself if she were allowed to partition the goods in any way she prefers into $n$ bundles and then receive her least desirable bundle. The objective in our problem is to find a partition so that each agent is guaranteed her maximin share. In settings with indivisible goods, such allocations are not guaranteed to exist, so we resort to approximation algorithms. Our main result is a $2/3$-approximation that runs in polynomial time for any number of agents. This improves upon the algorithm of Procaccia and Wang, which also produces a $2/3$-approximation but runs in polynomial time only for a constant number of agents. To achieve this, we redesign certain parts of their algorithm. Furthermore, motivated by the apparent difficulty, both theoretical and experimental, of finding lower bounds on the existence of approximate solutions, we undertake a probabilistic analysis. We prove that in randomly generated instances, a maximin share allocation exists with high probability. This can be seen as a justification of the experimental evidence reported in relevant works. Finally, we provide further positive results for two special cases that arise from previous works. The first is the intriguing case of $3$ agents, for which it is already known that exact maximin share allocations do not always exist (contrary to the case of $2$ agents). We provide a $7/8$-approximation algorithm, improving the previously known result of $3/4$. The second case is when all item values belong to $\{0, 1, 2\}$, extending the $\{0, 1\}$ setting studied by Bouveret and Lemaître. We obtain an exact algorithm for any number of agents in this case.
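The maximin share defined in this abstract can be computed directly on small instances by enumerating all ways to assign items to $n$ bundles. A brute-force sketch (exponential in the number of items, for illustration only; the paper's algorithms are polynomial-time approximations, not this):

```python
from itertools import product

def maximin_share(values, n):
    """Maximin share of one agent with additive valuations `values`
    over the items: the best minimum-bundle value achievable over all
    partitions of the items into n bundles."""
    best = 0
    for assignment in product(range(n), repeat=len(values)):
        bundles = [0] * n
        for item, b in enumerate(assignment):
            bundles[b] += values[item]
        best = max(best, min(bundles))
    return best
```

For example, with item values [1, 2, 3, 4] and n = 2, the agent can split the goods into {1, 4} and {2, 3}, so her maximin share is 5.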
In this paper we consider a mechanism design problem in the context of large-scale crowdsourcing markets such as Amazon's Mechanical Turk, ClickWorker, and CrowdFlower. In these markets, there is a requester who wants to hire workers to accomplish some tasks. Each worker is assumed to give some utility to the requester on getting hired. Moreover, each worker has a minimum cost that he wants to be paid for getting hired. This minimum cost is assumed to be private information of the workers. The question then is: if the requester has a limited budget, how should one design a direct revelation mechanism that picks the right set of workers to hire in order to maximize the requester's utility? We note that although previous work (Singer (2010), Chen et al. (2011)) has studied this problem, a crucial difference in which we deviate from earlier work is the notion of large-scale markets that we introduce in our model. Without the large market assumption, it is known that no mechanism can achieve a competitive ratio better than 0.414 and 0.5 for deterministic and randomized mechanisms respectively (while the best known deterministic and randomized mechanisms achieve approximation ratios of 0.292 and 0.33 respectively). In this paper, we design a budget-feasible mechanism for large markets that achieves a competitive ratio of 1 - 1/e ≃ 0.63. Our mechanism can be seen as a generalization of an alternate way to look at the proportional share mechanism, which is used in all the previous works so far on this problem. Interestingly, we can also show that our mechanism is optimal by showing that no truthful mechanism can achieve a factor better than 1 - 1/e, thus fully resolving this setting.
Finally we consider the more general case of submodular utility functions and give new and improved mechanisms for the case when the market is large.
In this paper we consider a mechanism design problem in the context of large-scale crowdsourcing markets such as Amazon's Mechanical Turk, ClickWorker, and CrowdFlower. In these markets, there is a requester who wants to hire workers to accomplish some tasks. Each worker is assumed to give some utility to the requester. Moreover, each worker has a minimum cost that he wants to be paid for getting hired. This minimum cost is assumed to be private information of the workers. The question then is: if the requester has a limited budget, how should one design a direct revelation mechanism that picks the right set of workers to hire in order to maximize the requester's utility? We note that although previous work has studied this problem, a crucial difference in which we deviate from earlier work is the notion of large-scale markets that we introduce in our model. Without the large market assumption, it is known that no mechanism can achieve an approximation factor better than 0.414 and 0.5 for deterministic and randomized mechanisms respectively (while the best known deterministic and randomized mechanisms achieve approximation ratios of 0.292 and 0.33 respectively). In this paper, we design a budget-feasible mechanism for large markets that achieves an approximation factor of 1 - 1/e (i.e., almost 0.63). Our mechanism can be seen as a generalization of an alternate way to look at the proportional share mechanism, which is used in all the previous works so far on this problem. Interestingly, we also show that our mechanism is optimal by showing that no truthful mechanism can achieve a factor better than 1 - 1/e, thus fully resolving this setting.
Finally we consider the more general case of submodular utility functions and give new and improved mechanisms for the case when the markets are large.
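For intuition about the proportional share idea referenced above, here is a sketch of its simplest form for unit-utility workers: hire the largest k such that the k-th cheapest bid is at most budget/k, and pay each hired worker budget/k. This is an illustrative rendering of the rule from the earlier budget-feasible-mechanism literature, not the paper's large-market mechanism, and it glosses over the threshold-payment details needed for exact truthfulness:

```python
def proportional_share(bids, budget):
    """Proportional-share allocation sketch for unit-utility workers.

    bids[i] is worker i's reported cost. Hire the largest k such that
    the k-th lowest bid is at most budget / k; pay each hired worker
    budget / k (so the total payment never exceeds the budget).
    Returns (list of hired worker indices, per-worker payment).
    """
    order = sorted(range(len(bids)), key=lambda i: bids[i])
    k = 0
    while k < len(order) and bids[order[k]] <= budget / (k + 1):
        k += 1
    if k == 0:
        return [], 0.0
    return order[:k], budget / k
```

With bids [1, 2, 3, 10] and a budget of 6, the rule hires the two cheapest workers and pays each 3, exhausting the budget exactly.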

Commonly Cited References

From social networks such as Facebook, the World Wide Web and the Internet, to the complex interactions between proteins in the cells of our bodies, we constantly face the challenge of understanding the structure and development of networks. The theory of random graphs provides a framework for this understanding, and in this book the authors give a gentle introduction to the basic tools for understanding and applying the theory. Part I includes sufficient material, including exercises, for a one-semester course at the advanced undergraduate or beginning graduate level. The reader is then well prepared for the more advanced topics in Parts II and III. A final part provides a quick introduction to the background material needed. All those interested in discrete mathematics, computer science or applied probability and their applications will find this an ideal introduction to the subject.
We study dynamic matching in an infinite-horizon stochastic networked market, in which some agents are a priori more difficult to match than others. Agents have compatibility-based preferences and can match either bilaterally or indirectly through chains. We study the effect matching technologies and matching policies have on efficiency in markets with different compositions of hard- and easy-to-match agents. First, we analyze myopic matching policies and identify a strong connection between market thickness and the efficiency driven by the matching technology. We show that when "hard-to-match" agents join the market more frequently than "easy-to-match" ones, moving from bilateral matchings to chains significantly increases efficiency. Otherwise, the difference between matching bilaterally or through a chain is negligible. Second, we show that the lack of thickness cannot be compensated for by non-myopic matching policies, implying that the only way to thicken the market fruitfully is by attracting more agents.
We consider the well-studied cake cutting problem in which the goal is to identify an envy-free allocation based on a minimal number of queries from the agents. The problem has attracted considerable attention within various branches of computer science, mathematics, and economics. Although the elegant Selfridge-Conway envy-free protocol for three agents has been known since 1960, it has been a major open problem to obtain a bounded envy-free protocol for more than three agents. The problem has been termed the central open problem in cake cutting. We solve this problem by proposing a discrete and bounded envy-free protocol for four agents.
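For context, the two-agent case that cake-cutting protocols like the one above generalize is solved by classic cut-and-choose. A discretized sketch (the cake is a list of slices with additive valuations; envy-freeness is exact only in the continuous model, and all names here are illustrative):

```python
def cut_and_choose(v1, v2):
    """Cut-and-choose over discrete slices: agent 1 places the
    contiguous cut that splits the cake most evenly by her own
    valuation; agent 2 takes the piece he prefers; agent 1 gets the
    other. Returns (agent 1's slice indices, agent 2's slice indices).
    """
    n, total = len(v1), sum(v1)
    # agent 1's most even contiguous cut (exactly even only if one exists)
    cut = min(range(n + 1), key=lambda k: abs(2 * sum(v1[:k]) - total))
    left, right = list(range(cut)), list(range(cut, n))
    # agent 2 chooses his preferred piece
    if sum(v2[i] for i in left) >= sum(v2[i] for i in right):
        return right, left
    return left, right
```

Agent 2 never envies because he picks first, and agent 1 values both pieces equally whenever an exactly even cut exists; that is what makes the continuous protocol envy-free.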
We study a novel class of mechanism design problems in which the outcomes are constrained by the payments. This basic class of mechanism design problems captures many common economic situations, and yet it has not been studied, to our knowledge, in the past. We focus on the case of procurement auctions in which sellers have private costs, and the auctioneer aims to maximize a utility function on subsets of items, under the constraint that the sum of the payments provided by the mechanism does not exceed a given budget. Standard mechanism design ideas such as the VCG mechanism and its variants are not applicable here. We show that, for general functions, the budget constraint can render mechanisms arbitrarily bad in terms of the utility of the buyer. However, our main result shows that for the important class of submodular functions, a bounded approximation ratio is achievable. Better approximation results are obtained for subclasses of the submodular functions. We explore the space of budget feasible mechanisms in other domains and give a characterization under more restricted conditions.
"Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues." Technometrics, 42(4), pp. 438–439
Budget feasible mechanism design studies procurement combinatorial auctions in which the sellers have private costs to produce items, and the buyer (auctioneer) aims to maximize a social valuation function on subsets of items, under a budget constraint on the total payment. One of the most important questions in the field is: which valuation domains admit truthful budget feasible mechanisms with 'small' approximations (compared to the social optimum)? Singer [35] showed that additive and submodular functions have a constant approximation mechanism. Recently, Dobzinski, Papadimitriou, and Singer [20] gave an O(log^2 n) approximation mechanism for subadditive functions; further, they remarked that a fundamental question is whether, regardless of computational constraints, a constant-factor budget feasible mechanism exists for subadditive functions. In this paper, we address this question from two viewpoints: prior-free worst case analysis and Bayesian analysis, which are the two standard approaches from computer science and economics, respectively. - For the prior-free framework, we use a linear program (LP) that describes the fractional cover of the valuation function; the LP is also connected to the concept of the approximate core in cooperative game theory. We provide a mechanism for subadditive functions whose approximation is O(I), via the worst case integrality gap I of this LP. This implies an O(log n)-approximation for subadditive valuations, and an O(1)-approximation for XOS valuations, as well as for valuations having a constant integrality gap. XOS valuations are an important class of functions and lie between the submodular and the subadditive classes of valuations. We further give another polynomial time O(log n / log log n) sub-logarithmic approximation mechanism for subadditive functions. Both of our mechanisms improve on the best known approximation ratio O(log^2 n). - For the Bayesian framework, we provide a constant approximation mechanism for all subadditive functions, using the above prior-free mechanism for XOS valuations as a subroutine. Our mechanism allows correlations in the distribution of private information and is universally truthful.
We introduce a simple model of dynamic matching in networked markets, where agents arrive and depart stochastically, and the composition of the trade network depends endogenously on the matching algorithm. Varying the timing properties of matching algorithms can substantially affect their performance, and this depends crucially on the information structure. More precisely, if the planner can identify agents who are about to depart, then waiting to thicken the market substantially reduces the fraction of unmatched agents. If the planner cannot identify such agents, then matching agents greedily is close-to-optimal. We specify conditions under which local algorithms that choose the right time to match agents, but do not exploit the global network structure, are close-to-optimal. Finally, we consider a setting where agents have private information about their departure times, and design a continuous-time dynamic mechanism to elicit this information.
We study a fair division problem with indivisible items, namely the computation of maximin share allocations. Given a set of $n$ players, the maximin share of a single player is the best she can guarantee to herself, if she were to partition the items in any way she prefers into $n$ bundles, and then receive her least desirable bundle. The objective then is to find an allocation so that each player is guaranteed her maximin share. Previous works have studied this problem mostly algorithmically, providing constant factor approximation algorithms. In this work we embark on a mechanism design approach and investigate the existence of truthful mechanisms. We propose three models regarding the information that the mechanism attempts to elicit from the players, based on the cardinal and ordinal representation of preferences. We establish positive and negative (impossibility) results for each model and highlight the limitations imposed by truthfulness on the approximability of the problem. Finally, we pay particular attention to the case of two players, which already leads to challenging questions.
We consider the problem of dividing indivisible goods fairly among n agents who have additive and submodular valuations for the goods. Our fairness guarantees are in terms of the maximin share, which is defined to be the maximum value that an agent can ensure for herself, if she were to partition the goods into n bundles, and then receive a minimum valued bundle. Since maximin fair allocations (i.e., allocations in which each agent gets at least her maximin share) do not always exist, prior work has focussed on approximation results that aim to find allocations in which the value of the bundle allocated to each agent is (multiplicatively) as close to her maximin share as possible. In particular, Procaccia and Wang (2014) along with Amanatidis et al. (2015) have shown that under additive valuations a 2/3-approximate maximin fair allocation always exists and can be found in polynomial time. We complement these results by developing a simple and efficient algorithm that achieves the same approximation guarantee.
We study the mechanism design problem of allocating a set of indivisible items without monetary transfers. Despite the vast literature on this very standard model, it still remains unclear what truthful mechanisms look like. We focus on the case of two players with additive valuation functions, and our purpose is twofold. First, our main result provides a complete characterization of truthful mechanisms that allocate all the items to the players. Our characterization reveals an interesting structure underlying all truthful mechanisms, showing that they can be decomposed into two components: a selection part, where players pick their best subset among prespecified choices determined by the mechanism, and an exchange part, where players are offered the chance to exchange certain subsets if it is favorable to do so. In the remainder of the paper, we apply our main result and derive several consequences on the design of mechanisms with fairness guarantees. We consider various notions of fairness (indicatively, maximin share guarantees and envy-freeness up to one item) and provide tight bounds for their approximability. Our work settles some of the open problems in this agenda, and we conclude by discussing possible extensions to more players.
The need for kidney exchange arises when a healthy person wishes to donate a kidney but is incompatible with her intended recipient. Two main factors determine compatibility of a donor with a patient: blood-type compatibility and tissue-type compatibility. Two or more incompatible pairs can form a cyclic exchange so that each patient can receive a kidney from a compatible donor. In addition, an exchange can be initiated by a non-directed donor (an altruistic donor who does not designate a particular intended patient), and in this case, a chain of exchanges need not form a closed cycle.
Ning Chen, Nick Gravin, and Pinyan Lu. "On the Approximability of Budget Feasible Mechanisms." Proceedings of the 2011 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 685-699. DOI: https://doi.org/10.1137/1.9781611973082.54
Budget feasible mechanisms, recently initiated by Singer (FOCS 2010), extend algorithmic mechanism design problems to a realistic setting with a budget constraint. We consider the problem of designing truthful budget feasible mechanisms for monotone submodular functions: we give a randomized mechanism with an approximation ratio of 7.91 (improving on the previous best-known result of 233.83), and a deterministic mechanism with an approximation ratio of 8.34. We also study the knapsack problem, which is a special submodular function, and give a 2 + √2 approximation deterministic mechanism (improving on the previous best-known result of 5), and a 3 approximation randomized mechanism. We provide similar results for an extended knapsack problem with heterogeneous items, where items are divided into groups and one can pick at most one item from each group. Finally we show a lower bound of 1 + √2 on the approximation ratio of deterministic mechanisms and 2 for randomized mechanisms for knapsack, as well as for general monotone submodular functions. Our lower bounds are unconditional, and do not rely on any computational or complexity assumptions.
We study dynamic matching in an infinite-horizon stochastic market. While all agents are potentially compatible with each other, some are hard-to-match and others are easy-to-match. Agents prefer to be matched as soon as possible and matches are formed either bilaterally or indirectly through chains. We adopt an asymptotic approach and compute tight bounds on the limit of waiting time of agents under myopic policies that differ in matching technology and prioritization. We find that the market composition is a key factor in the desired matching technology and prioritization level. When hard-to-match agents arrive less frequently than easy-to-match ones, (i) bilateral matching is almost as efficient as chains (waiting times scale similarly under both, though chains always outperform bilateral matching by a constant factor), and (ii) assigning priorities to hard-to-match agents improves their waiting times. When hard-to-match agents arrive more frequently, chains are much more efficient than bilateral matching and prioritization has no impact. We further conduct comparative statics on arrival rates. Somewhat surprisingly, we find that in a heterogeneous market and under bilateral matching, increasing the arrival rate has a nonmonotone effect on waiting times, due to the fact that, under some market compositions, there is an adverse effect of competition. Our comparative statics shed light on the impact of merging markets and attracting altruistic agents (who initiate chains) or easy-to-match agents. This work uncovers fundamental differences between heterogeneous and homogeneous dynamic markets, and potentially helps policy makers generate insights on the operations of matching markets such as kidney exchange programs.
Community sensing, fusing information from populations of privately-held sensors, presents a great opportunity to create efficient and cost-effective sensing applications. Yet, reasonable privacy concerns often limit the access to such data streams. How should systems valuate and negotiate access to private information, for example in return for monetary incentives? How should they optimally choose the participants from a large population of strategic users with privacy concerns, and compensate them for information shared? In this paper, we address these questions and present a novel mechanism, SeqTGreedy, for budgeted recruitment of participants in community sensing. We first show that privacy tradeoffs in community sensing can be cast as an adaptive submodular optimization problem. We then design a budget feasible, incentive compatible (truthful) mechanism for adaptive submodular maximization, which achieves near-optimal utility for a large class of sensing applications. This mechanism is general, and of independent interest. We demonstrate the effectiveness of our approach in a case study of air quality monitoring, using data collected from the Mechanical Turk platform. Compared to the state of the art, our approach achieves up to 30% reduction in cost in order to achieve a desired level of utility.
We examine two-sided markets where players arrive stochastically over time and are drawn from a continuum of types. The cost of matching a client and provider varies, so a social planner is faced with two contending objectives: a) to reduce players' waiting time before getting matched; and b) to form efficient pairs in order to reduce matching costs. We show that such markets are characterized by a quick or cheap dilemma: under a large class of distributional assumptions, there is no 'free lunch', i.e., there exists no clearing schedule that is simultaneously optimal along both objectives. We further identify a unique breaking point signifying a stark reduction in matching cost contrasted by an increase in waiting time. Generalizing this model, we identify two regimes: one, where no free lunch exists; the other, where a window of opportunity opens to achieve a free lunch. Remarkably, greedy scheduling is never optimal in this setting.
We consider a matching market where buyers and sellers arrive according to independent Poisson processes at the same rate and independently abandon the market if not matched after an exponential … We consider a matching market where buyers and sellers arrive according to independent Poisson processes at the same rate and independently abandon the market if not matched after an exponential amount of time with the same mean. In this centralized market, the utility for the system manager from matching any buyer and any seller is a general random variable. We consider a sequence of systems indexed by $n$ where the arrivals in the $n^{\mathrm{th}}$ system are sped up by a factor of $n$. We analyze two families of one-parameter policies: the population threshold policy immediately matches an arriving agent to its best available mate only if the number of mates in the system is above a threshold, and the utility threshold policy matches an arriving agent to its best available mate only if the corresponding utility is above a threshold. Using a fluid analysis of the two-dimensional Markov process of buyers and sellers, we show that when the matching utility distribution is light-tailed, the population threshold policy with threshold $\frac{n}{\ln n}$ is asymptotically optimal among all policies that make matches only at agent arrival epochs. In the heavy-tailed case, we characterize the optimal threshold level for both policies. We also study the utility threshold policy in an unbalanced matching market with heavy-tailed matching utilities and find that the buyers and sellers have the same asymptotically optimal utility threshold. We derive optimal thresholds when the matching utility distribution is exponential, uniform, Pareto, and correlated Pareto. 
We find that as the right tail of the matching utility distribution gets heavier, the threshold level of each policy (and hence market thickness) increases, as does the magnitude by which the utility threshold policy outperforms the population threshold policy.
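The two one-parameter policies can be sketched in a few lines. The simulation below is a simplified illustration with hypothetical parameter values: abandonment is omitted and the pairwise utilities are redrawn at each arrival for brevity, so it reproduces only the shape of the policies, not the paper's asymptotic analysis.

```python
import random

# Simplified sketch of the population threshold and utility threshold
# policies (our own toy parameters; abandonment is omitted and pairwise
# utilities are redrawn at each arrival for brevity).

def run(policy, threshold, n_arrivals=10000, seed=1):
    rng = random.Random(seed)
    waiting = {"buyer": 0, "seller": 0}   # number of waiting agents per side
    total_utility = matched = 0
    for _ in range(n_arrivals):
        side = "buyer" if rng.random() < 0.5 else "seller"
        other = "seller" if side == "buyer" else "buyer"
        k = waiting[other]
        if k > 0:
            # utility of the best available mate: max of k Exp(1) draws
            best = max(rng.expovariate(1.0) for _ in range(k))
            if (policy == "population" and k > threshold) or \
               (policy == "utility" and best > threshold):
                total_utility += best
                matched += 1
                waiting[other] -= 1
                continue
        waiting[side] += 1
    return matched, total_utility / max(matched, 1)

for policy, threshold in (("population", 5), ("utility", 2.0)):
    m, u = run(policy, threshold)
    print(f"{policy:10s} threshold={threshold}: matched={m}, avg utility={u:.2f}")
```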
Budget feasible mechanism design studies procurement combinatorial auctions where the sellers have private costs to produce items, and the buyer (auctioneer) aims to maximize a social valuation function on subsets of items under a budget constraint on the total payment. One of the most important questions in the field is: "which valuation domains admit truthful budget feasible mechanisms with 'small' approximations (compared to the social optimum)?" Singer showed that additive and submodular functions have such constant approximations. Recently, Dobzinski, Papadimitriou, and Singer gave an O(log^2 n)-approximation mechanism for subadditive functions; they also remarked: "A fundamental question is whether, regardless of computational constraints, a constant-factor budget feasible mechanism exists for subadditive functions." We address this question from two viewpoints: prior-free worst-case analysis and Bayesian analysis. For the prior-free framework, we use an LP that describes the fractional cover of the valuation function; it is also connected to the concept of the approximate core in cooperative game theory. We provide an O(I)-approximation mechanism for subadditive functions, where I is the worst-case integrality gap of this LP. This implies an O(log n)-approximation for subadditive valuations, and an O(1)-approximation for XOS valuations and for valuations with constant I. XOS valuations are an important class of functions that lie between the submodular and subadditive classes. We also give a polynomial-time, sub-logarithmic O(log n / log log n)-approximation mechanism for subadditive valuations.
For the Bayesian framework, we provide a constant approximation mechanism for all subadditive functions, using the above prior-free mechanism for XOS valuations as a subroutine. Our mechanism allows correlations in the distribution of private information and is universally truthful.
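For concreteness, a standard building block in this literature for additive valuations is a greedy "proportional share" allocation rule. The sketch below shows only the allocation side, with our own example numbers; the threshold payments that make such a rule truthful and budget feasible are omitted.

```python
# Sketch of a greedy "proportional share" allocation rule for a buyer with
# an additive valuation (a standard building block for budget feasible
# mechanisms; the threshold payments that make it truthful are omitted, and
# the example numbers are ours).

def proportional_share(items, budget):
    """items: list of (value, cost) pairs; returns the winning set."""
    # Consider sellers in decreasing value-per-cost order and keep adding a
    # seller while their cost is at most their proportional share of the
    # budget: budget * value / (value of the winning set so far + value).
    items = sorted(items, key=lambda vc: vc[0] / vc[1], reverse=True)
    chosen, total_value = [], 0.0
    for v, c in items:
        if c <= budget * v / (total_value + v):
            chosen.append((v, c))
            total_value += v
        else:
            break
    return chosen

print(proportional_share([(10, 1), (8, 2), (6, 3), (1, 5)], budget=5))
# [(10, 1), (8, 2)]: the third seller's cost (3) exceeds their share (1.25)
```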