
All published works (21)

We provide the first analysis of clock auctions through the learning-augmented framework. Deferred-acceptance clock auctions are a compelling class of mechanisms satisfying a unique list of highly practical properties, including obvious strategy-proofness, transparency, and unconditional winner privacy, making them particularly well-suited for real-world applications. However, early work that evaluated their performance from a worst-case analysis standpoint concluded that no deterministic clock auction can achieve much better than an $O(\log n)$ approximation of the optimal social welfare (where $n$ is the number of bidders participating in the auction), even in seemingly very simple settings. To overcome this overly pessimistic impossibility result, which heavily depends on the assumption that the designer has no information regarding the preferences of the participating bidders, we leverage the learning-augmented framework. This framework assumes that the designer is provided with some advice regarding what the optimal solution may be. This advice may be the product of machine-learning algorithms applied to historical data, so it can provide very useful guidance, but it can also be highly unreliable. Our main results are learning-augmented clock auctions that use this advice to achieve much stronger performance guarantees whenever the advice is accurate (known as consistency), while simultaneously maintaining worst-case guarantees even if this advice is arbitrarily inaccurate (known as robustness). Specifically, for the standard notion of consistency, we provide a clock auction that achieves the best of both worlds: $(1+\epsilon)$-consistency for any constant $\epsilon > 0$ and $O(\log n)$ robustness. We then also consider a much stronger notion of consistency and provide an auction that achieves the optimal trade-off between this notion of consistency and robustness.
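As a definition-level sketch of the clock-auction format itself (not the learning-augmented construction above), the following Python snippet runs a generic descending clock auction for procurement: the price clock ticks down, and a bidder exits irrevocably as soon as the offered price falls below her private cost. The starting price, decrement, and stopping rule are illustrative assumptions.

# Minimal descending clock auction sketch (illustration only).
# costs[i] is bidder i's private cost; a bidder stays active while
# the clock price still covers her cost, and exits irrevocably otherwise.
def descending_clock_auction(costs, start_price, decrement, target_winners):
    active = set(range(len(costs)))           # all bidders start active
    price = start_price
    while len(active) > target_winners and price > 0:
        price -= decrement                    # the clock ticks down
        active = {i for i in active if costs[i] <= price}
    return active, price                      # winners and their uniform price

winners, price = descending_clock_auction([3.0, 7.0, 5.0], 10.0, 0.5, 1)
print(winners, price)                         # bidder 0 wins at price 4.5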
We study fair resource allocation with strategic agents. It is well-known that, across multiple fundamental problems in this domain, truthfulness and fairness are incompatible. For example, when allocating indivisible goods, no truthful and deterministic mechanism can guarantee envy-freeness up to one item (EF1), even for two agents with additive valuations. Or, in cake-cutting, no truthful and deterministic mechanism always outputs a proportional allocation, even for two agents with piecewise constant valuations. Our work stems from the observation that, in the context of fair division, truthfulness is used as a synonym for Dominant Strategy Incentive Compatibility (DSIC), requiring that an agent prefers reporting the truth, no matter what other agents report. In this paper, we instead focus on Bayesian Incentive Compatible (BIC) mechanisms, requiring that agents are better off reporting the truth in expectation over other agents' reports. We prove that, when agents know a bit less about each other, a lot more is possible: using BIC mechanisms we can achieve fairness notions that are unattainable by DSIC mechanisms in both the fundamental problems of allocation of indivisible goods and cake-cutting. We prove that this is the case even for an arbitrary number of agents, as long as the agents' priors about each other's types satisfy a neutrality condition. Notably, for the case of indivisible goods, we significantly strengthen the state-of-the-art negative result for efficient DSIC mechanisms, while also highlighting the limitations of BIC mechanisms, by showing that a very general class of welfare objectives is incompatible with Bayesian Incentive Compatibility. Combined, these results give a near-complete picture of the power and limitations of BIC and DSIC mechanisms for the problem of allocating indivisible goods.
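Since EF1 is central to the discussion above, here is a hedged, definition-level Python check of EF1 for additive valuations; it is a sketch of the fairness notion itself, not of any mechanism from the paper, and the example instance is invented.

# EF1 check for additive valuations (definition-level sketch).
def is_ef1(values, bundles):
    # values[i][g]: agent i's additive value for item g
    # bundles[j]: set of item indices held by agent j
    n = len(bundles)
    for i in range(n):
        mine = sum(values[i][g] for g in bundles[i])
        for j in range(n):
            if i == j:
                continue
            others = [values[i][g] for g in bundles[j]]
            best_drop = max(others) if others else 0.0
            # EF1: i must not envy j once some single item leaves j's bundle
            if mine < sum(others) - best_drop:
                return False
    return True

print(is_ef1([[5, 1, 3], [2, 4, 6]], [{0, 2}, {1}]))  # True on this instance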
In this work, we introduce an alternative model for the design and analysis of strategyproof mechanisms that is motivated by the recent surge of work in “learning-augmented algorithms.” Aiming to complement the traditional worst-case analysis approach in computer science, this line of work has focused on the design and analysis of algorithms that are enhanced with machine-learned predictions. The algorithms can use the predictions as a guide to inform their decisions, aiming to achieve much stronger performance guarantees when these predictions are accurate (consistency), while also maintaining near-optimal worst-case guarantees, even if these predictions are inaccurate (robustness). We initiate the design and analysis of strategyproof mechanisms that are augmented with predictions regarding the private information of the participating agents. To exhibit the important benefits of this approach, we revisit the canonical problem of facility location with strategic agents in the two-dimensional Euclidean space. We study both the egalitarian and utilitarian social cost functions, and we propose new strategyproof mechanisms that leverage predictions to guarantee an optimal trade-off between consistency and robustness. Furthermore, we also prove parameterized approximation results as a function of the prediction error, showing that our mechanisms perform well, even when the predictions are not fully accurate. Funding: The work of E. Balkanski was supported in part by the National Science Foundation [Grants CCF-2210501 and IIS-2147361]. The work of V. Gkatzelis and X. Tan was supported in part by the National Science Foundation [Grant CCF-2210502] and [CAREER Award CCF-2047907].
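To make the idea of folding a prediction into a strategyproof facility-location rule concrete, here is a minimal sketch under stated assumptions: it computes a coordinate-wise median over the reports plus "phantom" copies of the predicted optimal location, with the number of phantom copies tuning how much the rule trusts the prediction. This only illustrates the flavor of prediction-augmented mechanisms; it is not the paper's optimal construction, and the claim of strategyproofness rests on the classical phantom-median results for coordinate-wise rules.

import statistics

def median_with_prediction(points, prediction, phantom_copies=1):
    # Coordinate-wise median over reports plus phantom copies of the prediction.
    # median_low keeps the facility at an actual report/phantom coordinate.
    xs = [p[0] for p in points] + [prediction[0]] * phantom_copies
    ys = [p[1] for p in points] + [prediction[1]] * phantom_copies
    return (statistics.median_low(xs), statistics.median_low(ys))

print(median_with_prediction([(0, 0), (4, 2), (10, 8)], prediction=(5, 5)))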
We study the problem of allocating indivisible items to budget-constrained agents, aiming to provide fairness and efficiency guarantees. Specifically, our goal is to ensure that the resulting allocation is envy-free up to any item (EFx) while minimizing the amount of inefficiency that this needs to introduce. We first show that there exist two-agent problem instances for which no EFx allocation is Pareto-efficient. We, therefore, turn to approximation and use the (Pareto-efficient) maximum Nash welfare allocation as a benchmark. For two-agent instances, we provide a procedure that always returns an EFx allocation while achieving the best possible approximation of the optimal Nash social welfare that EFx allocations can achieve. For the more complicated case of three-agent instances, we provide a procedure that guarantees EFx, while achieving a constant approximation of the optimal Nash social welfare for any number of items.
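The EFx-versus-Nash-welfare benchmark above can be made concrete on tiny instances by brute force: enumerate all allocations, keep the EFx ones, and compare their Nash social welfare to the unconstrained maximum. This is only an illustrative sketch for two agents with additive values and no budgets; the paper's procedures are far more refined.

from itertools import product

def nash_welfare(values, bundle0, bundle1):
    u0 = sum(values[0][g] for g in bundle0)
    u1 = sum(values[1][g] for g in bundle1)
    return u0 * u1

def is_efx(values, bundles):
    for i, j in [(0, 1), (1, 0)]:
        mine = sum(values[i][g] for g in bundles[i])
        others = [values[i][g] for g in bundles[j]]
        # EFx: envy must vanish after removing ANY positively-valued item
        for g_val in others:
            if g_val > 0 and mine < sum(others) - g_val:
                return False
    return True

values = [[4, 3, 2], [1, 5, 2]]               # made-up two-agent instance
best_nsw, best_efx_nsw = 0, 0
for assignment in product([0, 1], repeat=3):  # item g goes to agent assignment[g]
    bundles = [{g for g, a in enumerate(assignment) if a == i} for i in (0, 1)]
    nsw = nash_welfare(values, bundles[0], bundles[1])
    best_nsw = max(best_nsw, nsw)
    if is_efx(values, bundles):
        best_efx_nsw = max(best_efx_nsw, nsw)
print(best_efx_nsw, best_nsw)  # the gap is the price of EFx on this instance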
A surge of recent work has focused on analyzing the performance of algorithms guided by predictions, aiming to enhance their worst-case performance guarantees with improved guarantees when the predictions are accurate. This "learning-augmented" framework was recently also extended to mechanism design settings involving strategic agents, and we provide an overview of these results.
We study the problem of allocating indivisible items to budget-constrained agents, aiming to provide fairness and efficiency guarantees. Specifically, our goal is to ensure that the resulting allocation is envy-free up to any item (EFx) while minimizing the amount of inefficiency that this needs to introduce. We first show that there exist two-agent problem instances for which no EFx allocation is Pareto efficient. We, therefore, turn to approximation and use the Nash social welfare maximizing allocation as a benchmark. For two-agent instances, we provide a procedure that always returns an EFx allocation while achieving the best possible approximation of the optimal Nash social welfare that EFx allocations can achieve. For the more complicated case of three-agent instances, we provide a procedure that guarantees EFx, while achieving a constant approximation of the optimal Nash social welfare for any number of items.
We study fair resource allocation with strategic agents. It is well-known that, across multiple fundamental problems in this domain, truthfulness and fairness are incompatible. For example, when allocating indivisible goods, there is no truthful and deterministic mechanism that guarantees envy-freeness up to one item (EF1), even for two agents with additive valuations. Or, in cake-cutting, no truthful and deterministic mechanism always outputs a proportional allocation, even for two agents with piecewise-constant valuations. Our work stems from the observation that, in the context of fair division, truthfulness is used as a synonym for Dominant Strategy Incentive Compatibility (DSIC), requiring that an agent prefers reporting the truth, no matter what other agents report. In this paper, we instead focus on Bayesian Incentive Compatible (BIC) mechanisms, requiring that agents are better off reporting the truth in expectation over other agents' reports. We prove that, when agents know a bit less about each other, a lot more is possible: using BIC mechanisms we can overcome the aforementioned barriers that DSIC mechanisms face in both the fundamental problems of allocation of indivisible goods and cake-cutting. We prove that this is the case even for an arbitrary number of agents, as long as the agents' priors about each other's types satisfy a neutrality condition. En route to our results on BIC mechanisms, we also strengthen the state of the art in terms of negative results for DSIC mechanisms.
In the metric distortion problem there is a set of candidates and a set of voters, all residing in the same metric space. The objective is to choose a candidate with minimum social cost, defined as the total distance of the chosen candidate from all voters. The challenge is that the algorithm receives only ordinal input from each voter, in the form of a ranked list of candidates in non-decreasing order of their distances from her, whereas the objective function is cardinal. The distortion of an algorithm is its worst-case approximation factor with respect to the optimal social cost. A series of papers culminated in a 3-distortion algorithm, which is tight with respect to all deterministic algorithms. Aiming to overcome the limitations of worst-case analysis, we revisit the metric distortion problem through the learning-augmented framework, where the algorithm is provided with some prediction regarding the optimal candidate. The quality of this prediction is unknown, and the goal is to evaluate the performance of the algorithm under an accurate prediction (known as consistency), while simultaneously providing worst-case guarantees even for arbitrarily inaccurate predictions (known as robustness). For our main result, we characterize the robustness-consistency Pareto frontier for the metric distortion problem. We first identify an inevitable trade-off between robustness and consistency. We then devise a family of learning-augmented algorithms that achieves any desired robustness-consistency pair on this Pareto frontier. Furthermore, we provide a more refined analysis of the distortion bounds as a function of the prediction error (with consistency and robustness being two extremes). Finally, we also prove distortion bounds that integrate the notion of $\alpha$-decisiveness, which quantifies the extent to which a voter prefers her favorite candidate relative to the rest.
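As a worked illustration of the distortion measure defined above, the following snippet computes the distortion of a particular winner on one concrete instance; the line positions, candidate set, and chosen winner are all assumptions invented for this example.

# Distortion of a chosen candidate on a single instance: the ratio of the
# winner's social cost to the optimal social cost over all candidates.
def social_cost(candidate, voters):
    return sum(abs(candidate - v) for v in voters)

voters = [0.0, 0.1, 0.9]
candidates = [0.0, 1.0]
opt = min(social_cost(c, voters) for c in candidates)
chosen = 1.0  # a hypothetical winner picked from ordinal information alone
print(social_cost(chosen, voters) / opt)  # distortion = 2.0 on this instance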
Aiming to overcome some of the limitations of worst-case analysis, the recently proposed framework of "algorithms with predictions" allows algorithms to be augmented with a (possibly erroneous) machine-learned prediction that they can use as a guide. In this framework, the goal is to obtain improved guarantees when the prediction is correct, which is called \emph{consistency}, while simultaneously guaranteeing some worst-case bounds even when the prediction is arbitrarily wrong, which is called \emph{robustness}. The vast majority of the work on this framework has focused on a refined analysis of online algorithms augmented with predictions regarding the future input. A subsequent line of work has also successfully adapted this framework to mechanism design, where the prediction is regarding the private information of strategic agents. In this paper, we initiate the study of online mechanism design with predictions, which combines the challenges of online algorithms with predictions and mechanism design with predictions. We consider the well-studied problem of designing a revenue-maximizing auction to sell a single item to strategic bidders who arrive and depart over time, each with an unknown, private, value for the item. We study the learning-augmented version of this problem where the auction designer is given a prediction regarding the maximum value over all agents. Our main result is a strategyproof mechanism whose revenue guarantees are $\alpha$-consistent with respect to the highest value and $(1-\alpha^2)/4$-robust with respect to the second-highest value, for $\alpha \in [0,1]$. We show that this tradeoff is optimal within a broad and natural family of auctions, meaning that any $\alpha$-consistent mechanism in that family has robustness at most $(1-\alpha^2)/4$. Finally, we extend our mechanism to also achieve expected revenues proportional to the prediction quality.
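A hedged sketch of how a prediction of the maximum value can drive such an auction: post a take-it-or-leave-it price derived from the prediction and sell to the first arriving bidder who clears it. This only illustrates the consistency/robustness tension (a higher price extracts more when the prediction is right but risks not selling at all); it is not the paper's optimal mechanism, and alpha is an assumed trust parameter.

# Posted price from a prediction (illustration only; posted prices are
# strategyproof since no bidder can affect the price she faces).
def posted_price_with_prediction(arriving_values, prediction, alpha=0.8):
    price = alpha * prediction        # trust level alpha in [0, 1]
    for value in arriving_values:     # bidders arrive over time
        if value >= price:
            return price              # revenue if some bidder accepts
    return 0.0                        # prediction too optimistic: no sale

print(posted_price_with_prediction([3.0, 9.0, 6.0], prediction=10.0))  # 8.0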
In this work we introduce an alternative model for the design and analysis of strategyproof mechanisms that is motivated by the recent surge of work in "learning-augmented algorithms". Aiming to complement the traditional approach in computer science, which analyzes the performance of algorithms based on worst-case instances, this line of work has focused on the design and analysis of algorithms that are enhanced with machine-learned predictions regarding the optimal solution. The algorithms can use the predictions as a guide to inform their decisions, and the goal is to achieve much stronger performance guarantees when these predictions are accurate (consistency), while also maintaining near-optimal worst-case guarantees, even if these predictions are very inaccurate (robustness). So far, these results have been limited to algorithms, but in this work we argue that another fertile ground for this framework is in mechanism design.
A central goal in algorithmic game theory is to analyze the performance of decentralized multiagent systems, like communication and information networks. In the absence of a central planner who can enforce how these systems are utilized, the users can strategically interact with the system, aiming to maximize their own utility, possibly leading to very inefficient outcomes, and thus a high price of anarchy. To alleviate this issue, the system designer can use decentralized mechanisms that regulate the use of each resource (e.g., using local queuing protocols or scheduling mechanisms), but with only limited information regarding the state of the system. These information limitations have a severe impact on what such decentralized mechanisms can achieve, so most of the success stories in this literature have had to make restrictive assumptions (e.g., by either restricting the structure of the networks or the types of cost functions).
Deterministic Budget-Feasible Clock Auctions. Eric Balkanski, Pranav Garimidi, Vasilis Gkatzelis, Daniel Schoepflin, and Xizhi Tan. Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 2940-2963. DOI: https://doi.org/10.1137/1.9781611977073.114. We revisit the well-studied problem of budget-feasible procurement, where a buyer with a strict budget constraint seeks to acquire services from a group of strategic providers (the sellers). During the last decade, several strategyproof budget-feasible procurement auctions have been proposed, aiming to maximize the value of the buyer, while eliciting each seller's true cost for providing their service. These solutions predominantly take the form of randomized sealed-bid auctions: they ask the sellers to report their private costs and then use randomization to determine which subset of services will be procured and how much each of the chosen providers will be paid, ensuring that the total payment does not exceed the buyer's budget. Our main result in this paper is a novel method for designing budget-feasible auctions, leading to solutions that outperform the previously proposed auctions in multiple ways. First, our solutions take the form of descending clock auctions, and thus satisfy a list of very appealing properties, such as obvious strategyproofness, group strategyproofness, transparency, and unconditional winner privacy; this makes these auctions much more likely to be used in practice. Second, in contrast to previous results that heavily depend on randomization, our auctions are deterministic. As a result, we provide an affirmative answer to one of the main open questions in this literature, asking whether a deterministic strategyproof auction can achieve a constant approximation when the buyer's valuation function is submodular over the set of services. In addition to this, we also provide the first deterministic budget-feasible auction that matches the approximation bound of the best-known randomized auction for the class of subadditive valuations. Finally, using our method, we improve the best-known approximation factor for monotone submodular valuations, which has been the focus of most of the prior work.
In their seminal paper that initiated the field of algorithmic mechanism design, \citet{NR99} studied the problem of designing strategyproof mechanisms for scheduling jobs on unrelated machines aiming to minimize the makespan. They provided a strategyproof mechanism that achieves an $n$-approximation and they made the bold conjecture that this is the best approximation achievable by any deterministic strategyproof scheduling mechanism. After more than two decades and several efforts, $n$ remains the best known approximation and very recent work by \citet{CKK21} has been able to prove an $\Omega(\sqrt{n})$ approximation lower bound for all deterministic strategyproof mechanisms. This strong negative result, however, heavily depends on the fact that the performance of these mechanisms is evaluated using worst-case analysis. To overcome such overly pessimistic, and often uninformative, worst-case bounds, a surge of recent work has focused on the ``learning-augmented framework'', whose goal is to leverage machine-learned predictions to obtain improved approximations when these predictions are accurate (consistency), while also achieving near-optimal worst-case approximations even when the predictions are arbitrarily wrong (robustness). In this work, we study the classic strategic scheduling problem of~\citet{NR99} using the learning-augmented framework and give a deterministic polynomial-time strategyproof mechanism that is $6$-consistent and $2n$-robust. We thus achieve the ``best of both worlds'': an $O(1)$ consistency and an $O(n)$ robustness that asymptotically matches the best-known approximation. We then extend this result to provide more general worst-case approximation guarantees as a function of the prediction error. Finally, we complement our positive results by showing that any $1$-consistent deterministic strategyproof mechanism has unbounded robustness.
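For context, the classic $n$-approximation discussed above comes from the allocation rule that assigns every job to the machine reporting the smallest processing time for it (made truthful with VCG payments, which are omitted here). A minimal sketch of that allocation and the makespan it produces, on a made-up instance:

# Each job goes to its (reported) fastest machine; return the makespan.
def min_work_makespan(times):
    # times[i][j]: machine i's reported processing time for job j
    n, m = len(times), len(times[0])
    loads = [0.0] * n
    for j in range(m):
        i = min(range(n), key=lambda k: times[k][j])  # fastest machine for job j
        loads[i] += times[i][j]
    return max(loads)

# All jobs pile onto machine 0: makespan 3.0, while splitting jobs gives 2.0.
print(min_work_makespan([[1.0, 1.0, 1.0], [1.1, 1.1, 1.1]]))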
In this work we introduce an alternative model for the design and analysis of strategyproof mechanisms that is motivated by the recent surge of work in "learning-augmented algorithms". Aiming to complement the traditional approach in computer science, which analyzes the performance of algorithms based on worst-case instances, this line of work has focused on the design and analysis of algorithms that are enhanced with machine-learned predictions regarding the optimal solution. The algorithms can use the predictions as a guide to inform their decisions, and the goal is to achieve much stronger performance guarantees when these predictions are accurate (consistency), while also maintaining near-optimal worst-case guarantees, even if these predictions are very inaccurate (robustness). So far, these results have been limited to algorithms, but in this work we argue that another fertile ground for this framework is in mechanism design. We initiate the design and analysis of strategyproof mechanisms that are augmented with predictions regarding the private information of the participating agents. To exhibit the important benefits of this approach, we revisit the canonical problem of facility location with strategic agents in the two-dimensional Euclidean space. We study both the egalitarian and utilitarian social cost functions, and we propose new strategyproof mechanisms that leverage predictions to guarantee an optimal trade-off between consistency and robustness guarantees. This provides the designer with a menu of mechanism options to choose from, depending on her confidence regarding the prediction accuracy. Furthermore, we also prove parameterized approximation results as a function of the prediction error, showing that our mechanisms perform well even when the predictions are not fully accurate.
A central goal in algorithmic game theory is to analyze the performance of decentralized multiagent systems, like communication and information networks. In the absence of a central planner who can enforce how these systems are utilized, the users can strategically interact with the system, aiming to maximize their own utility, possibly leading to very inefficient outcomes, and thus a high price of anarchy. To alleviate this issue, the system designer can use decentralized mechanisms that regulate the use of each resource (e.g., using local queuing protocols or scheduling mechanisms), but with only limited information regarding the state of the system. These information limitations have a severe impact on what such decentralized mechanisms can achieve, so most of the success stories in this literature have had to make restrictive assumptions (e.g., by either restricting the structure of the networks or the types of cost functions). In this paper, we overcome some of the obstacles that the literature has imposed on decentralized mechanisms, by designing mechanisms that are enhanced with predictions regarding the missing information. Specifically, inspired by the big success of the literature on "algorithms with predictions", we design decentralized mechanisms with predictions and evaluate their price of anarchy as a function of the prediction error, focusing on two very well-studied classes of games: scheduling games and multicast network formation games.
A set of divisible resources becomes available over a sequence of rounds and needs to be allocated immediately and irrevocably. Our goal is to distribute these resources to maximize fairness and efficiency. Achieving any non-trivial guarantees in an adversarial setting is impossible. However, we show that normalizing the agent values, a very common assumption in fair division, allows us to escape this impossibility. Our main result is an online algorithm for the case of two agents that ensures the outcome is fair while guaranteeing 91.6% of the optimal social welfare. We also show that this is near-optimal: there is no fair algorithm that guarantees more than 93.3% of the optimal social welfare.
We revisit the well-studied problem of budget-feasible procurement, where a buyer with a strict budget constraint seeks to acquire services from a group of strategic providers (the sellers). During the last decade, several strategyproof budget-feasible procurement auctions have been proposed, aiming to maximize the value of the buyer, while eliciting each seller's true cost for providing their service. These solutions predominantly take the form of randomized sealed-bid auctions: they ask the sellers to report their private costs and then use randomization to determine which subset of services will be procured and how much each of the chosen providers will be paid, ensuring that the total payment does not exceed the buyer's budget. Our main result in this paper is a novel method for designing budget-feasible auctions, leading to solutions that outperform the previously proposed auctions in multiple ways. First, our solutions take the form of descending clock auctions, and thus satisfy a list of properties, such as obvious strategyproofness, group strategyproofness, transparency, and unconditional winner privacy; this makes these auctions much more likely to be used in practice. Second, in contrast to previous results that heavily depend on randomization, our auctions are deterministic. As a result, we provide an affirmative answer to one of the main open questions in this literature, asking whether a deterministic strategyproof auction can achieve a constant approximation when the buyer's valuation function is submodular over the set of services. In addition, we also provide the first deterministic budget-feasible auction that matches the approximation bound of the best-known randomized auction for the class of subadditive valuations. Finally, using our method, we improve the best-known approximation factor for monotone submodular valuations, which has been the focus of most of the prior work.
A set of divisible resources becomes available over a sequence of rounds and needs to be allocated immediately and irrevocably. Our goal is to distribute these resources to maximize fairness and efficiency. Achieving any non-trivial guarantees in an adversarial setting is impossible. However, we show that normalizing the agent values, a very common assumption in fair division, allows us to escape this impossibility. Our main result is an online algorithm for the case of two agents that ensures the outcome is envy-free while guaranteeing 91.6% of the optimal social welfare. We also show that this is near-optimal: there is no envy-free algorithm that guarantees more than 93.3% of the optimal social welfare.
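Envy-freeness with divisible resources, the guarantee discussed above, is simple to state in code: agent i's utility for a fractional bundle is the dot product of her (normalized) values with the share vector, and no agent may prefer another agent's shares. This is a definition-level sketch with an invented instance, not the paper's algorithm.

# Envy-freeness check for fractional allocations of divisible resources.
def envy_free(values, shares):
    # values[i][r]: agent i's value for resource r (rows normalized to sum 1)
    # shares[i][r]: fraction of resource r given to agent i
    n = len(values)
    for i in range(n):
        u_own = sum(v * s for v, s in zip(values[i], shares[i]))
        for j in range(n):
            u_other = sum(v * s for v, s in zip(values[i], shares[j]))
            if u_other > u_own + 1e-9:
                return False
    return True

print(envy_free([[0.5, 0.5], [0.2, 0.8]], [[1.0, 0.0], [0.0, 1.0]]))  # True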

Commonly Cited References

A set of T indivisible goods has to be allocated to a set of n agents with additive utilities, in a way that is fair and efficient. A standard fairness concept is envy-freeness, which requires that each agent prefers her own allocation over the allocation of any other agent. Even though envy is clearly unavoidable in this context - consider the case of a single indivisible good and two agents - providing approximately envy-free solutions is possible [3, 6]. Specifically, an allocation is envy-free up to one item (EF1) if for every pair of agents i and j, any envy i has for j can be eliminated by removing at most one good from j's bundle. Recently, Caragiannis et al. [3] show that the allocation that maximizes the product of the agents' utilities (with ties broken based on the number of agents with positive utility) is EF1 and Pareto efficient. The majority of the literature to date has focused on the case where the items are available to the algorithm upfront. In many situations of interest, however, items arrive online. A paradigmatic example is that of food banks [1, 5]. Food banks across the world receive food donations they must allocate; these donations are often perishable, and thus allocation decisions must be made quickly, and donations are typically leftovers, leading to uncertainty about items that will arrive in the future. Benadè et al. [2] study this problem, but focus only on fairness. They show that there exists a deterministic algorithm with vanishing envy, that is, the maximum pairwise envy (after all T items have been allocated) is sublinear in T, when the value $v_{i,t}$ of agent i for the t-th item is normalized to be in [0, 1]. Specifically, the envy is guaranteed to be at most $O(\sqrt{T \log T / n})$, and this guarantee is tight up to polylogarithmic factors. The same guarantee can also be achieved by the simple randomized algorithm that allocates each item to a uniformly random agent. These results hold even against an adaptive adversary that selects the value $v_{i,t}$ after seeing the allocation of the first $t-1$ items. On the other hand, if we focus only on efficiency, our task is much easier. For example, we could simply allocate each item to the agent with the highest value. But, and this brings us to our interest here, the question remains: How should we make allocation decisions online in a way that is fair to the donation recipients, but also as efficient as possible?
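The simple randomized baseline mentioned above is easy to simulate: allocate each arriving item to a uniformly random agent and measure the maximum pairwise envy after T items. Values are drawn uniformly in [0, 1] here purely for illustration; the guarantee cited above concerns adversarial values.

import random

def max_pairwise_envy(T=10000, n=10, seed=0):
    rng = random.Random(seed)
    # value_for[i][j]: agent i's value for the bundle agent j has received
    value_for = [[0.0] * n for _ in range(n)]
    for _ in range(T):
        vals = [rng.random() for _ in range(n)]  # each agent's value for this item
        winner = rng.randrange(n)                # uniform random allocation
        for i in range(n):
            value_for[i][winner] += vals[i]
    return max(value_for[i][j] - value_for[i][i]
               for i in range(n) for j in range(n))

# Empirically the envy grows far slower than T (roughly like sqrt(T log T / n)).
print(max_pairwise_envy())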
We consider the problem of allocating a set of divisible goods to $N$ agents in an online manner over $T$ periods, with adversarially-chosen normalized valuations in each period. Our goal is to maximize the Nash social welfare, a widely studied objective which provides a balance between fairness and efficiency. On the positive side, we provide an online algorithm that achieves a competitive ratio of $O(\log N)$ and $O(\log T)$, but also a stronger competitive ratio of $O(\log k)$ in settings where the value of any agent for her most preferred item is no more than $k$ times her average value. We complement this by showing this bound is essentially tight: no online algorithm can achieve a competitive ratio of $O(\log^{1-\epsilon} N)$ or $O(\log^{1-\epsilon} T)$ for any constant $\epsilon>0$.
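An illustrative online baseline for this setting: give each arriving good entirely to the agent whose log-utility gains the most, i.e., greedily maximize the Nash social welfare. This sketch only makes the objective concrete; it is not the paper's algorithm and does not attain its bounds.

import math

def greedy_nsw_online(rounds):
    # rounds[t][i]: agent i's value for the good arriving in period t
    n = len(rounds[0])
    utility = [0.0] * n
    for vals in rounds:
        # recipient maximizing the product of utilities after the gift
        i = max(range(n), key=lambda k: math.log(utility[k] + vals[k] + 1e-12)
                                        - math.log(utility[k] + 1e-12))
        utility[i] += vals[i]
    nsw = math.prod(utility) ** (1.0 / n)   # geometric mean of utilities
    return utility, nsw

print(greedy_nsw_online([[0.6, 0.4], [0.1, 0.9], [0.5, 0.5]]))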
We consider K-Facility Location games, where n strategic agents report their locations in a metric space and a mechanism maps them to K facilities. The agents seek to minimize their connection cost, namely the distance of their true location to the nearest facility, and may misreport their location. We are interested in deterministic mechanisms that are strategyproof, that is, ensure that no agent can benefit from misreporting her location, do not resort to monetary transfers, and achieve a bounded approximation ratio to the total connection cost of the agents (or to the $L_p$ norm of the connection costs, for some p ∈ [1, ∞) or for p = ∞). Our main result is an elegant characterization of deterministic strategyproof mechanisms with a bounded approximation ratio for 2-Facility Location on the line. In particular, we show that for instances with n ≥ 5 agents, any such mechanism either admits a unique dictator or always places the facilities at the leftmost and the rightmost location of the instance. As a corollary, we obtain that the best approximation ratio achievable by deterministic strategyproof mechanisms for the problem of locating 2 facilities on the line to minimize the total connection cost is precisely n-2. Another rather surprising consequence is that the Two-Extremes mechanism of Procaccia and Tennenholtz [2009] is the only deterministic anonymous strategyproof mechanism with a bounded approximation ratio for 2-Facility Location on the line. The proof of the characterization employs several new ideas and technical tools, which provide new insights into the behavior of deterministic strategyproof mechanisms for K-Facility Location games and may be of independent interest. Employing one of these tools, we show that for every K ≥ 3, there do not exist any deterministic anonymous strategyproof mechanisms with a bounded approximation ratio for K-Facility Location on the line, even for simple instances with K+1 agents. Moreover, building on the characterization for the line, we show that there do not exist any deterministic strategyproof mechanisms with a bounded approximation ratio for 2-Facility Location and instances with n ≥ 3 agents located in a star.
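The Two-Extremes mechanism referenced above is simple enough to state in a few lines: place the two facilities at the leftmost and rightmost reported locations, and let each agent connect to her nearest facility. The instance below is made up for illustration; its cost ratio stays within the n-2 bound cited above.

# Two-Extremes mechanism for 2-facility location on the line.
def two_extremes(reports):
    return min(reports), max(reports)

def total_connection_cost(locations, facilities):
    return sum(min(abs(x - f) for f in facilities) for x in locations)

reports = [0.0, 4.9, 5.0, 5.1, 10.0]
facs = two_extremes(reports)
print(facs, total_connection_cost(reports, facs))  # cost falls on the middle agents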
In this paper, we consider the following dynamic fair allocation problem: Given a sequence of job arrivals and departures, the goal is to maintain an approximately fair allocation of the resource against a target fair allocation policy, while minimizing the total number of disruptions, which is the number of times the allocation of any job is changed. We consider a rich class of fair allocation policies that significantly generalize those considered in previous work.
Online Graph Algorithms with Predictions. Yossi Azar, Debmalya Panigrahi, and Noam Touitou. Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 35-66. DOI: https://doi.org/10.1137/1.9781611977073.3. Online algorithms with predictions is a popular and elegant framework for bypassing pessimistic lower bounds in competitive analysis. In this model, online algorithms are supplied with future predictions, and the goal is for the competitive ratio to smoothly interpolate between the best offline and online bounds as a function of the prediction error. In this paper, we study online graph problems with predictions. Our contributions are the following: The first question is defining prediction error. For graph/metric problems, there can be two types of error: locations that are not predicted, and locations that are predicted but where the predicted and actual locations do not coincide exactly. We design a novel definition of prediction error called metric error with outliers to simultaneously capture both types of errors, which thereby generalizes previous definitions of error that only capture one of the two error types. We give a general framework for obtaining online algorithms with predictions that combines, in a "black box" fashion, existing online and offline algorithms, under certain technical conditions. To the best of our knowledge, this is the first general-purpose tool for obtaining online algorithms with predictions. Using our framework, we obtain tight bounds on the competitive ratio of several classical graph problems as a function of metric error with outliers: Steiner tree, Steiner forest, priority Steiner tree/forest, and uncapacitated/capacitated facility location. Both the definition of metric error with outliers and the general framework for combining offline and online algorithms are not specific to the problems that we consider in this paper. We hope that these will be useful for future work on other problems in this domain.
Budget feasible mechanisms, recently initiated by Singer (FOCS 2010), extend algorithmic mechanism design problems to a realistic setting with a budget constraint. We consider the problem of designing truthful budget feasible mechanisms for monotone submodular functions: We give a randomized mechanism with an approximation ratio of 7.91 (improving on the previous best-known result of 233.83), and a deterministic mechanism with an approximation ratio of 8.34. We also study the knapsack problem, which is a special submodular function, and give a (2 + √2)-approximation deterministic mechanism (improving on the previous best-known result of 5), and a 3-approximation randomized mechanism. We provide similar results for an extended knapsack problem with heterogeneous items, where items are divided into groups and one can pick at most one item from each group. Finally, we show a lower bound of 1 + √2 for the approximation ratio of deterministic mechanisms and 2 for randomized mechanisms for knapsack, as well as for general monotone submodular functions. Our lower bounds are unconditional, and do not rely on any computational or complexity assumptions.
We consider a monopolist seller facing a single buyer with additive valuations over n heterogeneous, independent items. It is known that in this important setting optimal mechanisms may require randomization [12], use menus of infinite size [9], and may be computationally intractable [8]. This has sparked recent interest in finding simple mechanisms that obtain reasonable approximations to the optimal revenue [10, 15, 3]. In this work we attempt to find the optimal simple mechanism.
The fair division of resources among strategic agents is an important age-old problem that has led to a rich body of literature. At the center of this literature lies the question of whether there exist mechanisms that can implement fair outcomes, despite the agents' strategic behavior. A fundamental objective function used for measuring the fairness of an allocation is the geometric mean of the agents' values, known as the Nash social welfare (NSW). This objective function is maximized by widely known solution concepts such as Nash bargaining and the competitive equilibrium with equal incomes.
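For concreteness, with an allocation $x$ giving bundle $x_i$ to each of $n$ agents with utility functions $u_i$, the objective described above is $\mathrm{NSW}(x) = \big(\prod_{i=1}^{n} u_i(x_i)\big)^{1/n}$, the geometric mean of the agents' utilities.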
We study the problem of a budget limited buyer who wants to buy a set of items, each from a different seller, to maximize her value. The budget feasible mechanism design problem requires the design of a mechanism that incentivizes the sellers to truthfully report their cost and maximizes the buyer's value while guaranteeing that the total payment does not exceed her budget. Such budget feasible mechanisms can model a buyer in a crowdsourcing market interested in recruiting a set of workers (sellers) to accomplish a task for her. This budget feasible mechanism design problem was introduced by Singer in 2010. We consider the general case where the buyer's valuation is a monotone submodular function. There are a number of truthful mechanisms known for this problem. We offer two general frameworks for simple mechanisms, and by combining these frameworks, we significantly improve on the best known results, while also simplifying the analysis. For example, we improve the approximation guarantee for the general monotone submodular case from 7.91 to 5 and for the case of large markets (where each individual item has negligible value) from 3 to 2.58. More generally, given an $r$-approximation algorithm for the optimization problem (ignoring incentives), our mechanism is an $(r+1)$-approximation mechanism for large markets, an improvement from $2r^2$. We also provide a mechanism without the large market assumption, where we achieve a $(4r+1)$-approximation guarantee. We also show how our results can be used for the problem of a principal hiring in a crowdsourcing market to select a set of tasks subject to a total budget.
In this paper, we show a tight approximation guarantee for budget-feasible mechanisms with an additive buyer. We propose a new simple randomized mechanism with approximation ratio of $2$, improving the previous best known result of $3$. Our bound is tight with respect to either the optimal offline benchmark, or its fractional relaxation. We also present a simple deterministic mechanism with the tight approximation guarantee of $3$ against the fractional optimum, improving the best known result of $(2+ \sqrt{2})$ for the weaker integral benchmark.
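The mechanisms above are tuned for tight constants; as a rough, assumption-heavy sketch of the greedy template common in budget-feasible procurement, the snippet below sorts sellers by value per reported cost and admits a seller only if her cost fits within a proportional share of the budget. Payments here simply exhaust the budget proportionally to value, which keeps budget feasibility but is not the truthful threshold payment rule of the papers above.

# Greedy budget-feasible template (illustration only, not truthful as written).
def greedy_budget_feasible(values, costs, B):
    order = sorted(range(len(values)), key=lambda i: values[i] / costs[i],
                   reverse=True)
    winners, total_value = [], 0.0
    for i in order:
        share = B * values[i] / (total_value + values[i])  # proportional budget share
        if costs[i] <= share:          # seller is cheap enough to afford
            winners.append(i)
            total_value += values[i]
    payments = {i: B * values[i] / total_value for i in winners}
    return winners, payments           # payments sum to exactly B

print(greedy_budget_feasible([6.0, 5.0, 1.0], [2.0, 3.0, 2.0], B=6.0))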
Several fairness concepts have been proposed recently in attempts to approximate envy-freeness in settings with indivisible goods. Among them, the concept of envy-freeness up to any item (EFX) is arguably the closest to envy-freeness. Unfortunately, EFX allocations are not known to exist except in a few special cases. We make significant progress in this direction. We show that for every instance with additive valuations, there is an EFX allocation of a subset of items with a Nash welfare that is at least half of the maximum possible Nash welfare for the original set of items. That is, after donating some items to a charity, one can distribute the remaining items in a fair way with high efficiency. This bound is proved to be best possible. Our proof is constructive and highlights the importance of maximum Nash welfare allocation. Starting with such an allocation, our algorithm decides which items to donate and redistributes the initial bundles to the agents, eventually obtaining an allocation with the claimed efficiency guarantee. The application of our algorithm to large markets, where the valuations of an agent for every item is relatively small, yields EFX with almost optimal Nash welfare. To the best of our knowledge, this is the first use of large market assumptions in the fair division literature. We also show that our algorithm can be modified to compute, in polynomial-time, EFX allocations that approximate optimal Nash welfare within a factor of at most $2\rho$, using a $\rho$-approximate allocation on input instead of the maximum Nash welfare one.
We consider the problem of designing network cost-sharing protocols with good equilibria under uncertainty. The underlying game is a multicast game in a rooted undirected graph with nonnegative edge costs. A set of $k$ terminal vertices or players needs to establish connectivity with the root. The social optimum is the minimum Steiner tree. We study situations where the designer has incomplete information about the input. We propose two different models, the adversarial and the stochastic. In both models, the designer has prior knowledge of the underlying graph metric, but the requested subset of the players is not known and is activated either in an adversarial manner (adversarial model) or is drawn from a known probability distribution (stochastic model). In the adversarial model, the goal of the designer is to choose a single, universal cost-sharing protocol that has low Price of Anarchy (PoA) for all possible requested subsets of players. The main question we address is, to what extent can prior knowledge of the underlying graph metric help in the design? We first demonstrate that there exist classes of graphs where knowledge of the underlying graph metric can dramatically improve the performance of good network cost-sharing design. For outerplanar graph metrics, we provide a universal cost-sharing protocol with constant PoA, in contrast to protocols that, by ignoring the graph metric, cannot achieve PoA better than $\Omega(\log k)$. Then, in our main technical result, we show that there exist graph metrics for which knowing the underlying graph metric does not help and any universal protocol has PoA of $\Omega(\log k)$, which is tight. We attack this problem by developing new techniques that employ powerful tools from extremal combinatorics, and more specifically Ramsey theory in high-dimensional hypercubes. Then we switch to the stochastic model, where the players are activated according to some probability distribution that is known to the designer. We show that there exists a randomized ordered protocol that achieves constant PoA. If, further, each player is activated independently with some probability, by using standard derandomization techniques, we produce a deterministic ordered protocol that achieves constant PoA. We remark that the first result holds also for the black-box model, where the probabilities are not known to the designer, but she is allowed to draw independent (polynomially many) samples.
It is standard in computational social choice to analyse welfare considerations under the assumption of normalized utilities. In this note, we summarize some common reasons for this approach. We then mention another justification which, though ignored, has solid normative appeal. The central concept used in the `new' justification can also be used more widely as a social objective.
The goal of fair division is to distribute resources among competing players in a “fair” way. Envy-freeness is the most extensively studied fairness notion in fair division. Envy-free allocations do not always exist with indivisible goods, motivating the study of relaxed versions of envy-freeness. We study the envy-freeness up to any good (EFX) property, which states that no player prefers the bundle of another player following the removal of any single good, and prove the first general results about this property. We use the leximin solution to show existence of EFX allocations in several contexts, sometimes in conjunction with Pareto optimality. For two players with valuations obeying a mild assumption, one of these results provides stronger guarantees than the currently deployed algorithm on Spliddit, a popular fair division website. Unfortunately, finding the leximin solution can require exponential time. We show that this is necessary by proving an exponential lower bound on the number of value queries needed to identify an EFX allocation, even for two players with identical valuations. We consider both additive and more general valuations, and our work suggests that there is a rich landscape of problems to explore in the fair division of indivisible goods with different classes of player valuations.
We consider a monopolist seller with n heterogeneous items, facing a single buyer. The buyer has a value for each item drawn independently according to (non-identical) distributions, and her value for a set of items is additive. The seller aims to maximize his revenue. We suggest using the a priori better of two simple pricing methods: selling the items separately, each at its optimal price, and bundling them together, in which the entire set of items is sold as one bundle at its optimal price. We show that for any distribution, this mechanism achieves a constant-factor approximation to the optimal revenue. Beyond its simplicity, this is the first computationally tractable mechanism to obtain a constant-factor approximation for this multi-parameter problem. We additionally discuss extensions to multiple buyers and to valuations that are correlated across items.
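The "better of the two simple mechanisms" comparison above is easy to estimate by simulation: compute the best posted price for each item separately versus the best posted price for the grand bundle, and take the larger revenue. The distributions below are assumptions chosen purely for illustration, and revenues are Monte Carlo estimates.

import random

def best_posted_price_revenue(samples):
    # Posting price xs[k] sells to the (n - k) samples valued at least xs[k].
    xs = sorted(samples)
    n = len(xs)
    return max(xs[k] * (n - k) / n for k in range(n))

rng = random.Random(1)
N = 20000
item1 = [rng.uniform(0, 1) for _ in range(N)]   # assumed value distribution
item2 = [rng.uniform(0, 2) for _ in range(N)]   # assumed value distribution
srev = best_posted_price_revenue(item1) + best_posted_price_revenue(item2)
brev = best_posted_price_revenue([a + b for a, b in zip(item1, item2)])
print(srev, brev, max(srev, brev))              # the better of SRev and BRev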
We study the problem of allocating a set of indivisible goods among agents with subadditive valuations in a fair and efficient manner. Envy-Freeness up to any good (EFX) is the most compelling notion of fairness in the context of indivisible goods. Although the existence of EFX is not known beyond the simple case of two agents with subadditive valuations, some good approximations of EFX are known to exist, namely 1/2-EFX allocation and EFX allocations with bounded charity. Nash welfare (the geometric mean of agents' valuations) is one of the most commonly used measures of efficiency. In case of additive valuations, an allocation that maximizes Nash welfare also satisfies fairness properties like Envy-Free up to one good (EF1). Although there is substantial work on approximating Nash welfare when agents have additive valuations, very little is known when agents have subadditive valuations. In this paper, we design a polynomial-time algorithm that outputs an allocation that satisfies either of the two approximations of EFX as well as achieves an O(n) approximation to the Nash welfare. Our result also improves the current best-known approximation of O(n log n) and O(m) to Nash welfare when agents have submodular and subadditive valuations, respectively. Furthermore, our technique also gives an O(n) approximation to a family of welfare measures, the $p$-mean of valuations for $p \in (-\infty, 1]$, thereby also matching asymptotically the current best approximation ratio for special cases like $p = -\infty$ while also retaining the remarkable fairness properties.
We consider the classic problem of fairly allocating indivisible goods among agents with additive valuation functions and explore the connection between two prominent fairness notions: maximum Nash welfare (MNW) and envy-freeness up to any good (EFX). We establish that an MNW allocation is always EFX as long as there are at most two possible values for the goods, whereas this implication is no longer true for three or more distinct values. As a notable consequence, this proves the existence of EFX allocations for these restricted valuation functions. While the efficient computation of an MNW allocation for two possible values remains an open problem, we present a novel algorithm for directly constructing EFX allocations in this setting. Finally, we study the question of whether an MNW allocation implies any EFX guarantee for general additive valuation functions under a natural new interpretation of approximate EFX allocations.
We study the problem of allocating divisible resources to agents with different preferences. We analyze a market game known as Trading Post, first considered by Shapley and Shubik, where each agent gets a budget of virtual currency to bid on goods: after bids are placed, goods are allocated to players in proportion to their bids. In this setting, the agents choose their bids strategically, aiming to maximize their utility, and this gives rise to a game. We study the equilibrium allocations of this game, measuring the quality of an allocation via the Nash social welfare, the geometric mean of utilities (a measure of aggregate welfare that respects individual needs). We show that any Nash equilibrium of Trading Post approximates the optimal Nash welfare within a factor of two for all concave valuations, and the mechanism is essentially optimal for Leontief valuations.
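The Trading Post allocation rule described above is a one-liner per good: each good is split among the bidders in proportion to the bids placed on it. Bids here are exogenous inputs for illustration; choosing them strategically is what gives rise to the game analyzed in the paper.

# Trading Post allocation: proportional-to-bids shares for each good.
def trading_post(bids):
    # bids[i][j]: agent i's bid on good j; returns shares[i][j], i's share of j
    n, m = len(bids), len(bids[0])
    shares = [[0.0] * m for _ in range(n)]
    for j in range(m):
        total = sum(bids[i][j] for i in range(n))
        for i in range(n):
            shares[i][j] = bids[i][j] / total if total > 0 else 0.0
    return shares

print(trading_post([[0.75, 0.25], [0.5, 0.5]]))  # budgets of 1 split over 2 goods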
In the budget-feasible allocation problem, a set of items with varied sizes and values is to be allocated to a group of agents. Each agent has a budget constraint on the total size of items she can receive. The goal is to compute a feasible allocation that is envy-free (EF), in which the agents do not envy each other for the items they receive, nor do they envy a charity, who is endowed with all the unallocated items. Since EF allocations barely exist even without budget constraints, we are interested in the relaxed notion of envy-freeness up to one item (EF1). The computation of both exact and approximate EF1 allocations remains largely open, despite a recent effort by Wu et al. (IJCAI 2021) in showing that any budget-feasible allocation that maximizes the Nash Social Welfare (NSW) is 1/4-approximate EF1. In this paper, we move one step forward by showing that for agents with identical additive valuations, a 1/2-approximate EF1 allocation can be computed in polynomial time. For the uniform-budget and two-agent cases, we propose efficient algorithms for computing an exact EF1 allocation. We also consider the large budget setting, i.e., when the item sizes are infinitesimal compared with the agents' budgets, and show that both the NSW maximizing allocation and the allocation our polynomial-time algorithm computes have an approximation close to 1 regarding EF1.
The fair division of indivisible goods is a very well-studied problem. The goal of this problem is to distribute $m$ goods to $n$ agents in a "fair" manner, where every agent has a valuation for each subset of goods. We assume monotone valuations. Envy-freeness is the most extensively studied notion of fairness. However, envy-free allocations do not always exist when goods are indivisible. The notion of fairness we consider here is "envy-freeness up to any good," EFX, where no agent envies another agent after the removal of any single good from the other agent's bundle. It is not known if such an allocation always exists. We show there is always a partition of the set of goods into $n+1$ subsets $(X_1,\ldots,X_n,P)$, where for $i \in [n]$, $X_i$ is the bundle allocated to agent $i$ and the set $P$ is unallocated (or donated to charity) such that we have (1) envy-freeness up to any good, (2) no agent values $P$ higher than her own bundle, and (3) fewer than $n$ goods go to charity, i.e., $|P| < n$ (typically $m \gg n$). Our proof is constructive and leads to a pseudopolynomial time algorithm to find such an allocation. When agents have additive valuations and $|P|$ is large (i.e., when $|P|$ is close to $n$), our allocation also has a good maximin share (MMS) guarantee. Moreover, a minor variant of our algorithm also shows the existence of an allocation that is 4/7 groupwise maximin share (GMMS): this is a notion of fairness stronger than MMS. This improves upon the current best bound of 1/2 known for an approximate GMMS allocation. (Very recently and independently, Amanatidis, Ntokos, and Markakis [Theoret. Comput. Sci., 841 (2020), pp. 94--109], also showed the existence of a 4/7-GMMS allocation.)
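The three guarantees of the partition $(X_1,\ldots,X_n,P)$ can be verified directly for any candidate output. The sketch below (valuations are passed as callables on sets, since the result holds for general monotone valuations; all names are illustrative) checks EFX among the agents, that no agent prefers the pool $P$ to her own bundle, and that $|P| < n$.

```python
def check_efx_with_charity(valuations, bundles, pool):
    # valuations[i] is a callable mapping a set of goods to agent i's
    # (monotone) value for it; bundles[i] is agent i's bundle, pool is P.
    n = len(bundles)
    for i in range(n):
        vi = valuations[i]
        own = vi(frozenset(bundles[i]))
        if own < vi(frozenset(pool)):                      # (2) pool preferred
            return False
        for j in range(n):
            if i == j:
                continue
            for g in bundles[j]:
                if own < vi(frozenset(bundles[j]) - {g}):  # (1) EFX violated
                    return False
    return len(pool) < n                                   # (3) |P| < n
```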
Online Nash Social Welfare Maximization with Predictions. Siddhartha Banerjee, Vasilis Gkatzelis, Artur Gorokh, and Billy Jin. Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 1--19. https://doi.org/10.1137/1.9781611977073.1. We consider the problem of allocating a set of divisible goods to $N$ agents in an online manner, aiming to maximize the Nash social welfare, a widely studied objective which provides a balance between fairness and efficiency. The goods arrive in a sequence of $T$ periods and the value of each agent for a good is adversarially chosen when the good arrives. We first observe that no online algorithm can achieve a competitive ratio better than the trivial $O(N)$, unless it is given additional information about the agents' values. Then, in line with the emerging area of "algorithms with predictions", we consider a setting where for each agent, the online algorithm is only given a prediction of her monopolist utility, i.e., her utility if all goods were given to her alone (corresponding to the sum of her values over the $T$ periods). Our main result is an online algorithm whose competitive ratio is parameterized by the multiplicative errors in these predictions. The algorithm achieves a competitive ratio of $O(\log N)$ and $O(\log T)$ if the predictions are perfectly accurate. Moreover, the competitive ratio degrades smoothly with the errors in the predictions, and is surprisingly robust: the logarithmic competitive ratio holds even if the predictions are very inaccurate. We complement this positive result by showing that our bounds are essentially tight: no online algorithm, even if provided with perfectly accurate predictions, can achieve a competitive ratio of $O(\log^{1-\epsilon} N)$ or $O(\log^{1-\epsilon} T)$ for any constant $\epsilon > 0$.
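One natural prediction-driven rule in this spirit is sketched below in Python: split each arriving good among the agents in proportion to their prediction-normalized values, so that agents with small monopolist utility are not drowned out by agents with large one. This is an illustrative baseline under the paper's information model, not necessarily the algorithm that achieves the stated bounds.

```python
def allocate_online(value_stream, predicted_monopolist_utility):
    # value_stream yields lists v_t, where v_t[i] is agent i's value for the
    # good arriving in period t, revealed only on arrival.
    # predicted_monopolist_utility[i] predicts sum_t v_t[i].
    n = len(predicted_monopolist_utility)
    utility = [0.0] * n
    for v_t in value_stream:
        # Normalize each value by the agent's predicted monopolist utility.
        weights = [v_t[i] / max(predicted_monopolist_utility[i], 1e-12)
                   for i in range(n)]
        total = sum(weights)
        if total == 0:
            continue  # nobody values this good
        for i in range(n):
            utility[i] += v_t[i] * weights[i] / total
    return utility
```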
The theory of algorithmic fair allocation has been at the center of multi-agent systems and economics research over the last decade due to its industrial and social importance. At a high level, the problem is to assign a set of items that are either goods or chores to a set of agents so that every agent is happy with what she obtains. In this survey, we focus on indivisible items, for which absolute fairness notions such as envy-freeness and proportionality cannot be guaranteed. One main theme in the recent research agenda is designing algorithms that approximately achieve these fairness criteria. We present a comprehensive survey of recent progress through the prism of algorithms, highlighting ways to relax fairness notions and common techniques for designing algorithms, as well as the most interesting questions for future research.
Allocating resources to individuals in a fair manner has been a topic of interest since ancient times, with most of the early rigorous mathematical work on the problem focusing on infinitely divisible resources. Recently, there has been a surge of papers studying computational questions regarding various notions of fairness for the indivisible case, such as maximin share fairness (MMS) and envy-freeness up to any good (EFX). We survey the most important results in the discrete fair division literature, focusing on the case of additive valuation functions and paying particular attention to the progress made in the last 10 years.
We study the fair allocation of indivisible goods among agents with identical, additive valuations but individual budget constraints. Here, the indivisible goods--each with a specific size and value--need to be allocated such that the bundle assigned to each agent is of total size at most the agent's budget. Since envy-free allocations do not necessarily exist in the indivisible goods context, compelling relaxations--in particular, the notion of envy-freeness up to k goods (EFk)--have received significant attention in recent years. In an EFk allocation, each agent prefers its own bundle over that of any other agent, up to the removal of k goods, and the agents have similarly bounded envy against the charity (which corresponds to the set of all unallocated goods). It has been shown in prior work that an allocation that satisfies the budget constraints and maximizes the Nash social welfare is 1/4-approximately EF1. However, the computation (or even existence) of exact EFk allocations remained an intriguing open problem. We make notable progress towards this by proposing a simple, greedy, polynomial-time algorithm that computes EF2 allocations under budget constraints. Our algorithmic result implies the universal existence of EF2 allocations in this fair division context. The analysis of the algorithm exploits intricate structural properties of envy-freeness. Interestingly, the same algorithm also provides EF1 guarantees for important special cases. Specifically, we settle the existence of EF1 allocations for instances in which: (i) the value of each good is proportional to its size, (ii) all the goods have the same size, or (iii) all the goods have the same value. Our EF2 result even extends to the setting wherein the goods' sizes are agent specific.
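A simple greedy in this spirit is sketched below (illustrative only: the paper's EF2 algorithm and its ordering and tie-breaking rules are more delicate, and positive good sizes are assumed): process the goods by value density and always extend the currently poorest agent whose remaining budget fits the good, donating goods that fit nobody to the charity.

```python
def greedy_budgeted(values, sizes, budgets):
    # Identical additive valuations; values[g], sizes[g] per good (sizes > 0).
    n = len(budgets)
    order = sorted(range(len(values)),
                   key=lambda g: values[g] / sizes[g], reverse=True)
    bundles = [[] for _ in range(n)]
    used = [0.0] * n      # size consumed per agent
    value_of = [0.0] * n  # value accumulated per agent
    charity = []
    for g in order:
        feasible = [i for i in range(n) if used[i] + sizes[g] <= budgets[i]]
        if not feasible:
            charity.append(g)  # fits nobody: donate to the charity
            continue
        i = min(feasible, key=lambda a: value_of[a])  # poorest feasible agent
        bundles[i].append(g)
        used[i] += sizes[g]
        value_of[i] += values[g]
    return bundles, charity
```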
We study fair division of goods under the broad class of generalized assignment constraints. In this constraint framework, the sizes and values of the goods are agent-specific, and one needs to allocate the goods among the agents fairly while further ensuring that each agent receives a bundle of total size at most the corresponding budget of the agent. Since, in such a constraint setting, it may not always be feasible to partition all the goods among the agents, we conform---as in recent works---to the construct of charity to designate the set of unassigned goods. For this allocation framework, we obtain existential and computational guarantees for envy-free (appropriately defined) allocation of divisible and indivisible goods, respectively, among agents with individual, additive valuations for the goods.
We study the problem of allocating indivisible items to budget-constrained agents, aiming to provide fairness and efficiency guarantees. Specifically, our goal is to ensure that the resulting allocation is envy-free up to any item (EFx) while minimizing the amount of inefficiency that this needs to introduce. We first show that there exist two-agent problem instances for which no EFx allocation is Pareto-efficient. We, therefore, turn to approximation and use the (Pareto-efficient) maximum Nash welfare allocation as a benchmark. For two-agent instances, we provide a procedure that always returns an EFx allocation while achieving the best possible approximation of the optimal Nash social welfare that EFx allocations can achieve. For the more complicated case of three-agent instances, we provide a procedure that guarantees EFx, while achieving a constant approximation of the optimal Nash social welfare for any number of items.
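For tiny instances, the tradeoff the paper studies can be measured exhaustively. The Python sketch below (illustrative names; exponential in the number of items, and it permits items to stay unallocated) computes the best Nash welfare attainable by a budget-feasible EFx allocation relative to the unconstrained budget-feasible optimum.

```python
from itertools import product
from math import prod

def best_efx_nash_ratio(values, sizes, budgets):
    # values[i][g]: agent i's value for item g; an item labeled n is unallocated.
    n, m = len(values), len(values[0])

    def feasible(bundles):
        return all(sum(sizes[g] for g in bundles[i]) <= budgets[i]
                   for i in range(n))

    def nw(bundles):
        return prod(sum(values[i][g] for g in bundles[i]) for i in range(n))

    def efx(bundles):
        for i in range(n):
            own = sum(values[i][g] for g in bundles[i])
            for j in range(n):
                if i != j and any(
                        own < sum(values[i][h] for h in bundles[j]) - values[i][g]
                        for g in bundles[j]):
                    return False
        return True

    best_any, best_fair = 0, 0
    for assignment in product(range(n + 1), repeat=m):
        bundles = [[g for g in range(m) if assignment[g] == i] for i in range(n)]
        if not feasible(bundles):
            continue
        w = nw(bundles)
        best_any = max(best_any, w)
        if efx(bundles):
            best_fair = max(best_fair, w)
    return best_fair / best_any if best_any else 1.0
```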
A major problem in fair division is how to allocate a set of indivisible resources among agents fairly and efficiently. The goal of this work is to characterize the tradeoffs between two well-studied measures of fairness and efficiency --- envy freeness up to any item (EFX) for fairness, and Nash welfare for efficiency --- by asking, for given constants α and β, whether there exists an α-EFX allocation that guarantees a β-fraction of the maximum Nash welfare (β-MNW). For additive valuations, we show that for any α ∈ [0,1], there exists a partial allocation that is α-EFX and 1/(α+1)-MNW. This tradeoff turns out to be tight (for every α) as demonstrated by an impossibility result that we give. We also show that for α ∈ [0, φ-1 ≃ 0.618] these partial allocations can be turned into complete allocations where all items are assigned. Furthermore, for any α ∈ [0, 1/2], we show that the tight tradeoff of α-EFX and 1/(α+1)-MNW with complete allocations holds for the more general setting of subadditive valuations. Our results improve upon the current state of the art, for both additive and subadditive valuations, and match the best-known approximations of EFX under complete allocations, regardless of Nash welfare guarantees. Notably, our constructions for additive valuations also provide EF1 and constant approximations for maximin share guarantees.
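The tradeoff curve 1/(α+1) is easy to tabulate; the snippet below evaluates it at a few points, including the golden-ratio threshold α = φ-1, where the Nash welfare guarantee 1/(α+1) = 1/φ happens to equal α itself.

```python
phi_minus_1 = (5 ** 0.5 - 1) / 2  # the golden-ratio conjugate, ~0.618
for alpha in (0.0, 0.5, phi_minus_1, 1.0):
    print(f"alpha = {alpha:.3f} -> Nash welfare guarantee 1/(alpha+1) = {1 / (alpha + 1):.3f}")
```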
We consider the problem of budget feasible mechanism design proposed by Singer, but in a Bayesian setting. A principal has a public value for hiring a subset of the agents and a budget, while the agents have private costs for being hired. We consider both additive and submodular value functions of the principal. We show that there are simple, practical, ex post budget balanced posted pricing mechanisms that approximate the value obtained by the Bayesian optimal mechanism that is budget balanced only in expectation. A main motivating application for this work is crowdsourcing, e.g., on Mechanical Turk, where workers are drawn from a large population and posted pricing is standard. Our analysis methods relate to contention resolution schemes in submodular optimization of Vondrák et al. and the correlation gap analysis of Yan.
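The basic posted-pricing template is easy to sketch (illustrative Python; the papers derive the price from the cost prior and handle the value function more carefully): offer each arriving worker a fixed price, which makes truthful acceptance a dominant strategy, and never offer more than the remaining budget, so the mechanism is ex post budget balanced.

```python
def posted_pricing(workers, price, budget):
    # workers: iterable of private costs, revealed only through accept/reject.
    # A worker accepts iff her cost is at most the posted price, and we stop
    # offering once the remaining budget cannot cover another payment.
    hired, spent = [], 0.0
    for idx, cost in enumerate(workers):
        if spent + price > budget:
            break  # ex post budget balance: never promise more than we have
        if cost <= price:
            hired.append(idx)
            spent += price
    return hired, spent
```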
In this paper we consider a mechanism design problem in the context of large-scale crowdsourcing markets such as Amazon's Mechanical Turk, ClickWorker, and CrowdFlower. In these markets, there is a requester who wants to hire workers to accomplish some tasks. Each worker is assumed to give some utility to the requester on getting hired. Moreover, each worker has a minimum cost that he wants to get paid for getting hired. This minimum cost is assumed to be private information of the workers. The question then is: if the requester has a limited budget, how does one design a direct revelation mechanism that picks the right set of workers to hire in order to maximize the requester's utility? We note that although previous work (Singer (2010), Chen et al. (2011)) has studied this problem, a crucial difference in which we deviate from earlier work is the notion of large-scale markets that we introduce in our model. Without the large market assumption, it is known that no mechanism can achieve a competitive ratio better than 0.414 and 0.5 for deterministic and randomized mechanisms respectively (while the best known deterministic and randomized mechanisms achieve an approximation ratio of 0.292 and 0.33 respectively). In this paper, we design a budget-feasible mechanism for large markets that achieves a competitive ratio of 1 - 1/e ≃ 0.63. Our mechanism can be seen as a generalization of an alternate way to look at the proportional share mechanism, which is used in all the previous works so far on this problem. Interestingly, we can also show that our mechanism is optimal by showing that no truthful mechanism can achieve a factor better than 1 - 1/e, thus fully resolving this setting. Finally, we consider the more general case of submodular utility functions and give new and improved mechanisms for the case when the market is large.
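The proportional share idea underlying this line of work has a short allocation-rule sketch (payments and the large-market mechanism itself follow the papers; names here are illustrative): sort workers by cost per unit of value and keep hiring while each worker's cost stays within her proportional share of the budget.

```python
def proportional_share(costs, values, budget):
    # Hire worker i while c_i <= v_i * B / (total value hired so far, incl. i),
    # processing workers in increasing order of cost per unit value.
    order = sorted(range(len(costs)), key=lambda i: costs[i] / values[i])
    hired, total_value = [], 0.0
    for i in order:
        if costs[i] <= values[i] * budget / (total_value + values[i]):
            hired.append(i)
            total_value += values[i]
        else:
            break
    return hired
```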
We consider the problem of conducting a survey with the goal of obtaining an unbiased estimator of some population statistic when individuals have unknown costs (drawn from a known prior) for participating in the survey. Individuals must be compensated for their participation and are strategic agents, and so the payment scheme must incentivize truthful behavior. We derive optimal truthful mechanisms for this problem for the two goals of minimizing the variance of the estimator given a fixed budget, and minimizing the expected cost of the survey given a fixed variance goal.
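A standard construction in this spirit, sketched below (an illustrative simplification, not the paper's optimal mechanism), posts a price so that participation is truthful and then reweights each response by the prior participation probability to keep the estimator of the mean answer unbiased.

```python
def survey_estimate(sample, cost_cdf, price):
    # sample: list of dicts with a respondent's private "cost" and her
    # "answer"; cost_cdf(p) = Pr[cost <= p] under the known prior.
    # Dividing each observed answer by the participation probability keeps
    # the mean estimator unbiased, provided costs are independent of answers.
    total = 0.0
    for person in sample:
        if person["cost"] <= price:  # accepting the posted price is truthful
            total += person["answer"] / cost_cdf(price)
    return total / len(sample)
```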
We study a novel class of mechanism design problems in which the outcomes are constrained by the payments. This basic class of mechanism design problems captures many common economic situations, and yet it has not been studied, to our knowledge, in the past. We focus on the case of procurement auctions in which sellers have private costs, and the auctioneer aims to maximize a utility function on subsets of items, under the constraint that the sum of the payments provided by the mechanism does not exceed a given budget. Standard mechanism design ideas such as the VCG mechanism and its variants are not applicable here. We show that, for general functions, the budget constraint can render mechanisms arbitrarily bad in terms of the utility of the buyer. However, our main result shows that for the important class of submodular functions, a bounded approximation ratio is achievable. Better approximation results are obtained for subclasses of the submodular functions. We explore the space of budget feasible mechanisms in other domains and give a characterization under more restricted conditions.
We study combinatorial procurement auctions, where a buyer with a valuation function $v$ and budget $B$ wishes to buy a set of items. Each item $i$ has a cost $c_i$ and the buyer is interested in a set $S$ that maximizes $v(S)$ subject to $\sum_{i \in S} c_i \le B$. Special cases of combinatorial procurement auctions are well-studied problems from submodular optimization. In particular, when the costs are all equal (cardinality constraint), a classic result by Nemhauser et al. shows that the greedy algorithm provides an $e/(e-1)$ approximation.
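The greedy algorithm referenced here is short enough to state in full; below is a standard Python rendering, demonstrated on a toy coverage function (the instance is illustrative).

```python
def greedy_cardinality(ground_set, v, k):
    # Greedy of Nemhauser, Wolsey, and Fisher for maximizing a monotone
    # submodular function v under |S| <= k: repeatedly add the element with
    # the largest marginal gain. Guarantees a (1 - 1/e) fraction of the
    # optimum, i.e., an e/(e-1) approximation.
    S = set()
    for _ in range(k):
        best = max((g for g in ground_set if g not in S),
                   key=lambda g: v(S | {g}) - v(S), default=None)
        if best is None or v(S | {best}) - v(S) <= 0:
            break
        S.add(best)
    return S

# Coverage functions (value = number of elements covered) are submodular.
sets = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
cover = lambda S: len(set().union(*(sets[g] for g in S))) if S else 0
print(greedy_cardinality(sets, cover, 2))  # a size-2 set covering 3 elements
```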