We revisit the classic problem of fair division from a mechanism design perspective and provide an elegant truthful mechanism that yields surprisingly good approximation guarantees for the widely used solution of Proportional Fairness. This solution, which is closely related to Nash bargaining and the competitive equilibrium, is known to be not implementable in a truthful fashion, which has been its main drawback. To alleviate this issue, we propose a new mechanism, which we call the Partial Allocation mechanism, that discards a carefully chosen fraction of the allocated resources in order to incentivize the agents to be truthful in reporting their valuations. This mechanism introduces a way to implement interesting truthful outcomes in settings where monetary payments are not an option.
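To make the "discard a fraction" idea above concrete, here is a minimal Python sketch on the simplest possible instance: a single divisible good and agents with linear valuations, where the proportionally fair (PF) allocation is an equal split. The externality-style fraction computed below is our illustrative reading of the mechanism, not necessarily the exact formula used in the paper.

```python
# Illustrative sketch of a Partial Allocation-style discard fraction (our reading, not
# necessarily the paper's exact construction), on one unit of a single divisible good
# with linear valuations, where the proportionally fair (PF) allocation is an equal split.
import math

def pf_single_good(agents):
    """PF allocation of one divisible good under linear values: an equal split."""
    return {i: 1.0 / len(agents) for i in agents}

def partial_allocation_fractions(values):
    """values[i]: agent i's (reported) value per unit of the good."""
    agents = list(values)
    full = pf_single_good(agents)                    # PF allocation with every agent present
    fractions = {}
    for i in agents:
        others = [j for j in agents if j != i]
        reduced = pf_single_good(others)             # PF allocation if agent i were absent
        # Ratio (<= 1) of the others' utility product with i present vs. i absent:
        # the smaller the externality agent i imposes, the more of her bundle she keeps.
        with_i = math.prod(values[j] * full[j] for j in others)
        without_i = math.prod(values[j] * reduced[j] for j in others)
        fractions[i] = with_i / without_i
    return fractions

print(partial_allocation_fractions({"a": 3.0, "b": 1.0, "c": 2.0}))
# With 3 agents every fraction here is ((1/3)/(1/2))**2 = 4/9, since the values cancel out.
```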
We study coordination mechanisms aiming to minimize the weighted sum of completion times of jobs in the context of selfish scheduling problems. Our goal is to design local policies that achieve a good price of anarchy in the resulting equilibria for unrelated machine scheduling. To obtain these approximation bounds, we introduce a new technique that, while conceptually simple, seems to be quite powerful. The method entails mapping strategy vectors into a carefully chosen inner product space; costs are shown to correspond to the norm in this space, and the Nash condition also has a simple description. With this structure in place, we are able to prove a number of results, as follows. First, we consider Smith's Rule, which orders the jobs on a machine in ascending processing-time-to-weight ratio, and show that it achieves an approximation ratio of 4. We also demonstrate that this is the best possible for deterministic non-preemptive strongly local policies. Since Smith's Rule is always optimal for a given fixed assignment, this may seem unsurprising, but we then show that better approximation ratios can be obtained if either preemption or randomization is allowed.
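For concreteness, here is a short sketch of Smith's Rule as described above: on a single machine, the assigned jobs are processed non-preemptively in ascending order of their processing-time-to-weight ratio, and we report the resulting weighted sum of completion times. The job data is made up for illustration.

```python
# Smith's Rule on one machine: sort jobs by processing_time / weight and sum w * completion.
def smiths_rule(jobs):
    """jobs: list of (processing_time, weight) pairs assigned to this machine."""
    order = sorted(jobs, key=lambda job: job[0] / job[1])   # ascending p/w ratio
    time, weighted_sum = 0.0, 0.0
    for p, w in order:
        time += p                         # this job's completion time
        weighted_sum += w * time
    return order, weighted_sum

jobs = [(4.0, 1.0), (1.0, 2.0), (3.0, 3.0)]
print(smiths_rule(jobs))   # processes (1,2), then (3,3), then (4,1); weighted sum = 22.0
```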
Many set selection and ranking algorithms have recently been enhanced with diversity constraints that aim to explicitly increase representation of historically disadvantaged populations, or to improve the overall representativeness of the selected set. An unintended consequence of these constraints, however, is reduced in-group fairness: the selected candidates from a given group may not be the best ones, and this unfairness may not be well-balanced across groups. In this paper we study this phenomenon using datasets that comprise multiple sensitive attributes. We then introduce additional constraints, aimed at balancing the in-group fairness across groups, and formalize the induced optimization problems as integer linear programs. Using these programs, we conduct an experimental evaluation with real datasets, and quantify the feasible trade-offs between balance and overall performance in the presence of diversity constraints.
The fair division of resources among strategic agents is an important age-old problem that has led to a rich body of literature. At the center of this literature lies the question of whether there exist mechanisms that can implement fair outcomes, despite the agents' strategic behavior. A fundamental objective function used for measuring the fairness of an allocation is the geometric mean of the agents' values, known as the Nash social welfare (NSW). This objective function is maximized by widely known solution concepts such as Nash bargaining and the competitive equilibrium with equal incomes.
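As a concrete reference for the objective described in the abstract above, the Nash social welfare of an allocation x = (x_1, ..., x_n) among n agents with values v_i is the geometric mean of the agents' values:

\[ \mathrm{NSW}(x) \;=\; \Big(\prod_{i=1}^{n} v_i(x_i)\Big)^{1/n}. \]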
We study the following metric distortion problem: there are two finite sets of points, V and C, that lie in the same metric space, and our goal is to choose a point in C whose total distance from the points in V is as small as possible. However, rather than having access to the underlying distance metric, we only know, for each point in V, a ranking of its distances to the points in C. We propose algorithms that choose a point in C using only these rankings as input and we provide bounds on their distortion (worst-case approximation ratio). A prominent motivation for this problem comes from voting theory, where V represents a set of voters, C represents a set of candidates, and the rankings correspond to ordinal preferences of the voters. A major conjecture in this framework is that the optimal deterministic algorithm has distortion 3. We resolve this conjecture by providing a polynomial-time algorithm that achieves distortion 3, matching a known lower bound. We do so by proving a novel lemma about matching voters to candidates, which we refer to as the ranking-matching lemma. This lemma induces a family of novel algorithms, which may be of independent interest, and we show that a special algorithm in this family achieves distortion 3. We also provide more refined, parameterized, bounds using the notion of decisiveness, which quantifies the extent to which a voter may prefer her top choice relative to all others. Finally, we introduce a new randomized algorithm with improved distortion compared to known results, and also provide improved lower bounds on the distortion of all deterministic and randomized algorithms.
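As a small illustration of the objective and the distortion notion defined above, the snippet below computes, for a hypothetical instance where the full metric is known, the total distance of a chosen candidate and its ratio to the best candidate's total distance; the worst-case distortion of an algorithm is the supremum of this ratio over all metrics consistent with the reported rankings. The distances are arbitrary example numbers.

```python
# dist[v][c]: (hidden) distance from voter v to candidate c.
def social_cost(dist, c):
    return sum(dist[v][c] for v in dist)

def instance_distortion(dist, chosen):
    candidates = next(iter(dist.values())).keys()
    best = min(social_cost(dist, c) for c in candidates)
    return social_cost(dist, chosen) / best

dist = {"v1": {"a": 1.0, "b": 4.0}, "v2": {"a": 3.0, "b": 1.0}, "v3": {"a": 2.0, "b": 5.0}}
print(instance_distortion(dist, "b"))   # cost(b) = 10 vs. cost(a) = 6, ratio ~ 1.67
```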
Online Nash Social Welfare Maximization with Predictions. Siddhartha Banerjee, Vasilis Gkatzelis, Artur Gorokh, and Billy Jin. Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 1-19. DOI: https://doi.org/10.1137/1.9781611977073.1. We consider the problem of allocating a set of divisible goods to N agents in an online manner, aiming to maximize the Nash social welfare, a widely studied objective which provides a balance between fairness and efficiency. The goods arrive in a sequence of T periods and the value of each agent for a good is adversarially chosen when the good arrives. We first observe that no online algorithm can achieve a competitive ratio better than the trivial O(N), unless it is given additional information about the agents' values. Then, in line with the emerging area of "algorithms with predictions", we consider a setting where for each agent, the online algorithm is only given a prediction of her monopolist utility, i.e., her utility if all goods were given to her alone (corresponding to the sum of her values over the T periods). Our main result is an online algorithm whose competitive ratio is parameterized by the multiplicative errors in these predictions. The algorithm achieves a competitive ratio of O(log N) and O(log T) if the predictions are perfectly accurate. Moreover, the competitive ratio degrades smoothly with the errors in the predictions, and is surprisingly robust: the logarithmic competitive ratio holds even if the predictions are very inaccurate. We complement this positive result by showing that our bounds are essentially tight: no online algorithm, even if provided with perfectly accurate predictions, can achieve a competitive ratio of O(log^{1-ε} N) or O(log^{1-ε} T) for any constant ε > 0.
We study the problem of allocating divisible resources to agents with different preferences. We analyze a market game known as Trading Post, first considered by Shapley and Shubik, where each agent gets a budget of virtual currency to bid on goods: after bids are placed, goods are allocated to players in proportion to their bids. In this setting, the agents choose their bids strategically, aiming to maximize their utility, and this gives rise to a game. We study the equilibrium allocations of this game, measuring the quality of an allocation via the Nash social welfare, the geometric mean of utilities (a measure of aggregate welfare that respects individual needs). We show that any Nash equilibrium of Trading Post approximates the optimal Nash welfare within a factor of two for all concave valuations, and the mechanism is essentially optimal for Leontief valuations.
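A minimal sketch of the Trading Post allocation rule described above: each agent splits her virtual-currency budget into bids on the goods, and each good is divided among the agents in proportion to the bids placed on it. The bids below are arbitrary examples, not equilibrium bids.

```python
def trading_post_allocation(bids):
    """bids[i][g]: agent i's bid on good g; each agent's budget is the sum of her bids."""
    goods = {g for agent_bids in bids.values() for g in agent_bids}
    totals = {g: sum(agent_bids.get(g, 0.0) for agent_bids in bids.values()) for g in goods}
    return {i: {g: (bids[i].get(g, 0.0) / totals[g]) if totals[g] > 0 else 0.0 for g in goods}
            for i in bids}

bids = {"alice": {"wood": 2.0, "stone": 1.0}, "bob": {"wood": 1.0, "stone": 3.0}}
print(trading_post_allocation(bids))
# alice receives 2/3 of wood and 1/4 of stone; bob receives the remaining shares.
```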
In this work we introduce an alternative model for the design and analysis of strategyproof mechanisms that is motivated by the recent surge of work in "learning-augmented algorithms". Aiming to … In this work we introduce an alternative model for the design and analysis of strategyproof mechanisms that is motivated by the recent surge of work in "learning-augmented algorithms". Aiming to complement the traditional approach in computer science, which analyzes the performance of algorithms based on worst-case instances, this line of work has focused on the design and analysis of algorithms that are enhanced with machine-learned predictions regarding the optimal solution. The algorithms can use the predictions as a guide to inform their decisions, and the goal is to achieve much stronger performance guarantees when these predictions are accurate (consistency), while also maintaining near-optimal worst-case guarantees, even if these predictions are very inaccurate (robustness). So far, these results have been limited to algorithms, but in this work we argue that another fertile ground for this framework is in mechanism design.
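In notation of our own choosing (stated for a minimization objective), if ALG(I, p̂) denotes the mechanism's cost on instance I given prediction p̂, OPT(I) the optimal cost, and p*(I) the accurate prediction for I, the two guarantees discussed above can be written as:

\[
\text{consistency} \;=\; \sup_{I} \frac{\mathrm{ALG}\big(I,\, p^*(I)\big)}{\mathrm{OPT}(I)},
\qquad
\text{robustness} \;=\; \sup_{I,\, \hat p} \frac{\mathrm{ALG}(I, \hat p)}{\mathrm{OPT}(I)}.
\]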
Many set selection and ranking algorithms have recently been enhanced with diversity constraints that aim to explicitly increase representation of historically disadvantaged populations, or to improve the overall representativeness of the selected set. An unintended consequence of these constraints, however, is reduced in-group fairness: the selected candidates from a given group may not be the best ones, and this unfairness may not be well-balanced across groups. In this paper we study this phenomenon using datasets that comprise multiple sensitive attributes. We then introduce additional constraints, aimed at balancing the in-group fairness across groups, and formalize the induced optimization problems as integer linear programs. Using these programs, we conduct an experimental evaluation with real datasets, and quantify the feasible trade-offs between balance and overall performance in the presence of diversity constraints.
A Truthful Cardinal Mechanism for One-Sided Matching. Rediet Abebe, Richard Cole, Vasilis Gkatzelis, and Jason D. Hartline. Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 2096-2113. DOI: https://doi.org/10.1137/1.9781611975994.129. We revisit the well-studied problem of designing mechanisms for one-sided matching markets, where a set of n agents needs to be matched to a set of n heterogeneous items. Each agent i has a value v_{i,j} for each item j, and these values are private information that the agents may misreport if doing so leads to a preferred outcome. Ensuring that the agents have no incentive to misreport requires a careful design of the matching mechanism, and mechanisms proposed in the literature mitigate this issue by eliciting only the ordinal preferences of the agents, i.e., their ranking of the items from most to least preferred. However, the efficiency guarantees of these mechanisms are based only on weak measures that are oblivious to the underlying values. In this paper we achieve stronger performance guarantees by introducing a mechanism that truthfully elicits the full cardinal preferences of the agents, i.e., all of the v_{i,j} values. We evaluate the performance of this mechanism using the much more demanding Nash bargaining solution as a benchmark, and we prove that our mechanism significantly outperforms all ordinal mechanisms (even non-truthful ones). To prove our approximation bounds, we also study the population monotonicity of the Nash bargaining solution in the context of matching markets, providing both upper and lower bounds which are of independent interest.
We study the problem of fairly allocating indivisible goods and focus on the classic fairness notion of proportionality. The indivisibility of the goods is long known to pose highly non-trivial obstacles to achieving fairness, and a very vibrant line of research has aimed to circumvent them using appropriate notions of approximate fairness. Recent work has established that even approximate versions of proportionality (PROPx) may be impossible to achieve even for small instances, while the best known achievable approximations (PROP1) are much weaker. We introduce the notion of proportionality up to the maximin item (PROPm) and show how to reach an allocation satisfying this notion for any instance involving up to five agents with additive valuations. PROPm provides a well-motivated middle-ground between PROP1 and PROPx, while also capturing some elements of the well-studied maximin share (MMS) benchmark: another relaxation of proportionality that has attracted a lot of attention.
A central goal in algorithmic game theory is to analyze the performance of decentralized multiagent systems, like communication and information networks. In the absence of a central planner who can enforce how these systems are utilized, the users can strategically interact with the system, aiming to maximize their own utility, possibly leading to very inefficient outcomes, and thus a high price of anarchy. To alleviate this issue, the system designer can use decentralized mechanisms that regulate the use of each resource (e.g., using local queuing protocols or scheduling mechanisms), but with only limited information regarding the state of the system. These information limitations have a severe impact on what such decentralized mechanisms can achieve, so most of the success stories in this literature have had to make restrictive assumptions (e.g., by either restricting the structure of the networks or the types of cost functions).
A set of divisible resources becomes available over a sequence of rounds and needs to be allocated immediately and irrevocably. Our goal is to distribute these resources to maximize fairness and efficiency. Achieving any non-trivial guarantees in an adversarial setting is impossible. However, we show that normalizing the agent values, a very common assumption in fair division, allows us to escape this impossibility. Our main result is an online algorithm for the case of two agents that ensures the outcome is fair while guaranteeing 91.6% of the optimal social welfare. We also show that this is near-optimal: there is no fair algorithm that guarantees more than 93.3% of the optimal social welfare.
Deterministic Budget-Feasible Clock Auctions. Eric Balkanski, Pranav Garimidi, Vasilis Gkatzelis, Daniel Schoepflin, and Xizhi Tan. Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 2940-2963. DOI: https://doi.org/10.1137/1.9781611977073.114. We revisit the well-studied problem of budget-feasible procurement, where a buyer with a strict budget constraint seeks to acquire services from a group of strategic providers (the sellers). During the last decade, several strategyproof budget-feasible procurement auctions have been proposed, aiming to maximize the value of the buyer, while eliciting each seller's true cost for providing their service. These solutions predominantly take the form of randomized sealed-bid auctions: they ask the sellers to report their private costs and then use randomization to determine which subset of services will be procured and how much each of the chosen providers will be paid, ensuring that the total payment does not exceed the buyer's budget. Our main result in this paper is a novel method for designing budget-feasible auctions, leading to solutions that outperform the previously proposed auctions in multiple ways. First, our solutions take the form of descending clock auctions, and thus satisfy a list of very appealing properties, such as obvious strategyproofness, group strategyproofness, transparency, and unconditional winner privacy; this makes these auctions much more likely to be used in practice. Second, in contrast to previous results that heavily depend on randomization, our auctions are deterministic. As a result, we provide an affirmative answer to one of the main open questions in this literature, asking whether a deterministic strategyproof auction can achieve a constant approximation when the buyer's valuation function is submodular over the set of services. In addition to this, we also provide the first deterministic budget-feasible auction that matches the approximation bound of the best-known randomized auction for the class of subadditive valuations. Finally, using our method, we improve the best-known approximation factor for monotone submodular valuations, which has been the focus of most of the prior work.
We study the problem faced by a data analyst or platform that wishes to collect private data from privacy-aware agents. To incentivize participation, in exchange for this data, the platform provides a service to the agents in the form of a statistic computed using all agents' submitted data. The agents decide whether to join the platform (and truthfully reveal their data) or not participate by considering both the privacy costs of joining and the benefit they get from obtaining the statistic. The platform must ensure the statistic is computed differentially privately and chooses a central level of noise to add to the computation, but can also induce personalized privacy levels (or costs) by giving different weights to different agents in the computation as a function of their heterogeneous privacy preferences (which are known to the platform). We assume the platform aims to optimize the accuracy of the statistic, and must pick the privacy level of each agent to trade-off between i) incentivizing more participation and ii) adding less noise to the estimate. We provide a semi-closed form characterization of the optimal choice of agent weights for the platform in two variants of our model. In both of these models, we identify a common nontrivial structure in the platform's optimal solution: an instance-specific number of agents with the least stringent privacy requirements are pooled together and given the same weight, while the weights of the remaining agents decrease as a function of the strength of their privacy requirement. We also provide algorithmic results on how to find the optimal value of the noise parameter used by the platform and of the weights given to the agents.
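As a self-contained illustration of the kind of estimator this model reasons about (our sketch, not the paper's algorithm), the snippet below computes a weighted average of the participants' data and privatizes it with Laplace noise: with data in [0, 1] and weights summing to one, changing a single participant's value shifts the average by at most the largest weight, so noise with scale max_i w_i / ε yields ε-differential privacy; larger weights mean less pooling but more noise.

```python
import math
import random

def private_weighted_mean(data, weights, epsilon, rng=random.Random(0)):
    """data[i] in [0, 1]; weights[i] >= 0 summing to 1; epsilon > 0 is the privacy budget."""
    estimate = sum(weights[i] * data[i] for i in data)
    sensitivity = max(weights.values())            # max change caused by one participant's value
    u = rng.random() - 0.5                         # Laplace(sensitivity / epsilon) via inverse CDF
    noise = -(sensitivity / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return estimate + noise

data = {"a": 0.2, "b": 0.9, "c": 0.5}
weights = {"a": 0.5, "b": 0.25, "c": 0.25}         # example weights, just for illustration
print(private_weighted_mean(data, weights, epsilon=1.0))
```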
We design online algorithms for fair allocation of public goods to a set of N agents over a sequence of T rounds and focus on improving their performance using predictions. In the basic model, a public good arrives in each round, and every agent reveals their value for it upon arrival. The algorithm must irrevocably decide the investment in this good without exceeding a total budget of B across all rounds. The algorithm can utilize (potentially noisy) predictions of each agent's total value for all remaining goods. The algorithm's performance is measured using a proportional fairness objective, which informally demands that every group of agents be rewarded proportional to its size and the cohesiveness of its preferences. We show that no algorithm can achieve better than Θ(T/B) proportional fairness without predictions. With reasonably accurate predictions, the situation improves significantly, and Θ(log(T/B)) proportional fairness is achieved. We also extend our results to a general setting wherein a batch of L public goods arrives in each round and O(log(min(N,L)T/B)) proportional fairness is achieved. Our exact bounds are parameterized as a function of the prediction error, with performance degrading gracefully with increasing errors.
We study Fisher markets and the problem of maximizing the Nash social welfare (NSW), and show several closely related new results. In particular, we obtain:
-- A new integer program for the NSW maximization problem whose fractional relaxation has a bounded integrality gap. In contrast, the natural integer program has an unbounded integrality gap.
-- An improved, and tight, factor 2 analysis of the algorithm of [7]; in turn showing that the integrality gap of the above relaxation is at most 2. The approximation factor shown by [7] was $2e^{1/e} \approx 2.89$.
-- A lower bound of $e^{1/e}\approx 1.44$ on the integrality gap of this relaxation.
-- New convex programs for natural generalizations of linear Fisher markets and proofs that these markets admit rational equilibria.
These results were obtained by establishing connections between previously known disparate results, and they help uncover their mathematical underpinnings. We show a formal connection between the convex programs of Eisenberg and Gale and that of Shmyrev, namely that their duals are equivalent up to a change of variables. Both programs capture equilibria of linear Fisher markets. By adding suitable constraints to Shmyrev's program, we obtain a convex program that captures equilibria of the spending-restricted market model defined by [7] in the context of the NSW maximization problem. Further, adding certain integral constraints to this program we get the integer program for the NSW mentioned above. The basic tool we use is convex programming duality. In the special case of convex programs with linear constraints (but convex objectives), we show a particularly simple way of obtaining dual programs, putting it almost at par with linear program duality. This simple way of finding duals has been used subsequently for many other applications.
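For reference, the Eisenberg-Gale convex program mentioned above, whose optimal solutions are the equilibria of a linear Fisher market with buyer budgets $B_i$ and values $u_{ij}$ (standard formulation, not quoted from this paper), is:

\[
\begin{aligned}
\max \quad & \sum_i B_i \log u_i \\
\text{s.t.} \quad & u_i = \sum_j u_{ij}\, x_{ij} && \text{for all buyers } i,\\
& \sum_i x_{ij} \le 1 && \text{for all goods } j,\\
& x_{ij} \ge 0 && \text{for all } i, j.
\end{aligned}
\]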
We study the problem of allocating indivisible items to budget-constrained agents, aiming to provide fairness and efficiency guarantees. Specifically, our goal is to ensure that the resulting allocation is envy-free up to any item (EFx) while minimizing the amount of inefficiency that this needs to introduce. We first show that there exist two-agent problem instances for which no EFx allocation is Pareto-efficient. We, therefore, turn to approximation and use the (Pareto-efficient) maximum Nash welfare allocation as a benchmark. For two-agent instances, we provide a procedure that always returns an EFx allocation while achieving the best possible approximation of the optimal Nash social welfare that EFx allocations can achieve. For the more complicated case of three-agent instances, we provide a procedure that guarantees EFx, while achieving a constant approximation of the optimal Nash social welfare for any number of items.
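A small checker for the EFx notion used above, assuming additive valuations: agent i has no envy up to any item toward agent j if, after removing any single item from j's bundle, i values her own bundle at least as much. The example instance is made up for illustration.

```python
def is_efx(values, allocation):
    """values[i][g]: agent i's value for item g; allocation[i]: set of items given to agent i."""
    def v(i, bundle):
        return sum(values[i][g] for g in bundle)
    for i in allocation:
        for j in allocation:
            if i == j:
                continue
            for g in allocation[j]:
                if v(i, allocation[i]) < v(i, allocation[j] - {g}):
                    return False
    return True

values = {"a": {"x": 5, "y": 3, "z": 1}, "b": {"x": 2, "y": 4, "z": 4}}
print(is_efx(values, {"a": {"x"}, "b": {"y", "z"}}))   # True for this example allocation
```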
We consider the problem of allocating a set of divisible goods to $N$ agents in an online manner, aiming to maximize the Nash social welfare, a widely studied objective which provides a balance between fairness and efficiency. The goods arrive in a sequence of $T$ periods and the value of each agent for a good is adversarially chosen when the good arrives. We first observe that no online algorithm can achieve a competitive ratio better than the trivial $O(N)$, unless it is given additional information about the agents' values. Then, in line with the emerging area of "algorithms with predictions", we consider a setting where for each agent, the online algorithm is only given a prediction of her monopolist utility, i.e., her utility if all goods were given to her alone (corresponding to the sum of her values over the $T$ periods). Our main result is an online algorithm whose competitive ratio is parameterized by the multiplicative errors in these predictions. The algorithm achieves a competitive ratio of $O(\log N)$ and $O(\log T)$ if the predictions are perfectly accurate. Moreover, the competitive ratio degrades smoothly with the errors in the predictions, and is surprisingly robust: the logarithmic competitive ratio holds even if the predictions are very inaccurate. We complement this positive result by showing that our bounds are essentially tight: no online algorithm, even if provided with perfectly accurate predictions, can achieve a competitive ratio of $O(\log^{1-\epsilon} N)$ or $O(\log^{1-\epsilon} T)$ for any constant $\epsilon>0$.
We study the classic problem of fairly allocating a set of indivisible goods among a group of agents, and focus on the notion of approximate proportionality known as PROPm. Prior work showed that there exists an allocation that satisfies this notion of fairness for instances involving up to five agents, but fell short of proving that this is true in general. We extend this result to show that a PROPm allocation is guaranteed to exist for all instances, independent of the number of agents or goods. Our proof is constructive, providing an algorithm that computes such an allocation and, unlike prior work, the running time of this algorithm is polynomial in both the number of agents and the number of goods.
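The checker below sketches the proportionality notions discussed above under additive valuations; the maximin-item term reflects our reading of PROPm (for each agent, add the largest value, across the other agents' bundles, of the least-valuable item in that bundle), so the paper should be consulted for the formal definition.

```python
def prop_share(values, i):
    """Agent i's proportional share: her value for all goods divided by the number of agents."""
    return sum(values[i].values()) / len(values)

def is_propm(values, allocation):
    for i in values:
        own = sum(values[i][g] for g in allocation[i])
        per_bundle_minima = [min(values[i][g] for g in allocation[j])
                             for j in allocation if j != i and allocation[j]]
        maximin_item = max(per_bundle_minima, default=0.0)   # our reading of the "maximin item"
        if own + maximin_item < prop_share(values, i) - 1e-9:
            return False
    return True

values = {"a": {"x": 6, "y": 2, "z": 1}, "b": {"x": 3, "y": 3, "z": 3}, "c": {"x": 1, "y": 1, "z": 7}}
print(is_propm(values, {"a": {"x"}, "b": {"y"}, "c": {"z"}}))   # True for this allocation
```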
We study the problem of designing voting rules that take as input the ordinal preferences of n agents over a set of n alternatives and output a single alternative, aiming to optimize the overall happiness of the agents. The input to the voting rule is each agent's ranking of the alternatives from most to least preferred, yet the agents have more refined (cardinal) preferences that capture the intensity with which they prefer one alternative over another. To quantify the extent to which voting rules can optimize over the cardinal preferences given access only to the ordinal ones, prior work has used the distortion measure, i.e., the worst-case approximation ratio between a voting rule's performance and the best performance achievable given the cardinal preferences.
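In notation of our own choosing, for a voting rule f that maps an ordinal profile σ to an alternative and a social welfare measure SW, the distortion referred to above is:

\[
\mathrm{dist}(f) \;=\; \sup_{\sigma}\; \sup_{\vec v \,\text{consistent with}\, \sigma}\; \frac{\max_{a} \mathrm{SW}(a, \vec v)}{\mathrm{SW}\big(f(\sigma), \vec v\big)},
\]

where the inner supremum ranges over cardinal utility profiles that induce the ordinal rankings in σ.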
We study the following metric distortion problem: there are two finite sets of points, $V$ and $C$, that lie in the same metric space, and our goal is to choose a point in $C$ whose total distance from the points in $V$ is as small as possible. However, rather than having access to the underlying distance metric, we only know, for each point in $V$, a ranking of its distances to the points in $C$. We propose algorithms that choose a point in $C$ using only these rankings as input and we provide bounds on their \emph{distortion} (worst-case approximation ratio). A prominent motivation for this problem comes from voting theory, where $V$ represents a set of voters, $C$ represents a set of candidates, and the rankings correspond to ordinal preferences of the voters. A major conjecture in this framework is that the optimal deterministic algorithm has distortion $3$. We resolve this conjecture by providing a polynomial-time algorithm that achieves distortion $3$, matching a known lower bound. We do so by proving a novel lemma about matching voters to candidates, which we refer to as the \emph{ranking-matching lemma}. This lemma induces a family of novel algorithms, which may be of independent interest, and we show that a special algorithm in this family achieves distortion $3$. We also provide more refined, parameterized, bounds using the notion of $\alpha$-decisiveness, which quantifies the extent to which a voter may prefer her top choice relative to all others. Finally, we introduce a new randomized algorithm with improved distortion compared to known results, and also provide improved lower bounds on the distortion of all deterministic and randomized algorithms.
We study the extent to which decentralized cost-sharing protocols can achieve good price of anarchy (PoA) bounds in network cost-sharing games with $n$ agents. We focus on the model of resource-aware protocols, where the designer has prior access to the network structure and can also increase the total cost of an edge (overcharging), and we study classes of games with concave or convex cost functions. We first consider concave cost functions and our main result is a cost-sharing protocol for symmetric games on directed acyclic graphs that achieves a PoA of $2+\varepsilon$ for an arbitrarily small positive $\varepsilon$, which improves to $1+\varepsilon$ for games with at least two players. We also achieve a PoA of 1 for series-parallel graphs and show that no protocol can achieve a PoA better than $\Omega(\sqrt{n})$ for multicast games. We then also consider convex cost functions and prove analogous results for series-parallel networks and multicast games, as well as a lower bound of $\Omega(n)$ for the PoA on directed acyclic graphs without the use of overcharging.
We consider the problem of allocating a set of divisible goods to $N$ agents in an online manner over $T$ periods, with adversarially-chosen normalized valuations in each period. Our goal is to maximize the Nash social welfare, a widely studied objective which provides a balance between fairness and efficiency. On the positive side, we provide an online algorithm that achieves a competitive ratio of $O(\log N)$ and $O(\log T)$, but also a stronger competitive ratio of $O(\log k)$ in settings where the value of any agent for her most preferred item is no more than $k$ times her average value. We complement this by showing this bound is essentially tight: no online algorithm can achieve a competitive ratio of $O(\log^{1-\epsilon} N)$ or $O(\log^{1-\epsilon} T)$ for any constant $\epsilon>0$.
The fair division of resources is an important age-old problem that has led to a rich body of literature. At the center of this literature lies the question of whether there exist fair mechanisms despite strategic behavior of the agents. A fundamental objective function used for measuring fair outcomes is the Nash social welfare, defined as the geometric mean of the agent utilities. This objective function is maximized by widely known solution concepts such as Nash bargaining and the competitive equilibrium with equal incomes. In this work we focus on the question of (approximately) implementing the Nash social welfare. The starting point of our analysis is the Fisher market, a fundamental model of an economy, whose benchmark is precisely the (weighted) Nash social welfare. We begin by studying two extreme classes of valuation functions, namely perfect substitutes and perfect complements, and find that for perfect substitutes, the Fisher market mechanism has a constant approximation: at most 2 and at least $e^{1/e}$. However, for perfect complements, the Fisher market does not work well, its bound degrading linearly with the number of players. Strikingly, the Trading Post mechanism---an indirect market mechanism also known as the Shapley-Shubik game---has significantly better performance than the Fisher market on its own benchmark. Not only does Trading Post achieve an approximation of 2 for perfect substitutes, but this bound holds for all concave utilities and becomes arbitrarily close to optimal for Leontief utilities (perfect complements), where it reaches $(1+\epsilon)$ for every $\epsilon > 0$. Moreover, all the Nash equilibria of the Trading Post mechanism are pure for all concave utilities and satisfy an important notion of fairness known as proportionality.
We study the problem of selling a good to a group of bidders with interdependent values in a prior-free setting. Each bidder has a signal that can take one of $k$ different values, and her value for the good is a weakly increasing function of all the bidders' signals. The bidders are partitioned into $\ell$ expertise-groups, based on how their signal can impact the values for the good, and we prove upper and lower bounds regarding the approximability of social welfare and revenue for a variety of settings, parameterized by $k$ and $\ell$. Our lower bounds apply to all ex-post incentive compatible mechanisms and our upper bounds are all within a small constant of the lower bounds. Our main results take the appealing form of ascending clock auctions and provide strong incentives by admitting the desired outcomes as obvious ex-post equilibria.
In a single-parameter mechanism design problem, a provider is looking to sell some service to a group of potential buyers. Each buyer i has a private value v_i for receiving this service, and some feasibility constraint restricts which subsets of buyers can be served simultaneously. Recent work in economics introduced (deferred-acceptance) clock auctions as a superior class of auctions for this problem, due to their transparency, simplicity, and very strong incentive guarantees. Subsequent work in computer science focused on evaluating these auctions with respect to their social welfare approximation guarantees, leading to strong impossibility results: in the absence of prior information regarding the buyers' values, no deterministic clock auction can achieve a bounded approximation, even for simple feasibility constraints with only two maximal feasible sets.
In their seminal paper that initiated the field of algorithmic mechanism design, \citet{NR99} studied the problem of designing strategyproof mechanisms for scheduling jobs on unrelated machines aiming to minimize the makespan. They provided a strategyproof mechanism that achieves an $n$-approximation and they made the bold conjecture that this is the best approximation achievable by any deterministic strategyproof scheduling mechanism. After more than two decades and several efforts, $n$ remains the best known approximation and very recent work by \citet{CKK21} has been able to prove an $\Omega(\sqrt{n})$ approximation lower bound for all deterministic strategyproof mechanisms. This strong negative result, however, heavily depends on the fact that the performance of these mechanisms is evaluated using worst-case analysis. To overcome such overly pessimistic, and often uninformative, worst-case bounds, a surge of recent work has focused on the ``learning-augmented framework'', whose goal is to leverage machine-learned predictions to obtain improved approximations when these predictions are accurate (consistency), while also achieving near-optimal worst-case approximations even when the predictions are arbitrarily wrong (robustness). In this work, we study the classic strategic scheduling problem of~\citet{NR99} using the learning-augmented framework and give a deterministic polynomial-time strategyproof mechanism that is $6$-consistent and $2n$-robust. We thus achieve the ``best of both worlds'': an $O(1)$ consistency and an $O(n)$ robustness that asymptotically matches the best-known approximation. We then extend this result to provide more general worst-case approximation guarantees as a function of the prediction error. Finally, we complement our positive results by showing that any $1$-consistent deterministic strategyproof mechanism has unbounded robustness.
In this work, we introduce an alternative model for the design and analysis of strategyproof mechanisms that is motivated by the recent surge of work in “learning-augmented algorithms.” Aiming to complement the traditional worst-case analysis approach in computer science, this line of work has focused on the design and analysis of algorithms that are enhanced with machine-learned predictions. The algorithms can use the predictions as a guide to inform their decisions, aiming to achieve much stronger performance guarantees when these predictions are accurate (consistency), while also maintaining near-optimal worst-case guarantees, even if these predictions are inaccurate (robustness). We initiate the design and analysis of strategyproof mechanisms that are augmented with predictions regarding the private information of the participating agents. To exhibit the important benefits of this approach, we revisit the canonical problem of facility location with strategic agents in the two-dimensional Euclidean space. We study both the egalitarian and utilitarian social cost functions, and we propose new strategyproof mechanisms that leverage predictions to guarantee an optimal trade-off between consistency and robustness. Furthermore, we also prove parameterized approximation results as a function of the prediction error, showing that our mechanisms perform well, even when the predictions are not fully accurate. Funding: The work of E. Balkanski was supported in part by the National Science Foundation [Grants CCF-2210501 and IIS-2147361]. The work of V. Gkatzelis and X. Tan was supported in part by the National Science Foundation [Grant CCF-2210502] and [CAREER Award CCF-2047907].
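As one concrete illustration of a prediction-augmented strategyproof rule (our sketch for a one-dimensional version of the problem, not the mechanisms proposed in the paper), the rule below places the facility at an order statistic of the reported locations together with copies of the predicted optimal location treated as phantom reports; since the prediction does not depend on the reports, this remains a generalized-median mechanism and hence strategyproof for single-peaked agents, and the number of phantom copies trades trust in the prediction against worst-case robustness.

```python
def prediction_augmented_median(reports, prediction, phantom_copies=1):
    """reports: agents' reported positions on a line; prediction: predicted optimal location."""
    augmented = sorted(reports + [prediction] * phantom_copies)
    return augmented[len(augmented) // 2]   # a fixed order statistic of reports + phantoms

reports = [0.0, 2.0, 10.0]
print(prediction_augmented_median(reports, prediction=9.0))                    # 9.0: follows the prediction
print(prediction_augmented_median(reports, prediction=9.0, phantom_copies=0))  # 2.0: plain median
```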
We study fair resource allocation with strategic agents. It is well-known that, across multiple fundamental problems in this domain, truthfulness and fairness are incompatible. For example, when allocating indivisible goods, no truthful and deterministic mechanism can guarantee envy-freeness up to one item (EF1), even for two agents with additive valuations. Or, in cake-cutting, no truthful and deterministic mechanism always outputs a proportional allocation, even for two agents with piecewise constant valuations. Our work stems from the observation that, in the context of fair division, truthfulness is used as a synonym for Dominant Strategy Incentive Compatibility (DSIC), requiring that an agent prefers reporting the truth, no matter what other agents report. In this paper, we instead focus on Bayesian Incentive Compatible (BIC) mechanisms, requiring that agents are better off reporting the truth in expectation over other agents' reports. We prove that, when agents know a bit less about each other, a lot more is possible: using BIC mechanisms we can achieve fairness notions that are unattainable by DSIC mechanisms in both the fundamental problems of allocation of indivisible goods and cake-cutting. We prove that this is the case even for an arbitrary number of agents, as long as the agents' priors about each other's types satisfy a neutrality condition. Notably, for the case of indivisible goods, we significantly strengthen the state-of-the-art negative result for efficient DSIC mechanisms, while also highlighting the limitations of BIC mechanisms, by showing that a very general class of welfare objectives is incompatible with Bayesian Incentive Compatibility. Combined, these results give a near-complete picture of the power and limitations of BIC and DSIC mechanisms for the problem of allocating indivisible goods.
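For reference, the two incentive requirements contrasted above can be written as follows (standard definitions, with notation of our own choosing): letting u_i(t'_i; t_i, t_{-i}) denote agent i's utility when her true type is t_i, she reports t'_i, and the others report t_{-i}, and letting D_{-i} denote her prior over the other agents' types,

\[
\text{DSIC:}\quad u_i(t_i;\, t_i, t_{-i}) \;\ge\; u_i(t'_i;\, t_i, t_{-i}) \quad \text{for all } i,\, t_i,\, t'_i,\, t_{-i};
\]
\[
\text{BIC:}\quad \mathbb{E}_{t_{-i} \sim D_{-i}}\big[u_i(t_i;\, t_i, t_{-i})\big] \;\ge\; \mathbb{E}_{t_{-i} \sim D_{-i}}\big[u_i(t'_i;\, t_i, t_{-i})\big] \quad \text{for all } i,\, t_i,\, t'_i.
\]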
A surge of recent work has focused on analyzing the performance of algorithms guided by predictions, aiming to enhance their worst-case performance guarantees with improved guarantees when the predictions are accurate. This "learning-augmented" framework was recently also extended to mechanism design settings involving strategic agents and we provide an overview of these results.
How does one allocate a collection of resources to a set of strategic agents in a fair and efficient manner without using money? For in many scenarios it is not feasible to use money to compensate agents for otherwise unsatisfactory outcomes. This paper studies this question, looking at both fairness and efficiency measures. We employ the proportionally fair solution, which is a well-known fairness concept for money-free settings. But although finding a proportionally fair solution is computationally tractable, it cannot be implemented in a truthful fashion. Consequently, we seek approximate solutions. We give several truthful mechanisms which achieve proportional fairness in an approximate sense. We use a strong notion of approximation, requiring the mechanism to give each agent a good approximation of its proportionally fair utility. In particular, one of our mechanisms provides a better and better approximation factor as the minimum demand for every good increases. A motivating example is provided by the massive privatization auction in the Czech Republic in the early 90s. With regard to efficiency, prior work has shown a lower bound of 0.5 on the approximation factor of any swap-dictatorial mechanism approximating a social welfare measure, even for the case of two agents and multiple goods. We surpass this lower bound by designing a non-swap-dictatorial mechanism for this case. Interestingly, the new mechanism builds on the notion of proportional fairness.
We study the problem of designing mechanisms to allocate a heterogeneous set of divisible goods among a set of agents in a fair manner. We consider the well-known solution concept of proportional fairness that has found applications in many real-world scenarios. Although finding a proportionally fair solution is computationally tractable, it cannot be implemented in a truthful manner. To overcome this, in this paper, we give mechanisms which are truthful and achieve proportional fairness in an approximate manner. We use a strong notion of approximation, requiring the mechanism to give each agent a good approximation of its proportionally fair utility. A motivating example is provided by the massive privatization auction in the Czech Republic in the early 90s.
We study policies aiming to minimize the weighted sum of completion times of jobs in the context of coordination mechanisms for selfish scheduling problems. Our goal is to design local policies that achieve a good price of anarchy in the resulting equilibria for unrelated machine scheduling. To obtain the approximation bounds, we introduce a new technique that, while conceptually simple, seems to be quite powerful. With this method we are able to prove the following results. First, we consider Smith's Rule, which orders the jobs on a machine in ascending processing-time-to-weight ratio, and show that it achieves an approximation ratio of 4. We also demonstrate that this is the best possible for deterministic non-preemptive strongly local policies. Since Smith's Rule is always optimal for a given assignment, this may seem unsurprising, but we then show that better approximation ratios can be obtained if either preemption or randomization is allowed. We prove that ProportionalSharing, a preemptive strongly local policy, achieves an approximation ratio of 2.618 for the weighted sum of completion times, and an approximation ratio of 2.5 in the unweighted case. Again, we observe that these bounds are tight. Next, we consider Rand, a natural non-preemptive but randomized policy. We show that it achieves an approximation ratio of at most 2.13; moreover, if the sum of the weighted completion times is negligible compared to the cost of the optimal solution, this improves to \pi/2. Finally, we show that both ProportionalSharing and Rand induce potential games, and thus always have a pure Nash equilibrium (unlike Smith's Rule). This also allows us to design the first \emph{combinatorial} constant-factor approximation algorithm minimizing weighted completion time for unrelated machine scheduling that achieves a factor of $2+\epsilon$ for any $\epsilon > 0$.
A set of divisible resources becomes available over a sequence of rounds and needs to be allocated immediately and irrevocably. Our goal is to distribute these resources to maximize fairness and efficiency. Achieving any non-trivial guarantees in an adversarial setting is impossible. However, we show that normalizing the agent values, a very common assumption in fair division, allows us to escape this impossibility. Our main result is an online algorithm for the case of two agents that ensures the outcome is envy-free while guaranteeing 91.6% of the optimal social welfare. We also show that this is near-optimal: there is no envy-free algorithm that guarantees more than 93.3% of the optimal social welfare.
In a decentralized system with $m$ machines, we study the selfish scheduling problem where each user strategically chooses which machine to use. Each machine incurs a cost, which is a function of the total load assigned to it, and some cost-sharing mechanism distributes this cost among the machine's users. The users choose a machine aiming to minimize their own share of the cost, so the cost-sharing mechanism induces a game among them. We approach this problem from the perspective of a designer who can select which cost-sharing mechanism to use, aiming to minimize the price of anarchy (PoA) of the induced games. Recent work introduced the class of \emph{resource-aware} cost-sharing mechanisms, whose decisions can depend on the set of machines in the system, but are oblivious to the total number of users. These mechanisms can guarantee low PoA bounds for instances where the cost functions of the machines are all convex or concave, but can suffer from very high PoA for cost functions that deviate from these families. In this paper we show that if we enhance the class of resource-aware mechanisms with some prior information regarding the users, then they can achieve low PoA for a much more general family of cost functions. We first show that, as long as the mechanism knows just two of the participating users, then it can assign special roles to them and ensure a constant PoA. We then extend this idea to settings where the mechanism has access to the probability with which each user is present in the system. For all these instances, we provide a mechanism that achieves an expected PoA that is logarithmic in the expected number of users.
In a decentralized system with m machines, we study the selfish scheduling problem where each user strategically chooses which machine to use. Each machine incurs a cost, which is a function of the total load assigned to it, and some cost-sharing mechanism distributes this cost among the machine's users. The users choose a machine aiming to minimize their own share of the cost, so the cost-sharing mechanism induces a game among them. We approach this problem from the perspective of a designer who can select which cost-sharing mechanism to use, aiming to minimize the price of anarchy (PoA) of the induced games. Recent work introduced the class of resource-aware cost-sharing mechanisms, whose decisions can depend on the set of machines in the system, but are oblivious to the total number of users. These mechanisms can guarantee low PoA bounds for instances where the cost functions of the machines are all convex or concave, but can suffer from very high PoA for cost functions that deviate from these families.
We revisit the well-studied problem of budget-feasible procurement, where a buyer with a strict budget constraint seeks to acquire services from a group of strategic providers (the sellers). During the last decade, several strategyproof budget-feasible procurement auctions have been proposed, aiming to maximize the value of the buyer, while eliciting each seller's true cost for providing their service. These solutions predominantly take the form of randomized sealed-bid auctions: they ask the sellers to report their private costs and then use randomization to determine which subset of services will be procured and how much each of the chosen providers will be paid, ensuring that the total payment does not exceed the budget. Our main result in this paper is a novel method for designing budget-feasible auctions, leading to solutions that outperform the previously proposed auctions in multiple ways. First, our solutions take the form of descending clock auctions, and thus satisfy a list of properties, such as obvious strategyproofness, group strategyproofness, transparency, and unconditional winner privacy; this makes these auctions much more likely to be used in practice. Second, in contrast to previous results that heavily depend on randomization, our auctions are deterministic. As a result, we provide an affirmative answer to one of the main open questions in this literature, asking whether a deterministic strategyproof auction can achieve a constant approximation when the buyer's valuation function is submodular over the set of services. In addition, we also provide the first deterministic budget-feasible auction that matches the approximation bound of the best-known randomized auction for the class of subadditive valuations. Finally, using our method, we improve the best-known approximation factor for monotone submodular valuations, which has been the focus of most of the prior work.
The problem of fair division known as "cake cutting" has been the focus of multiple papers spanning several decades. The most prominent problem in this line of work has been to bound the query complexity of computing an envy-free outcome in the Robertson-Webb query model. However, the root of this problem's complexity is somewhat artificial: the agents' values are assumed to be additive across different pieces of the "cake" but infinitely complicated within each piece. This is unrealistic in most of the motivating examples, where the cake represents a finite collection of homogeneous goods. We address this issue by introducing a fair division model that more accurately captures these applications: the value that an agent gains from a given good depends only on the amount of the good they receive, yet it can be an arbitrary function of this amount, allowing the agents to express preferences that go beyond standard cake cutting. In this model, we study the query complexity of computing allocations that are not just envy-free, but also approximately Pareto optimal among all envy-free allocations. Using a novel flow-based approach, we show that we can encode the ex-post feasibility of randomized allocations via a polynomial number of constraints, which reduces our problem to solving a linear program.
In a single-parameter mechanism design problem, a provider is looking to sell a service to a group of potential buyers. Each buyer $i$ has a private value $v_i$ for receiving the service and a feasibility constraint restricts which sets of buyers can be served simultaneously. Recent work in economics introduced clock auctions as a superior class of auctions for this problem, due to their transparency, simplicity, and strong incentive guarantees. Subsequent work focused on evaluating the social welfare approximation guarantees of these auctions, leading to strong impossibility results: in the absence of prior information regarding the buyers' values, no deterministic clock auction can achieve a bounded approximation, even for simple feasibility constraints with only two maximal feasible sets. We show that these negative results can be circumvented by using prior information or by leveraging randomization. We provide clock auctions that give an $O(\log\log k)$ approximation for general downward-closed feasibility constraints with $k$ maximal feasible sets for three different information models, ranging from full access to the value distributions to complete absence of information. The more information the seller has, the simpler these auctions are. Under full access, we use a particularly simple deterministic clock auction, called a single-price clock auction, which is only slightly more complex than posted price mechanisms. In this auction, each buyer is offered a single price and a feasible set is selected among those who accept their offers. At the other extreme, where no prior information is available, this approximation guarantee is obtained using a complex randomized clock auction. In addition to our main results, we propose a parameterization that interpolates between single-price clock auctions and general clock auctions, paving the way for an exciting line of future research.
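The description of a single-price clock auction above is concrete enough to sketch: each buyer receives one price offer, the buyers whose value meets the offer stay in, and a feasible set is then served among the accepters. The snippet below is only a minimal illustration of that format; the prices, values, and feasibility oracle are placeholder toy data, not the auction from the paper.

```python
from itertools import combinations

def single_price_clock_auction(values, prices, is_feasible):
    """Offer each buyer one take-it-or-leave-it price; among the buyers who
    accept, serve a feasible subset (here, the one maximizing welfare).

    values[i]:      buyer i's private value (used only to simulate accept/reject)
    prices[i]:      the single price offered to buyer i
    is_feasible(S): True if the set of buyers S can be served simultaneously
    """
    accepters = [i for i, v in enumerate(values) if v >= prices[i]]
    best, best_welfare = set(), 0.0
    for r in range(1, len(accepters) + 1):          # brute force, for illustration
        for subset in map(set, combinations(accepters, r)):
            if is_feasible(subset):
                welfare = sum(values[i] for i in subset)
                if welfare > best_welfare:
                    best, best_welfare = subset, welfare
    return best, best_welfare

# Toy instance with two maximal feasible sets {0, 1} and {2}.
feasible_sets = [{0, 1}, {2}]
is_feasible = lambda S: any(S <= F for F in feasible_sets)
print(single_price_clock_auction(values=[5, 3, 6], prices=[4, 2, 7], is_feasible=is_feasible))
```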
We study the problem of selling a good to a group of bidders with interdependent values in a prior-free setting. Each bidder has a signal that can take one of $k$ different values, and her value for the good is a weakly increasing function of all the bidders' signals. The bidders are partitioned into $\ell$ expertise-groups, based on how their signal can impact the values for the good, and we prove upper and lower bounds regarding the approximability of social welfare and revenue for a variety of settings, parameterized by $k$ and $\ell$. Our lower bounds apply to all ex-post incentive compatible mechanisms and our upper bounds are all within a small constant of the lower bounds. Our main results take the appealing form of ascending clock auctions and provide strong incentives by admitting the desired outcomes as obvious ex-post equilibria.
In a decentralized system with $m$ machines, we study the selfish scheduling problem where each user strategically chooses which machine to use. Each machine incurs a cost, which is a function of the total load assigned to it, and some cost-sharing mechanism distributes this cost among the machine's users. The users choose a machine aiming to minimize their own share of the cost, so the cost-sharing mechanism induces a game among them. We approach this problem from the perspective of a designer who can select which cost-sharing mechanism to use, aiming to minimize the price of anarchy (PoA) of the induced games. Recent work introduced the class of \emph{resource-aware} cost-sharing mechanisms, whose decisions can depend on the set of machines in the system, but are oblivious to the total number of users. These mechanisms can guarantee low PoA bounds for instances where the cost functions of the machines are all convex or concave, but can suffer from very high PoA for cost functions that deviate from these families. In this paper we show that if we enhance the class of resource-aware mechanisms with some prior information regarding the users, then they can achieve low PoA for a much more general family of cost functions. We first show that, as long as the mechanism knows just two of the participating users, it can assign special roles to them and ensure a constant PoA. We then extend this idea to settings where the mechanism has access to the probability with which each user is present in the system. For all these instances, we provide a mechanism that achieves an expected PoA that is logarithmic in the expected number of users.
We study the classic problem of fairly allocating a set of indivisible goods among a group of agents, and focus on the notion of approximate proportionality known as PROPm. Prior work showed that there exists an allocation that satisfies this notion of fairness for instances involving up to five agents, but fell short of proving that this is true in general. We extend this result to show that a PROPm allocation is guaranteed to exist for all instances, independent of the number of agents or goods. Our proof is constructive, providing an algorithm that computes such an allocation and, unlike prior work, the running time of this algorithm is polynomial in both the number of agents and the number of goods.
We study the problem of fairly allocating indivisible goods and focus on the classic fairness notion of proportionality. The indivisibility of the goods has long been known to pose highly non-trivial obstacles to achieving fairness, and a very vibrant line of research has aimed to circumvent them using appropriate notions of approximate fairness. Recent work has established that even approximate versions of proportionality (PROPx) may be impossible to achieve even for small instances, while the best known achievable approximations (PROP1) are much weaker. We introduce the notion of proportionality up to the maximin item (PROPm) and show how to reach an allocation satisfying this notion for any instance involving up to five agents with additive valuations. PROPm provides a well-motivated middle ground between PROP1 and PROPx, while also capturing some elements of the well-studied maximin share (MMS) benchmark: another relaxation of proportionality that has attracted a lot of attention.
We study the distortion of one-sided and two-sided matching problems on the line. In the one-sided case, $n$ agents need to be matched to $n$ items, and each agent's cost in a matching is their distance from the item they were matched to. We propose an algorithm that is provided only with ordinal information regarding the agents' preferences (each agent's ranking of the items from most- to least-preferred) and returns a matching aiming to minimize the social cost with respect to the agents' true (cardinal) costs. We prove that our algorithm simultaneously achieves the best-possible approximation of $3$ (known as distortion) with respect to a variety of social cost measures, which include the utilitarian and egalitarian social cost. In the two-sided case, where the agents need to be matched to $n$ other agents and both sides report their ordinal preferences over each other, we show that it is always possible to compute an optimal matching. In fact, we show that this optimal matching can be achieved using even less information, and we provide bounds regarding the sufficient number of queries.
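To make the two objectives concrete: with agents and items located on the real line, the utilitarian social cost of a matching is the sum of agent-to-matched-item distances and the egalitarian social cost is the maximum such distance. The helper below just evaluates a given matching under both; the positions and the matching are illustrative and not the output of the paper's algorithm.

```python
def matching_costs(agent_pos, item_pos, matching):
    """Social costs of a one-sided matching on the line.

    agent_pos[i]: location of agent i
    item_pos[j]:  location of item j
    matching[i]:  item matched to agent i
    """
    dists = [abs(agent_pos[i] - item_pos[matching[i]]) for i in range(len(agent_pos))]
    return sum(dists), max(dists)  # (utilitarian, egalitarian)

# Illustrative positions for 3 agents and 3 items on the line.
utilitarian, egalitarian = matching_costs(
    agent_pos=[0.0, 2.0, 5.0], item_pos=[1.0, 3.0, 4.0], matching=[0, 1, 2]
)
print(utilitarian, egalitarian)  # 3.0 1.0
```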
The two standard fairness notions in the resource allocation literature are proportionality and envy-freeness. If there are n agents competing for the available resources, then proportionality requires that each agent receives at least a 1/n fraction of their total value for the set of resources. On the other hand, envy-freeness requires that each agent weakly prefers the resources allocated to them over those allocated to any other agent. Each of these notions has its own benefits, but it is well known that neither one of the two is always achievable when the resources being allocated are indivisible. As a result, a lot of work has focused on satisfying fairness notions that relax either proportionality or envy-freeness. In this paper, we focus on MXS (a relaxation of proportionality) and EFL (a relaxation of envy-freeness). Each of these notions was previously shown to be achievable on its own [Barman et al., 2018; Caragiannis et al., 2023], and our main result is an algorithm that computes allocations that simultaneously satisfy both, combining the benefits of approximate proportionality and approximate envy-freeness. In fact, we prove this for any instance involving agents with valuation functions that are restricted MMS-feasible, which are more general than additive valuations. Also, since every EFL allocation directly satisfies other well-studied fairness notions like EF1, 1/2-EFX, 1/2-GMMS, and 2/3-PMMS, and every MXS allocation satisfies 4/7-MMS, the allocations returned by our algorithm simultaneously satisfy a wide variety of fairness notions and are, therefore, universally fair [Amanatidis et al., 2020].
We study the fair allocation of indivisible goods among a group of agents, aiming to limit the envy between any two agents. The central open problem in this literature, which has proven to be extremely challenging, is regarding the existence of an EFX allocation, i.e., an allocation such that any envy from some agent i toward another agent j would vanish if we were to remove any single good from the bundle allocated to j. When the agents' valuations are additive, which has been the main focus of prior works, Chaudhury et al. [2024] showed that an EFX allocation is guaranteed to exist for all instances involving up to three agents. Subsequently, Berger et al. [2022] extended this guarantee to nice-cancelable valuations and Akrami et al. [2023] to MMS-feasible valuations. However, the existence of EFX allocations for instances involving four agents remains open, even for additive valuations. We contribute to this literature by focusing on EF2X, a relaxation of EFX which requires that any envy toward some agent vanishes if any two of the goods allocated to that agent were to be removed. Our main result shows that EF2X allocations are guaranteed to exist for any instance with four agents, even for the class of cancelable valuations, which is more general than additive. Our proof is constructive, proposing an algorithm that computes such an allocation in pseudopolynomial time. Furthermore, for instances involving three agents we provide an algorithm that computes an EF2X allocation in polynomial time, in contrast to EFX, for which the fastest known algorithm for three agents is only pseudopolynomial.
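Under additive valuations, the EF2X condition described above can be checked directly: for every pair of agents i and j, and every pair of goods in j's bundle, agent i should value her own bundle at least as much as j's bundle with those two goods removed (bundles with at most two goods are trivially fine for nonnegative values). The checker below is a minimal sketch with made-up data, included only to illustrate the definition.

```python
from itertools import combinations

def is_ef2x(valuations, allocation):
    """Check EF2X for additive (nonnegative) valuations.

    valuations[i][g]: agent i's value for good g
    allocation[j]:    list of goods held by agent j
    """
    n = len(valuations)
    for i in range(n):
        v = valuations[i]
        own = sum(v[g] for g in allocation[i])
        for j in range(n):
            if j == i:
                continue
            bundle = allocation[j]
            if len(bundle) <= 2:
                continue  # removing any two goods empties the bundle: trivially fine
            total_j = sum(v[g] for g in bundle)
            for g, h in combinations(bundle, 2):
                if own + 1e-9 < total_j - v[g] - v[h]:
                    return False
    return True

# Toy instance: 2 agents, 4 goods (values are illustrative only).
vals = [[6, 1, 1, 1], [1, 3, 3, 3]]
alloc = [[0], [1, 2, 3]]
print(is_ef2x(vals, alloc))  # True: agent 0's bundle beats what remains after removing any two goods
```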
The widespread use of machine learning models in high-stakes domains can have a major negative impact, especially on individuals who receive undesirable outcomes. Algorithmic recourse provides such individuals with suggestions of minimum-cost improvements they can make to achieve a desirable outcome in the future. However, machine learning models often get updated over time and this can cause a recourse to become invalid (i.e., not lead to the desirable outcome). The robust recourse literature aims to choose recourses that are less sensitive, even against adversarial model changes, but this comes at a higher cost. To overcome this obstacle, we initiate the study of algorithmic recourse through the learning-augmented framework and evaluate the extent to which a designer equipped with a prediction regarding future model changes can reduce the cost of recourse when the prediction is accurate (consistency) while also limiting the cost even when the prediction is inaccurate (robustness). We propose a novel algorithm for this problem, study the robustness-consistency trade-off, and analyze how prediction accuracy affects performance.
We revisit the canonical problem of strategic facility location and study the power and limitations of randomization in the design of truthful mechanisms augmented with machine-learned predictions. In the strategic facility location problem, a set of agents are asked to report their locations in some metric space and the goal is to use these reported locations to determine where to open a new facility, aiming to optimize some aggregate measure of distance of the agents from that facility. However, the agents are strategic and can misreport their locations if this may lead to a facility location choice that they prefer. The goal is to design truthful mechanisms, which ensure the agents cannot benefit by misreporting. A lot of prior work has studied this problem from a worst-case perspective, but a recent line of work proposed a framework to refine these results when the designer is provided with some, potentially very inaccurate, predictions regarding the agents' true locations. The goal is to simultaneously provide strong consistency guarantees (i.e., guarantees when the predictions provided to the mechanism are correct) and near-optimal robustness guarantees (i.e., guarantees that hold irrespective of how inaccurate the predictions may be). In this work we focus on the power of randomization in this problem and analyze the best approximation guarantees achievable with respect to the egalitarian social cost measure for one- and two-dimensional Euclidean spaces.
We provide the first analysis of clock auctions through the learning-augmented framework. Deferred-acceptance clock auctions are a compelling class of mechanisms satisfying a unique list of highly practical properties, including obvious strategy-proofness, transparency, and unconditional winner privacy, making them particularly well-suited for real-world applications. However, early work that evaluated their performance from a worst-case analysis standpoint concluded that no deterministic clock auction can achieve much better than an $O(\log n)$ approximation of the optimal social welfare (where $n$ is the number of bidders participating in the auction), even in seemingly very simple settings. To overcome this overly pessimistic impossibility result, which heavily depends on the assumption that the designer has no information regarding the preferences of the participating bidders, we leverage the learning-augmented framework. This framework assumes that the designer is provided with some advice regarding what the optimal solution may be. This advice may be the product of machine-learning algorithms applied to historical data, so it can provide very useful guidance, but it can also be highly unreliable. Our main results are learning-augmented clock auctions that use this advice to achieve much stronger performance guarantees whenever the advice is accurate (known as consistency), while simultaneously maintaining worst-case guarantees even if this advice is arbitrarily inaccurate (known as robustness). Specifically, for the standard notion of consistency, we provide a clock auction that achieves the best of both worlds: $(1+\epsilon)$-consistency for any constant $\epsilon > 0$ and $O(\log n)$ robustness. We then also consider a much stronger notion of consistency and provide an auction that achieves the optimal trade-off between this notion of consistency and robustness.
We study fair resource allocation with strategic agents. It is well-known that, across multiple fundamental problems in this domain, truthfulness and fairness are incompatible. For example, when allocating indivisible goods, no truthful and deterministic mechanism can guarantee envy-freeness up to one item (EF1), even for two agents with additive valuations. Or, in cake-cutting, no truthful and deterministic mechanism always outputs a proportional allocation, even for two agents with piecewise constant valuations. Our work stems from the observation that, in the context of fair division, truthfulness is used as a synonym for Dominant Strategy Incentive Compatibility (DSIC), requiring that an agent prefers reporting the truth, no matter what other agents report. In this paper, we instead focus on Bayesian Incentive Compatible (BIC) mechanisms, requiring that agents are better off reporting the truth in expectation over other agents' reports. We prove that, when agents know a bit less about each other, a lot more is possible: using BIC mechanisms we can achieve fairness notions that are unattainable by DSIC mechanisms in both the fundamental problems of allocation of indivisible goods and cake-cutting. We prove that this is the case even for an arbitrary number of agents, as long as the agents' priors about each others' types satisfy a neutrality condition. Notably, for the case of indivisible goods, we significantly strengthen the state-of-the-art negative result for efficient DSIC mechanisms, while also highlighting the limitations of BIC mechanisms, by showing that a very general class of welfare objectives is incompatible with Bayesian Incentive Compatibility. Combined, these results give a near-complete picture of the power and limitations of BIC and DSIC mechanisms for the problem of allocating indivisible goods.
In this work, we introduce an alternative model for the design and analysis of strategyproof mechanisms that is motivated by the recent surge of work in “learning-augmented algorithms.” Aiming to complement the traditional worst-case analysis approach in computer science, this line of work has focused on the design and analysis of algorithms that are enhanced with machine-learned predictions. The algorithms can use the predictions as a guide to inform their decisions, aiming to achieve much stronger performance guarantees when these predictions are accurate (consistency), while also maintaining near-optimal worst-case guarantees, even if these predictions are inaccurate (robustness). We initiate the design and analysis of strategyproof mechanisms that are augmented with predictions regarding the private information of the participating agents. To exhibit the important benefits of this approach, we revisit the canonical problem of facility location with strategic agents in the two-dimensional Euclidean space. We study both the egalitarian and utilitarian social cost functions, and we propose new strategyproof mechanisms that leverage predictions to guarantee an optimal trade-off between consistency and robustness. Furthermore, we also prove parameterized approximation results as a function of the prediction error, showing that our mechanisms perform well, even when the predictions are not fully accurate. Funding: The work of E. Balkanski was supported in part by the National Science Foundation [Grants CCF-2210501 and IIS-2147361]. The work of V. Gkatzelis and X. Tan was supported in part by the National Science Foundation [Grant CCF-2210502] and [CAREER Award CCF-2047907].
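For reference, the two objectives mentioned here have simple forms in the plane: given the agents' locations and a chosen facility point, the utilitarian social cost is the sum of Euclidean distances to the facility and the egalitarian social cost is the maximum distance. The snippet below just evaluates a candidate facility under both objectives on made-up data; it is not one of the paper's mechanisms.

```python
from math import dist  # Euclidean distance (Python 3.8+)

def social_costs(agent_locations, facility):
    """Utilitarian (sum) and egalitarian (max) cost of a facility location."""
    distances = [dist(a, facility) for a in agent_locations]
    return sum(distances), max(distances)

# Illustrative 2D instance with three agents.
agents = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
print(social_costs(agents, facility=(1.0, 1.0)))
```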
We study the problem of allocating indivisible items to budget-constrained agents, aiming to provide fairness and efficiency guarantees. Specifically, our goal is to ensure that the resulting allocation is envy-free up to any item (EFx) while minimizing the amount of inefficiency that this needs to introduce. We first show that there exist two-agent problem instances for which no EFx allocation is Pareto-efficient. We, therefore, turn to approximation and use the (Pareto-efficient) maximum Nash welfare allocation as a benchmark. For two-agent instances, we provide a procedure that always returns an EFx allocation while achieving the best possible approximation of the optimal Nash social welfare that EFx allocations can achieve. For the more complicated case of three-agent instances, we provide a procedure that guarantees EFx, while achieving a constant approximation of the optimal Nash social welfare for any number of items.
We design online algorithms for fair allocation of public goods to a set of N agents over a sequence of T rounds and focus on improving their performance using predictions. In the basic model, a public good arrives in each round, and every agent reveals their value for it upon arrival. The algorithm must irrevocably decide the investment in this good without exceeding a total budget of B across all rounds. The algorithm can utilize (potentially noisy) predictions of each agent’s total value for all remaining goods. The algorithm's performance is measured using a proportional fairness objective, which informally demands that every group of agents be rewarded in proportion to its size and the cohesiveness of its preferences. We show that no algorithm can achieve better than Θ(T/B) proportional fairness without predictions. With reasonably accurate predictions, the situation improves significantly, and Θ(log(T/B)) proportional fairness is achieved. We also extend our results to a general setting wherein a batch of L public goods arrive in each round and O(log(min(N,L)T/B)) proportional fairness is achieved. Our exact bounds are parameterized as a function of the prediction error, with performance degrading gracefully with increasing errors.
We study the problem of designing voting rules that take as input the ordinal preferences of n agents over a set of n alternatives and output a single alternative, aiming to optimize the overall happiness of the agents. The input to the voting rule is each agent's ranking of the alternatives from most to least preferred, yet the agents have more refined (cardinal) preferences that capture the intensity with which they prefer one alternative over another. To quantify the extent to which voting rules can optimize over the cardinal preferences given access only to the ordinal ones, prior work has used the distortion measure, i.e., the worst-case approximation ratio between a voting rule's performance and the best performance achievable given the cardinal preferences.
A surge of recent work has focused on analyzing the performance of algorithms guided by predictions, aiming to enhance their worst-case performance guarantees with improved guarantees when the predictions are accurate. This "learning-augmented" framework was recently also extended to mechanism design settings involving strategic agents and we provide an overview of these results.
We study the problem faced by a data analyst or platform that wishes to collect private data from privacy-aware agents. To incentivize participation, in exchange for this data, the platform provides a service to the agents in the form of a statistic computed using all agents' submitted data. The agents decide whether to join the platform (and truthfully reveal their data) or not participate by considering both the privacy costs of joining and the benefit they get from obtaining the statistic. The platform must ensure the statistic is computed differentially privately and chooses a central level of noise to add to the computation, but can also induce personalized privacy levels (or costs) by giving different weights to different agents in the computation as a function of their heterogeneous privacy preferences (which are known to the platform). We assume the platform aims to optimize the accuracy of the statistic, and must pick the privacy level of each agent to trade-off between i) incentivizing more participation and ii) adding less noise to the estimate. We provide a semi-closed form characterization of the optimal choice of agent weights for the platform in two variants of our model. In both of these models, we identify a common nontrivial structure in the platform's optimal solution: an instance-specific number of agents with the least stringent privacy requirements are pooled together and given the same weight, while the weights of the remaining agents decrease as a function of the strength of their privacy requirement. We also provide algorithmic results on how to find the optimal value of the noise parameter used by the platform and of the weights given to the agents.
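As a rough illustration of the kind of computation involved, the sketch below releases a weighted average of the participants' data with Laplace noise calibrated to the weights, a standard way to obtain ε-differential privacy for data in [0, 1]. The weights, data, and ε here are placeholders, and this is not the paper's optimal weighting scheme; it only shows how per-agent weights interact with the amount of central noise.

```python
import random

def private_weighted_mean(data, weights, epsilon):
    """Release a weighted mean of values in [0, 1] with epsilon-DP Laplace noise.

    Replacing one agent's value changes the weighted mean by at most
    max(weights) / sum(weights); that quantity is used as the sensitivity.
    """
    total_w = sum(weights)
    true_mean = sum(w * x for w, x in zip(weights, data)) / total_w
    scale = (max(weights) / total_w) / epsilon
    # A Laplace(0, scale) sample is the difference of two exponentials.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_mean + noise

# Illustrative: agents with stricter privacy preferences get smaller weights.
print(private_weighted_mean(data=[0.2, 0.9, 0.5], weights=[1.0, 1.0, 0.4], epsilon=0.5))
```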
We study the problem of allocating indivisible items to budget-constrained agents, aiming to provide fairness and efficiency guarantees. Specifically, our goal is to ensure that the resulting allocation is envy-free up to any item (EFx) while minimizing the amount of inefficiency that this needs to introduce. We first show that there exist two-agent problem instances for which no EFx allocation is Pareto efficient. We, therefore, turn to approximation and use the Nash social welfare maximizing allocation as a benchmark. For two-agent instances, we provide a procedure that always returns an EFx allocation while achieving the best possible approximation of the optimal Nash social welfare that EFx allocations can achieve. For the more complicated case of three-agent instances, we provide a procedure that guarantees EFx, while achieving a constant approximation of the optimal Nash social welfare for any number of items.
We study the problem of designing voting rules that take as input the ordinal preferences of $n$ agents over a set of $m$ alternatives and output a single alternative, aiming to optimize the overall happiness of the agents. The input to the voting rule is each agent's ranking of the alternatives from most to least preferred, yet the agents have more refined (cardinal) preferences that capture the intensity with which they prefer one alternative over another. To quantify the extent to which voting rules can optimize over the cardinal preferences given access only to the ordinal ones, prior work has used the distortion measure, i.e., the worst-case approximation ratio between a voting rule's performance and the best performance achievable given the cardinal preferences. The work on the distortion of voting rules has been largely divided into two worlds: utilitarian distortion and metric distortion. In the former, the cardinal preferences of the agents correspond to general utilities and the goal is to maximize a normalized social welfare. In the latter, the agents' cardinal preferences correspond to costs given by distances in an underlying metric space and the goal is to minimize the (unnormalized) social cost. Several deterministic and randomized voting rules have been proposed and evaluated for each of these worlds separately, gradually improving the achievable distortion bounds, but none of the known voting rules perform well in both worlds simultaneously. In this work, we prove that one can achieve the best of both worlds by designing new voting rules, that simultaneously achieve near-optimal distortion guarantees in both distortion worlds. We also prove that this positive result does not generalize to the case where the voting rule is provided with the rankings of only the top-$t$ alternatives of each agent, for $t<m$.
We study fair resource allocation with strategic agents. It is well-known that, across multiple fundamental problems in this domain, truthfulness and fairness are incompatible. For example, when allocating indivisible goods, there is no truthful and deterministic mechanism that guarantees envy-freeness up to one item (EF1), even for two agents with additive valuations. Or, in cake-cutting, no truthful and deterministic mechanism always outputs a proportional allocation, even for two agents with piecewise-constant valuations. Our work stems from the observation that, in the context of fair division, truthfulness is used as a synonym for Dominant Strategy Incentive Compatibility (DSIC), requiring that an agent prefers reporting the truth, no matter what other agents report. In this paper, we instead focus on Bayesian Incentive Compatible (BIC) mechanisms, requiring that agents are better off reporting the truth in expectation over other agents' reports. We prove that, when agents know a bit less about each other, a lot more is possible: using BIC mechanisms we can overcome the aforementioned barriers that DSIC mechanisms face in both the fundamental problems of allocation of indivisible goods and cake-cutting. We prove that this is the case even for an arbitrary number of agents, as long as the agents' priors about each others' types satisfy a neutrality condition. En route to our results on BIC mechanisms, we also strengthen the state of the art in terms of negative results for DSIC mechanisms.
In the metric distortion problem there is a set of candidates and a set of voters, all residing in the same metric space. The objective is to choose a candidate with minimum social cost, defined as the total distance of the chosen candidate from all voters. The challenge is that the algorithm receives only ordinal input from each voter, in the form of a ranked list of candidates in non-decreasing order of their distances from her, whereas the objective function is cardinal. The distortion of an algorithm is its worst-case approximation factor with respect to the optimal social cost. A series of papers culminated in a 3-distortion algorithm, which is tight with respect to all deterministic algorithms. Aiming to overcome the limitations of worst-case analysis, we revisit the metric distortion problem through the learning-augmented framework, where the algorithm is provided with some prediction regarding the optimal candidate. The quality of this prediction is unknown, and the goal is to evaluate the performance of the algorithm under an accurate prediction (known as consistency), while simultaneously providing worst-case guarantees even for arbitrarily inaccurate predictions (known as robustness). For our main result, we characterize the robustness-consistency Pareto frontier for the metric distortion problem. We first identify an inevitable trade-off between robustness and consistency. We then devise a family of learning-augmented algorithms that achieves any desired robustness-consistency pair on this Pareto frontier. Furthermore, we provide a more refined analysis of the distortion bounds as a function of the prediction error (with consistency and robustness being two extremes). Finally, we also prove distortion bounds that integrate the notion of $\alpha$-decisiveness, which quantifies the extent to which a voter prefers her favorite candidate relative to the rest.
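To ground the definitions: given the voter-candidate distances, the social cost of a candidate is the sum of its distances to all voters, and the per-instance distortion of a chosen candidate is its social cost divided by that of the optimal candidate; the worst case of this ratio, over all instances consistent with the ordinal input, is what the 3-distortion bound refers to. The snippet below computes these per-instance quantities on a made-up distance matrix, purely for illustration.

```python
def social_cost(distances, candidate):
    """Sum of distances from all voters to the given candidate.

    distances[v][c]: distance between voter v and candidate c.
    """
    return sum(row[candidate] for row in distances)

def per_instance_distortion(distances, chosen):
    """Ratio of the chosen candidate's social cost to the optimal one."""
    num_candidates = len(distances[0])
    opt = min(social_cost(distances, c) for c in range(num_candidates))
    return social_cost(distances, chosen) / opt

# Illustrative instance: 3 voters, 2 candidates.
d = [[1.0, 2.0],
     [1.0, 2.0],
     [4.0, 1.0]]
print(per_instance_distortion(d, chosen=0))  # 6.0 / 5.0 = 1.2
```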
Aiming to overcome some of the limitations of worst-case analysis, the recently proposed framework of "algorithms with predictions" allows algorithms to be augmented with a (possibly erroneous) machine-learned prediction that they can use as a guide. In this framework, the goal is to obtain improved guarantees when the prediction is correct, which is called \emph{consistency}, while simultaneously guaranteeing some worst-case bounds even when the prediction is arbitrarily wrong, which is called \emph{robustness}. The vast majority of the work on this framework has focused on a refined analysis of online algorithms augmented with predictions regarding the future input. A subsequent line of work has also successfully adapted this framework to mechanism design, where the prediction is regarding the private information of strategic agents. In this paper, we initiate the study of online mechanism design with predictions, which combines the challenges of online algorithms with predictions and mechanism design with predictions. We consider the well-studied problem of designing a revenue-maximizing auction to sell a single item to strategic bidders who arrive and depart over time, each with an unknown, private, value for the item. We study the learning-augmented version of this problem where the auction designer is given a prediction regarding the maximum value over all agents. Our main result is a strategyproof mechanism whose revenue guarantees are $\alpha$-consistent with respect to the highest value and $(1-\alpha^2)/4$-robust with respect to the second-highest value, for $\alpha \in [0,1]$. We show that this tradeoff is optimal within a broad and natural family of auctions, meaning that any $\alpha$-consistent mechanism in that family has robustness at most $(1-\alpha^2)/4$. Finally, we extend our mechanism to also achieve expected revenues proportional to the prediction quality.
In this work we introduce an alternative model for the design and analysis of strategyproof mechanisms that is motivated by the recent surge of work in "learning-augmented algorithms". Aiming to complement the traditional approach in computer science, which analyzes the performance of algorithms based on worst-case instances, this line of work has focused on the design and analysis of algorithms that are enhanced with machine-learned predictions regarding the optimal solution. The algorithms can use the predictions as a guide to inform their decisions, and the goal is to achieve much stronger performance guarantees when these predictions are accurate (consistency), while also maintaining near-optimal worst-case guarantees, even if these predictions are very inaccurate (robustness). So far, these results have been limited to algorithms, but in this work we argue that another fertile ground for this framework is in mechanism design.
A central goal in algorithmic game theory is to analyze the performance of decentralized multiagent systems, like communication and information networks. In the absence of a central planner who can enforce how these systems are utilized, the users can strategically interact with the system, aiming to maximize their own utility, possibly leading to very inefficient outcomes, and thus a high price of anarchy. To alleviate this issue, the system designer can use decentralized mechanisms that regulate the use of each resource (e.g., using local queuing protocols or scheduling mechanisms), but with only limited information regarding the state of the system. These information limitations have a severe impact on what such decentralized mechanisms can achieve, so most of the success stories in this literature have had to make restrictive assumptions (e.g., by either restricting the structure of the networks or the types of cost functions).
In a single-parameter mechanism design problem, a provider is looking to sell some service to a group of potential buyers. Each buyer i has a private value v_i for receiving this service, and some feasibility constraint restricts which subsets of buyers can be served simultaneously. Recent work in economics introduced (deferred-acceptance) clock auctions as a superior class of auctions for this problem, due to their transparency, simplicity, and very strong incentive guarantees. Subsequent work in computer science focused on evaluating these auctions with respect to their social welfare approximation guarantees, leading to strong impossibility results: in the absence of prior information regarding the buyers' values, no deterministic clock auction can achieve a bounded approximation, even for simple feasibility constraints with only two maximal feasible sets.
Deterministic Budget-Feasible Clock Auctions (Eric Balkanski, Pranav Garimidi, Vasilis Gkatzelis, Daniel Schoepflin, and Xizhi Tan), Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 2940-2963, https://doi.org/10.1137/1.9781611977073.114. Abstract: We revisit the well-studied problem of budget-feasible procurement, where a buyer with a strict budget constraint seeks to acquire services from a group of strategic providers (the sellers). During the last decade, several strategyproof budget-feasible procurement auctions have been proposed, aiming to maximize the value of the buyer, while eliciting each seller's true cost for providing their service. These solutions predominantly take the form of randomized sealed-bid auctions: they ask the sellers to report their private costs and then use randomization to determine which subset of services will be procured and how much each of the chosen providers will be paid, ensuring that the total payment does not exceed the buyer's budget. Our main result in this paper is a novel method for designing budget-feasible auctions, leading to solutions that outperform the previously proposed auctions in multiple ways. First, our solutions take the form of descending clock auctions, and thus satisfy a list of very appealing properties, such as obvious strategyproofness, group strategyproofness, transparency, and unconditional winner privacy; this makes these auctions much more likely to be used in practice. Second, in contrast to previous results that heavily depend on randomization, our auctions are deterministic. As a result, we provide an affirmative answer to one of the main open questions in this literature, asking whether a deterministic strategyproof auction can achieve a constant approximation when the buyer's valuation function is submodular over the set of services. In addition to this, we also provide the first deterministic budget-feasible auction that matches the approximation bound of the best-known randomized auction for the class of subadditive valuations. Finally, using our method, we improve the best-known approximation factor for monotone submodular valuations, which has been the focus of most of the prior work.
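For readers unfamiliar with the format, the sketch below is a bare-bones skeleton of a descending clock auction for procurement: every seller's offer can only go down over time, a seller exits once her standing offer drops below her cost, and the auction stops when the remaining offers fit within the budget, paying each winner her final offer. The naive price-update rule and the toy numbers are placeholders for illustration; this is not the paper's auction, which also takes the buyer's valuation into account.

```python
def descending_clock_procurement(costs, budget, start_offer, step=1.0):
    """Skeleton of a descending clock auction for budget-feasible procurement.

    costs[i]:    seller i's private cost (used only to simulate exit decisions)
    budget:      the buyer's total budget
    start_offer: the initial payment offered to every seller
    """
    offers = {i: start_offer for i in range(len(costs))}
    active = set(offers)
    while active and sum(offers[i] for i in active) > budget:
        i = max(active, key=lambda s: offers[s])  # lower the highest standing offer
        offers[i] -= step
        if offers[i] < costs[i]:
            active.remove(i)                      # seller i declines and exits
    return {i: offers[i] for i in active}         # winners and their payments

# Illustrative run: three sellers, budget 10.
print(descending_clock_procurement(costs=[3.0, 4.0, 8.0], budget=10.0, start_offer=9.0))
```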
Online Nash Social Welfare Maximization with Predictions (Siddhartha Banerjee, Vasilis Gkatzelis, Artur Gorokh, and Billy Jin), Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 1-19, https://doi.org/10.1137/1.9781611977073.1. Abstract: We consider the problem of allocating a set of divisible goods to N agents in an online manner, aiming to maximize the Nash social welfare, a widely studied objective which provides a balance between fairness and efficiency. The goods arrive in a sequence of T periods and the value of each agent for a good is adversarially chosen when the good arrives. We first observe that no online algorithm can achieve a competitive ratio better than the trivial O(N), unless it is given additional information about the agents' values. Then, in line with the emerging area of "algorithms with predictions", we consider a setting where for each agent, the online algorithm is only given a prediction of her monopolist utility, i.e., her utility if all goods were given to her alone (corresponding to the sum of her values over the T periods). Our main result is an online algorithm whose competitive ratio is parameterized by the multiplicative errors in these predictions. The algorithm achieves a competitive ratio of O(log N) and O(log T) if the predictions are perfectly accurate. Moreover, the competitive ratio degrades smoothly with the errors in the predictions, and is surprisingly robust: the logarithmic competitive ratio holds even if the predictions are very inaccurate. We complement this positive result by showing that our bounds are essentially tight: no online algorithm, even if provided with perfectly accurate predictions, can achieve a competitive ratio of O(log^{1-∊} N) or O(log^{1-∊} T) for any constant ∊ > 0.
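For reference, the Nash social welfare of an allocation is the geometric mean of the agents' utilities (equivalently, their product), which is the objective the online algorithm competes on. A quick helper with illustrative utilities, assuming strictly positive values:

```python
import math

def nash_social_welfare(utilities):
    """Geometric mean of the agents' utilities (requires strictly positive utilities)."""
    n = len(utilities)
    return math.exp(sum(math.log(u) for u in utilities) / n)

print(nash_social_welfare([2.0, 8.0]))  # 4.0
```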
The problem of fair division known as "cake cutting" has been the focus of multiple papers spanning several decades. The most prominent problem in this line of work has been to bound the query complexity of computing an envy-free outcome in the Robertson-Webb query model. However, the root of this problem's complexity is somewhat artificial: the agents' values are assumed to be additive across different pieces of the "cake" but infinitely complicated within each piece. This is unrealistic in most of the motivating examples, where the cake represents a finite collection of homogeneous goods. We address this issue by introducing a fair division model that more accurately captures these applications: the value that an agent gains from a given good depends only on the amount of the good they receive, yet it can be an arbitrary function of this amount, allowing the agents to express preferences that go beyond standard cake cutting. In this model, we study the query complexity of computing allocations that are not just envy-free, but also approximately Pareto optimal among all envy-free allocations. Using a novel flow-based approach, we show that we can encode the ex-post feasibility of randomized allocations via a polynomial number of constraints, which reduces our problem to solving a linear program.
In a single-parameter mechanism design problem, a provider is looking to sell a service to a group of potential buyers. Each buyer $i$ has a private value $v_i$ for receiving the service and a feasibility constraint restricts which sets of buyers can be served simultaneously. Recent work in economics introduced clock auctions as a superior class of auctions for this problem, due to their transparency, simplicity, and strong incentive guarantees. Subsequent work focused on evaluating the social welfare approximation guarantees of these auctions, leading to strong impossibility results: in the absence of prior information regarding the buyers' values, no deterministic clock auction can achieve a bounded approximation, even for simple feasibility constraints with only two maximal feasible sets. We show that these negative results can be circumvented by using prior information or by leveraging randomization. We provide clock auctions that give an $O(\log\log k)$ approximation for general downward-closed feasibility constraints with $k$ maximal feasible sets for three different information models, ranging from full access to the value distributions to complete absence of information. The more information the seller has, the simpler these auctions are. Under full access, we use a particularly simple deterministic clock auction, called a single-price clock auction, which is only slightly more complex than posted price mechanisms. In this auction, each buyer is offered a single price and a feasible set is selected among those who accept their offers. At the other extreme, where no prior information is available, this approximation guarantee is obtained using a complex randomized clock auction. In addition to our main results, we propose a parameterization that interpolates between single-price clock auctions and general clock auctions, paving the way for an exciting line of future research.
In their seminal paper that initiated the field of algorithmic mechanism design, \citet{NR99} studied the problem of designing strategyproof mechanisms for scheduling jobs on unrelated machines aiming to minimize the makespan. They provided a strategyproof mechanism that achieves an $n$-approximation and they made the bold conjecture that this is the best approximation achievable by any deterministic strategyproof scheduling mechanism. After more than two decades and several efforts, $n$ remains the best known approximation and very recent work by \citet{CKK21} has been able to prove an $\Omega(\sqrt{n})$ approximation lower bound for all deterministic strategyproof mechanisms. This strong negative result, however, heavily depends on the fact that the performance of these mechanisms is evaluated using worst-case analysis. To overcome such overly pessimistic, and often uninformative, worst-case bounds, a surge of recent work has focused on the ``learning-augmented framework'', whose goal is to leverage machine-learned predictions to obtain improved approximations when these predictions are accurate (consistency), while also achieving near-optimal worst-case approximations even when the predictions are arbitrarily wrong (robustness). In this work, we study the classic strategic scheduling problem of~\citet{NR99} using the learning-augmented framework and give a deterministic polynomial-time strategyproof mechanism that is $6$-consistent and $2n$-robust. We thus achieve the ``best of both worlds'': an $O(1)$ consistency and an $O(n)$ robustness that asymptotically matches the best-known approximation. We then extend this result to provide more general worst-case approximation guarantees as a function of the prediction error. Finally, we complement our positive results by showing that any $1$-consistent deterministic strategyproof mechanism has unbounded robustness.
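For concreteness, in the unrelated-machines model each job j has a machine-dependent processing time p[i][j], and the makespan of an assignment is the largest total load placed on any machine. The helper below evaluates that objective for a given assignment; the numbers are illustrative and the assignment is arbitrary, not the output of any of the mechanisms discussed above.

```python
def makespan(processing_times, assignment):
    """Makespan of a job-to-machine assignment on unrelated machines.

    processing_times[i][j]: time of job j on machine i
    assignment[j]:          machine that job j is assigned to
    """
    loads = [0.0] * len(processing_times)
    for j, i in enumerate(assignment):
        loads[i] += processing_times[i][j]
    return max(loads)

# 2 machines, 3 jobs (illustrative processing times).
p = [[2.0, 4.0, 1.0],
     [3.0, 1.0, 5.0]]
print(makespan(p, assignment=[0, 1, 0]))  # loads are 3.0 and 1.0, so the makespan is 3.0
```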
We study the problem faced by a data analyst or platform that wishes to collect private data from privacy-aware agents. To incentivize participation, in exchange for this data, the platform provides a service to the agents in the form of a statistic computed using all agents' submitted data. The agents decide whether to join the platform (and truthfully reveal their data) or not participate by considering both the privacy costs of joining and the benefit they get from obtaining the statistic. The platform must ensure the statistic is computed differentially privately and chooses a central level of noise to add to the computation, but can also induce personalized privacy levels (or costs) by giving different weights to different agents in the computation as a function of their heterogeneous privacy preferences (which are known to the platform). We assume the platform aims to optimize the accuracy of the statistic, and must pick the privacy level of each agent to trade-off between i) incentivizing more participation and ii) adding less noise to the estimate. We provide a semi-closed form characterization of the optimal choice of agent weights for the platform in two variants of our model. In both of these models, we identify a common nontrivial structure in the platform's optimal solution: an instance-specific number of agents with the least stringent privacy requirements are pooled together and given the same weight, while the weights of the remaining agents decrease as a function of the strength of their privacy requirement. We also provide algorithmic results on how to find the optimal value of the noise parameter used by the platform and of the weights given to the agents.
In this work we introduce an alternative model for the design and analysis of strategyproof mechanisms that is motivated by the recent surge of work in "learning-augmented algorithms". Aiming to complement the traditional approach in computer science, which analyzes the performance of algorithms based on worst-case instances, this line of work has focused on the design and analysis of algorithms that are enhanced with machine-learned predictions regarding the optimal solution. The algorithms can use the predictions as a guide to inform their decisions, and the goal is to achieve much stronger performance guarantees when these predictions are accurate (consistency), while also maintaining near-optimal worst-case guarantees, even if these predictions are very inaccurate (robustness). So far, these results have been limited to algorithms, but in this work we argue that another fertile ground for this framework is in mechanism design. We initiate the design and analysis of strategyproof mechanisms that are augmented with predictions regarding the private information of the participating agents. To exhibit the important benefits of this approach, we revisit the canonical problem of facility location with strategic agents in the two-dimensional Euclidean space. We study both the egalitarian and utilitarian social cost functions, and we propose new strategyproof mechanisms that leverage predictions to guarantee an optimal trade-off between consistency and robustness guarantees. This provides the designer with a menu of mechanism options to choose from, depending on her confidence regarding the prediction accuracy. Furthermore, we also prove parameterized approximation results as a function of the prediction error, showing that our mechanisms perform well even when the predictions are not fully accurate.
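As a concrete (hypothetical) illustration of how a prediction can be folded into a strategyproof facility location rule, the sketch below inserts the predicted optimal location as a fixed phantom report and places the facility at a coordinatewise lower median; since the phantom does not depend on the reports and a fixed order statistic cannot be dragged toward a misreport, truthfulness is preserved. This is only a toy example in the spirit of the framework, not the mechanism from the paper, and no consistency or robustness bound is claimed for it.
\begin{verbatim}
import numpy as np

def lower_median(values):
    # A fixed order statistic (the lower median); unlike averaging the two
    # middle points, it cannot be pulled toward a strategic misreport.
    v = np.sort(np.asarray(values, dtype=float))
    return v[(len(v) - 1) // 2]

def median_with_prediction(reports, prediction, copies=1):
    # reports: list of (x, y) agent locations; prediction: predicted optimum.
    # The prediction enters as `copies` phantom reports, which do not depend
    # on the agents' reports, so strategyproofness is preserved.
    pts = [tuple(p) for p in reports] + [tuple(prediction)] * copies
    xs, ys = zip(*pts)
    return lower_median(xs), lower_median(ys)
\end{verbatim}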
We design online algorithms for the fair allocation of public goods to a set of $N$ agents over a sequence of $T$ rounds and focus on improving their performance using predictions. In the basic model, a public good arrives in each round, the algorithm learns every agent's value for the good, and must irrevocably decide the amount of investment in the good without exceeding a total budget of $B$ across all rounds. The algorithm can utilize (potentially inaccurate) predictions of each agent's total value for all the goods to arrive. We measure the performance of the algorithm using a proportional fairness objective, which informally demands that every group of agents be rewarded in proportion to its size and the cohesiveness of its preferences. In the special case of binary agent preferences and a unit budget, we show that $O(\log N)$ proportional fairness can be achieved without using any predictions, and that this is optimal even if perfectly accurate predictions were available. However, for general preferences and budget no algorithm can achieve better than $\Theta(T/B)$ proportional fairness without predictions. We show that algorithms with (reasonably accurate) predictions can do much better, achieving $\Theta(\log (T/B))$ proportional fairness. We also extend this result to a general model in which a batch of $L$ public goods arrives in each round and achieve $O(\log (\min(N,L) \cdot T/B))$ proportional fairness. Our exact bounds are parametrized as a function of the error in the predictions and the performance degrades gracefully with increasing errors.
A central goal in algorithmic game theory is to analyze the performance of decentralized multiagent systems, like communication and information networks. In the absence of a central planner who can enforce how these systems are utilized, the users can strategically interact with the system, aiming to maximize their own utility, possibly leading to very inefficient outcomes, and thus a high price of anarchy. To alleviate this issue, the system designer can use decentralized mechanisms that regulate the use of each resource (e.g., using local queuing protocols or scheduling mechanisms), but with only limited information regarding the state of the system. These information limitations have a severe impact on what such decentralized mechanisms can achieve, so most of the success stories in this literature have had to make restrictive assumptions (e.g., by either restricting the structure of the networks or the types of cost functions). In this paper, we overcome some of the obstacles that the literature has imposed on decentralized mechanisms, by designing mechanisms that are enhanced with predictions regarding the missing information. Specifically, inspired by the big success of the literature on "algorithms with predictions", we design decentralized mechanisms with predictions and evaluate their price of anarchy as a function of the prediction error, focusing on two very well-studied classes of games: scheduling games and multicast network formation games.
We study the classic problem of fairly allocating a set of indivisible goods among a group of agents, and focus on the notion of approximate proportionality known as PROPm. Prior work showed that there exists an allocation that satisfies this notion of fairness for instances involving up to five agents, but fell short of proving that this is true in general. We extend this result to show that a PROPm allocation is guaranteed to exist for all instances, independent of the number of agents or goods. Our proof is constructive, providing an algorithm that computes such an allocation and, unlike prior work, the running time of this algorithm is polynomial in both the number of agents and the number of goods.
We study the problem of selling a good to a group of bidders with interdependent values in a prior-free setting. Each bidder has a signal that can take one of $k$ different values, and her value for the good is a weakly increasing function of all the bidders' signals. The bidders are partitioned into $\ell$ expertise-groups, based on how their signal can impact the values for the good, and we prove upper and lower bounds regarding the approximability of social welfare and revenue for a variety of settings, parameterized by $k$ and $\ell$. Our lower bounds apply to all ex-post incentive compatible mechanisms and our upper bounds are all within a small constant of the lower bounds. Our main results take the appealing form of ascending clock auctions and provide strong incentives by admitting the desired outcomes as obvious ex-post equilibria.
In a decentralized system with $m$ machines, we study the selfish scheduling problem where each user strategically chooses which machine to use. Each machine incurs a cost, which is a function of the total load assigned to it, and some cost-sharing mechanism distributes this cost among the machine's users. The users choose a machine aiming to minimize their own share of the cost, so the cost-sharing mechanism induces a game among them. We approach this problem from the perspective of a designer who can select which cost-sharing mechanism to use, aiming to minimize the price of anarchy (PoA) of the induced games. Recent work introduced the class of \emph{resource-aware} cost-sharing mechanisms, whose decisions can depend on the set of machines in the system, but are oblivious to the total number of users. These mechanisms can guarantee low PoA bounds for instances where the cost functions of the machines are all convex or concave, but can suffer from very high PoA for cost functions that deviate from these families. In this paper we show that if we enhance the class of resource-aware mechanisms with some prior information regarding the users, then they can achieve low PoA for a much more general family of cost functions. We first show that, as long as the mechanism knows just two of the participating users, it can assign special roles to them and ensure a constant PoA. We then extend this idea to settings where the mechanism has access to the probability with which each user is present in the system. For all these instances, we provide a mechanism that achieves an expected PoA that is logarithmic in the expected number of users.
We study the problem of fairly allocating indivisible goods and focus on the classic fairness notion of proportionality. The indivisibility of the goods is long known to pose highly non-trivial obstacles to achieving fairness, and a very vibrant line of research has aimed to circumvent them using appropriate notions of approximate fairness. Recent work has established that even approximate versions of proportionality (PROPx) may be impossible to achieve even for small instances, while the best known achievable approximations (PROP1) are much weaker. We introduce the notion of proportionality up to the maximin item (PROPm) and show how to reach an allocation satisfying this notion for any instance involving up to five agents with additive valuations. PROPm provides a well-motivated middle-ground between PROP1 and PROPx, while also capturing some elements of the well-studied maximin share (MMS) benchmark: another relaxation of proportionality that has attracted a lot of attention.
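Roughly, for additive valuations an allocation $(A_1, \ldots, A_n)$ of the goods $M$ is PROPm if every agent's proportional share is covered by her own bundle plus the value of her "maximin item" across the other bundles (glossing over the treatment of empty bundles):
\[
v_i(A_i) \;+\; \max_{j \neq i}\; \min_{g \in A_j} v_i(g) \;\ge\; \frac{v_i(M)}{n} \qquad \text{for every agent } i .
\]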
A set of divisible resources becomes available over a sequence of rounds and needs to be allocated immediately and irrevocably. Our goal is to distribute these resources to maximize fairness and efficiency. Achieving any non-trivial guarantees in an adversarial setting is impossible. However, we show that normalizing the agent values, a very common assumption in fair division, allows us to escape this impossibility. Our main result is an online algorithm for the case of two agents that ensures the outcome is fair while guaranteeing 91.6% of the optimal social welfare. We also show that this is near-optimal: there is no fair algorithm that guarantees more than 93.3% of the optimal social welfare.
We study the problem of allocating divisible resources to agents with different preferences. We analyze a market game known as Trading Post, first considered by Shapley and Shubik, where each agent gets a budget of virtual currency to bid on goods: after bids are placed, goods are allocated to players in proportion to their bids. In this setting, the agents choose their bids strategically, aiming to maximize their utility, and this gives rise to a game. We study the equilibrium allocations of this game, measuring the quality of an allocation via the Nash social welfare, the geometric mean of utilities (a measure of aggregate welfare that respects individual needs). We show that any Nash equilibrium of Trading Post approximates the optimal Nash welfare within a factor of two for all concave valuations, and the mechanism is essentially optimal for Leontief valuations.
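The outcome rule itself is easy to state in code: each good is split in proportion to the bids placed on it, and the quality of the resulting utilities is measured by their geometric mean. The sketch below (names ours) implements just this outcome rule; the strategic equilibrium analysis is, of course, the substance of the paper.
\begin{verbatim}
import numpy as np

def trading_post_allocation(bids):
    # bids: (agents x goods) matrix; each row is how an agent spreads her
    # budget of virtual currency.  Good j is split in proportion to the bids
    # placed on it: x[i, j] = b[i, j] / sum_k b[k, j].
    bids = np.asarray(bids, dtype=float)
    totals = bids.sum(axis=0)
    totals[totals == 0] = 1.0            # goods nobody bids on stay unallocated
    return bids / totals

def nash_social_welfare(utilities):
    # Geometric mean of the agents' utilities.
    u = np.asarray(utilities, dtype=float)
    if np.any(u <= 0):
        return 0.0
    return float(np.exp(np.mean(np.log(u))))
\end{verbatim}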
We revisit the well-studied problem of budget-feasible procurement, where a buyer with a strict budget constraint seeks to acquire services from a group of strategic providers (the sellers). During the last decade, several strategyproof budget-feasible procurement auctions have been proposed, aiming to maximize the value of the buyer, while eliciting each seller's true cost for providing their service. These solutions predominantly take the form of randomized sealed-bid auctions: they ask the sellers to report their private costs and then use randomization to determine which subset of services will be procured and how much each of the chosen providers will be paid, ensuring that the total payment does not exceed the budget. Our main result in this paper is a novel method for designing budget-feasible auctions, leading to solutions that outperform the previously proposed auctions in multiple ways. First, our solutions take the form of descending clock auctions, and thus satisfy a list of properties, such as obvious strategyproofness, group strategyproofness, transparency, and unconditional winner privacy; this makes these auctions much more likely to be used in practice. Second, in contrast to previous results that heavily depend on randomization, our auctions are deterministic. As a result, we provide an affirmative answer to one of the main open questions in this literature, asking whether a deterministic strategyproof auction can achieve a constant approximation when the buyer's valuation function is submodular over the set of services. In addition, we also provide the first deterministic budget-feasible auction that matches the approximation bound of the best-known randomized auction for the class of subadditive valuations. Finally, using our method, we improve the best-known approximation factor for monotone submodular valuations, which has been the focus of most of the prior work.
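To convey only the auction format (the price-update rules that drive the paper's guarantees are more involved), here is a generic deterministic descending-clock skeleton: every active seller faces a personal price offer, offers are gradually lowered, sellers whose private cost exceeds their offer drop out, and the survivors are paid their final clock prices, which by construction fit the budget. All names and the particular price-update heuristic below are ours.
\begin{verbatim}
def descending_clock_auction(costs, values, budget, step=0.95):
    # costs[i]: seller i's private cost; values[i]: buyer's value for i.
    # Sellers behave truthfully by staying while their offer covers their cost.
    n = len(costs)
    price = {i: float(budget) for i in range(n)}   # start with permissive offers
    active = set(range(n))
    while active and sum(price[i] for i in active) > budget:
        # lower the offer of the active seller with the worst value per dollar
        i = min(active, key=lambda j: values[j] / price[j])
        price[i] *= step
        if price[i] < costs[i]:                    # seller i rejects and exits
            active.remove(i)
    return {i: price[i] for i in active}           # winners and their payments
\end{verbatim}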
We study the following metric distortion problem: there are two finite sets of points, V and C, that lie in the same metric space, and our goal is to choose a point in C whose total distance from the points in V is as small as possible. However, rather than having access to the underlying distance metric, we only know, for each point in V, a ranking of its distances to the points in C. We propose algorithms that choose a point in C using only these rankings as input and we provide bounds on their distortion (worst-case approximation ratio). A prominent motivation for this problem comes from voting theory, where V represents a set of voters, C represents a set of candidates, and the rankings correspond to ordinal preferences of the voters. A major conjecture in this framework is that the optimal deterministic algorithm has distortion 3. We resolve this conjecture by providing a polynomial-time algorithm that achieves distortion 3, matching a known lower bound. We do so by proving a novel lemma about matching voters to candidates, which we refer to as the ranking-matching lemma. This lemma induces a family of novel algorithms, which may be of independent interest, and we show that a special algorithm in this family achieves distortion 3. We also provide more refined, parameterized, bounds using the notion of decisiveness, which quantifies the extent to which a voter may prefer her top choice relative to all others. Finally, we introduce a new randomized algorithm with improved distortion compared to known results, and also provide improved lower bounds on the distortion of all deterministic and randomized algorithms.
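In this notation, the distortion of a rule $f$ that maps a profile of rankings $\succ$ to a point in C is the worst-case ratio, over all metrics consistent with the reported rankings, between the cost of the chosen point and the optimal cost:
\[
\mathrm{dist}(f) \;=\; \sup_{\succ,\ d \text{ consistent with } \succ}\ \frac{\sum_{v \in V} d\bigl(v, f(\succ)\bigr)}{\min_{c \in C} \sum_{v \in V} d(v, c)} ,
\]
and the main result above exhibits a polynomial-time deterministic $f$ with distortion 3.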
We consider the problem of allocating a set of divisible goods to $N$ agents in an online manner over $T$ periods, with adversarially chosen normalized valuations in each period. Our goal is to maximize the Nash social welfare, a widely studied objective which provides a balance between fairness and efficiency. On the positive side, we provide an online algorithm that achieves a competitive ratio of $O(\log N)$ and $O(\log T)$, but also a stronger competitive ratio of $O(\log k)$ in settings where the value of any agent for her most preferred item is no more than $k$ times her average value. We complement this by showing this bound is essentially tight: no online algorithm can achieve a competitive ratio of $O(\log^{1-\epsilon} N)$ or $O(\log^{1-\epsilon} T)$ for any constant $\epsilon>0$.
We study the extent to which decentralized cost-sharing protocols can achieve good price of anarchy (PoA) bounds in network cost-sharing games with $n$ agents. We focus on the model of resource-aware protocols, where the designer has prior access to the network structure and can also increase the total cost of an edge (overcharging), and we study classes of games with concave or convex cost functions. We first consider concave cost functions and our main result is a cost-sharing protocol for symmetric games on directed acyclic graphs that achieves a PoA of $2+\varepsilon$ for an arbitrarily small positive $\varepsilon$, which improves to $1+\varepsilon$ for games with at least two players. We also achieve a PoA of 1 for series-parallel graphs and show that no protocol can achieve a PoA better than $\Omega(\sqrt{n})$ for multicast games. We then also consider convex cost functions and prove analogous results for series-parallel networks and multicast games, as well as a lower bound of $\Omega(n)$ for the PoA on directed acyclic graphs without the use of overcharging.
Under the pari-mutuel system of betting on horse races, the track's final odds are in some sense a consensus of the 'subjective odds' of the individual bettors, weighted by the amounts of their bets. We formulate the properties that this consensus must possess and prove that there always exists a unique set of odds having the required properties.
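A tiny worked example (ignoring the track take by default; names ours) shows how the pool of bets induces the final odds and the implied consensus probabilities.
\begin{verbatim}
def parimutuel_odds(bets, takeout=0.0):
    # bets: {horse: total amount wagered on that horse}.
    # The payout odds on each horse are determined by the share of the
    # (post-takeout) pool wagered on it, so the implied win probabilities are
    # a bet-weighted consensus of the bettors' beliefs.  Toy illustration.
    total = sum(bets.values())
    pool = total * (1.0 - takeout)
    odds = {h: pool / amount - 1.0 for h, amount in bets.items()}
    implied_prob = {h: amount / total for h, amount in bets.items()}
    return odds, implied_prob

# Example: $600 of a $1000 pool is bet on horse A, so A pays 2/3-to-1 and
# carries implied win probability 0.6.
print(parimutuel_odds({"A": 600, "B": 300, "C": 100}))
\end{verbatim}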
The fair division of resources among strategic agents is an important age-old problem that has led to a rich body of literature. At the center of this literature lies the question of whether there exist mechanisms that can implement fair outcomes, despite the agents' strategic behavior. A fundamental objective function used for measuring the fairness of an allocation is the geometric mean of the agents' values, known as the Nash social welfare (NSW). This objective function is maximized by widely known solution concepts such as Nash bargaining and the competitive equilibrium with equal incomes.
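In symbols, for an allocation $A = (A_1, \ldots, A_n)$ the objective is
\[
\mathrm{NSW}(A) \;=\; \Bigl(\prod_{i=1}^{n} v_i(A_i)\Bigr)^{1/n},
\]
the geometric mean of the agents' values.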
We study the problem of allocating a set of indivisible goods among a set of agents in a fair and efficient manner. An allocation is said to be fair if it is envy-free up to one good (EF1), which means that each agent prefers its own bundle over the bundle of any other agent up to the removal of one good. In addition, an allocation is deemed efficient if it satisfies Pareto efficiency. While each of these well-studied properties is easy to achieve separately, achieving them together is far from obvious. Recently, Caragiannis et al. (2016) established the surprising result that when agents have additive valuations for the goods, there always exists an allocation that simultaneously satisfies these two seemingly incompatible properties. Specifically, they showed that an allocation that maximizes the Nash social welfare objective is both EF1 and Pareto efficient. However, the problem of maximizing Nash social welfare is NP-hard. As a result, this approach does not provide an efficient algorithm for finding a fair and efficient allocation. In this paper, we bypass this barrier, and develop a pseudopolynomial time algorithm for finding allocations that are EF1 and Pareto efficient; in particular, when the valuations are bounded, our algorithm finds such an allocation in polynomial time. Furthermore, we establish a stronger existence result compared to Caragiannis et al. (2016): For additive valuations, there always exists an allocation that is EF1 and fractionally Pareto efficient. Another key contribution of our work is to show that our algorithm provides a polynomial-time 1.45-approximation to the Nash social welfare objective. This improves upon the best known approximation ratio for this problem (namely, the 2-approximation algorithm of Cole et al., 2017), and also matches the lower bound on the integrality gap of the convex program of Cole et al. (2017). Unlike many of the existing approaches, our algorithm is completely combinatorial, and relies on constructing integral Fisher markets wherein specific equilibria are not only efficient, but also fair.
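Formally, an allocation $(A_1, \ldots, A_n)$ is EF1 if for every pair of agents $i, j$ with $A_j \neq \emptyset$ there exists a good $g \in A_j$ such that
\[
v_i(A_i) \;\ge\; v_i\bigl(A_j \setminus \{g\}\bigr).
\]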
A set of $T$ indivisible goods has to be allocated to a set of $n$ agents with additive utilities, in a way that is fair and efficient. A standard fairness concept is envy-freeness, which requires that each agent prefers her own allocation over the allocation of any other agent. Even though envy is clearly unavoidable in this context - consider the case of a single indivisible good and two agents - providing approximately envy-free solutions is possible [3, 6]. Specifically, an allocation is envy-free up to one item (EF1) if for every pair of agents $i$ and $j$, any envy $i$ has for $j$ can be eliminated by removing at most one good from $j$'s bundle. Recently, Caragiannis et al. [3] showed that the allocation that maximizes the product of the agents' utilities (with ties broken based on the number of agents with positive utility) is EF1 and Pareto efficient. The majority of the literature to date has focused on the case where the items are available to the algorithm upfront. In many situations of interest, however, items arrive online. A paradigmatic example is that of food banks [1, 5]. Food banks across the world receive food donations they must allocate; these donations are often perishable, and thus allocation decisions must be made quickly, and donations are typically leftovers, leading to uncertainty about items that will arrive in the future. Benadè et al. [2] study this problem, but focus only on fairness. They show that there exists a deterministic algorithm with vanishing envy, that is, the maximum pairwise envy (after all $T$ items have been allocated) is sublinear in $T$, when the value $v_{i,t}$ of agent $i$ for the $t$-th item is normalized to be in $[0, 1]$. Specifically, the envy is guaranteed to be at most $O(\sqrt{T \log T / n})$, and this guarantee is tight up to polylogarithmic factors. The same guarantee can also be achieved by the simple randomized algorithm that allocates each item to a uniformly random agent. These results hold even against an adaptive adversary that selects the value $v_{i,t}$ after seeing the allocation of the first $t-1$ items. On the other hand, if we focus only on efficiency, our task is much easier. For example, we could simply allocate each item to the agent with the highest value. But, and this brings us to our interest here, the question remains: How should we make allocation decisions online in a way that is fair to the donation recipients, but also as efficient as possible?
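The simple randomized baseline mentioned above is a one-liner; the sketch below (names ours) just routes each arriving item to a uniformly random agent, which yields vanishing envy but deliberately ignores efficiency.
\begin{verbatim}
import random

def allocate_uniformly(stream, n, rng=None):
    # Allocate each arriving item to an agent chosen uniformly at random.
    # With values normalized to [0, 1], the maximum pairwise envy after T
    # items is sublinear in T, as discussed above; efficiency is ignored.
    rng = rng or random.Random()
    bundles = [[] for _ in range(n)]
    for t, item in enumerate(stream):
        bundles[rng.randrange(n)].append((t, item))
    return bundles
\end{verbatim}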
We generalize the classic problem of fairly allocating indivisible goods to the problem of fair public decision making, in which a decision must be made on several social issues simultaneously, and, unlike the classic setting, a decision can provide positive utility to multiple players. We extend the popular fairness notion of proportionality (which is not guaranteeable) to our more general setting, and introduce three novel relaxations --- proportionality up to one issue, round robin share, and pessimistic proportional share --- that are also interesting in the classic goods allocation setting. We show that the Maximum Nash Welfare solution, which is known to satisfy appealing fairness properties in the classic setting, satisfies or approximates all three relaxations in our framework. We also provide polynomial time algorithms and hardness results for finding allocations satisfying these axioms, with or without insisting on Pareto optimality.
In cost-sharing games with delays, a set of agents jointly uses a subset of resources. Each resource has a fixed cost that has to be shared by the players, and each agent has a nonshareable player-specific delay for each resource. A separable cost-sharing protocol determines cost shares that are budget-balanced, separable, and guarantee existence of pure Nash equilibria (PNE). We provide black-box reductions that reduce the design of such a protocol to the design of an approximation algorithm for the underlying cost-minimization problem. In this way, we obtain separable cost-sharing protocols in matroid games, single-source connection games, and connection games on n-series-parallel graphs. All these reductions are efficiently computable - given an initial allocation profile, we obtain a cheaper profile and separable cost shares turning the profile into a PNE. Hence, in these domains, any approximation algorithm yields a separable cost-sharing protocol with price of stability bounded by the approximation factor.
Fair division of indivisible goods is a very well-studied problem. The goal of this problem is to distribute $m$ goods to $n$ agents in a “fair” manner, where every agent has a valuation for each subset of goods. We assume general valuations. Envy-freeness is the most extensively studied notion of fairness. However, envy-free allocations do not always exist when goods are indivisible. The notion of fairness we consider here is “envy-freeness up to any good” (EFX), where no agent envies another agent after the removal of any single good from the other agent's bundle. It is not known if such an allocation always exists even when $n = 3$. We show there is always a partition of the set of goods into $n+1$ subsets $(X_1, \ldots, X_n, P)$ where for $i \in [n]$, $X_i$ is the bundle allocated to agent $i$ and the set $P$ is unallocated (or donated to charity) such that we have: (1) envy-freeness up to any good, (2) no agent values $P$ higher than her own bundle, and (3) fewer than $n$ goods go to charity, i.e., $|P| < n$ (typically $m \gg n$). Our proof is constructive. When agents have additive valuations and $|P|$ is large (i.e., when $|P|$ is close to $n$), our allocation also has a good maximin share (MMS) guarantee. Moreover, a minor variant of our algorithm also shows the existence of an allocation which is 4/7 groupwise maximin share (GMMS): this is a notion of fairness stronger than MMS. This improves upon the current best bound of 1/2 known for an approximate GMMS allocation.
We consider the problem of designing network cost-sharing protocols with good equilibria under uncertainty. The underlying game is a multicast game in a rooted undirected graph with nonnegative edge costs. A set of $k$ terminal vertices or players needs to establish connectivity with the root. The social optimum is the minimum Steiner tree. We study situations where the designer has incomplete information about the input. We propose two different models, the adversarial and the stochastic. In both models, the designer has prior knowledge of the underlying graph metric, but the requested subset of the players is not known and is activated either in an adversarial manner (adversarial model) or is drawn from a known probability distribution (stochastic model). In the adversarial model, the goal of the designer is to choose a single, universal cost-sharing protocol that has low Price of Anarchy (PoA) for all possible requested subsets of players. The main question we address is, to what extent can prior knowledge of the underlying graph metric help in the design? We first demonstrate that there exist classes of graphs where knowledge of the underlying graph metric can dramatically improve the performance of good network cost-sharing design. For outerplanar graph metrics, we provide a universal cost-sharing protocol with constant PoA, in contrast to protocols that, by ignoring the graph metric, cannot achieve PoA better than $\Omega(\log k)$. Then, in our main technical result, we show that there exist graph metrics for which knowing the underlying graph metric does not help and any universal protocol has PoA of $\Omega(\log k)$, which is tight. We attack this problem by developing new techniques that employ powerful tools from extremal combinatorics, and more specifically Ramsey theory in high-dimensional hypercubes. Then we switch to the stochastic model, where the players are activated according to some probability distribution that is known to the designer. We show that there exists a randomized ordered protocol that achieves constant PoA. If, further, each player is activated independently with some probability, by using standard derandomization techniques, we produce a deterministic ordered protocol that achieves constant PoA. We remark that the first result holds also for the black-box model, where the probabilities are not known to the designer, but she is allowed to draw independent (polynomially many) samples.
We consider the problem of fairly allocating indivisible public goods. We model the public goods as elements with feasibility constraints on what subsets of elements can be chosen, and assume that agents have additive utilities across elements. Our model generalizes existing frameworks such as fair public decision making and participatory budgeting. We study a groupwise fairness notion called the core, which generalizes well-studied notions of proportionality and Pareto efficiency, and requires that each subset of agents must receive an outcome that is fair relative to its size. In contrast to the case of divisible public goods (where fractional allocations are permitted), the core is not guaranteed to exist when allocating indivisible public goods. Our primary contributions are the notion of an additive approximation to the core (with a tiny multiplicative loss), and polynomial time algorithms that achieve a small additive approximation, where the additive factor is relative to the largest utility of an agent for an element. If the feasibility constraints define a matroid, we show an additive approximation of 2. A similar approach yields a constant additive bound when the feasibility constraints define a matching. For feasibility constraints defining an arbitrary packing polytope with mild restrictions, we show an additive guarantee that is logarithmic in the width of the polytope. Our algorithms are based on the convex program for maximizing the Nash social welfare, but differ significantly from previous work in how it is used. As far as we are aware, our work is the first to approximate the core in indivisible settings.
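For reference, the fractional (Eisenberg-Gale style) version of the Nash social welfare program referred to above has the form
\[
\max_{x}\ \sum_{i=1}^{n} \log\Bigl(\sum_{e} u_{i,e}\, x_e\Bigr)
\quad \text{s.t.}\quad x \in \mathcal{P},\ \ 0 \le x_e \le 1 \ \ \text{for all } e,
\]
where $\mathcal{P}$ encodes the feasibility constraints and $u_{i,e}$ is agent $i$'s utility for element $e$; how this program is rounded and used in the indivisible setting is, of course, the delicate part of the paper.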
In this paper, we study the metric distortion of deterministic social choice rules that choose a winning candidate from a set of candidates based on voter preferences. Voters and candidates are located in an underlying metric space. A voter has cost equal to her distance to the winning candidate. Ordinal social choice rules only have access to the ordinal preferences of the voters that are assumed to be consistent with the metric distances. Our goal is to design an ordinal social choice rule with minimum distortion, which is the worst-case ratio, over all consistent metrics, between the social cost of the rule and that of the optimal omniscient rule with knowledge of the underlying metric space. The distortion of the best deterministic social choice rule was known to be between 3 and 5. It had been conjectured that any rule that only looks at the weighted tournament graph on the candidates cannot have distortion better than 5. In our paper, we disprove it by presenting a weighted tournament rule with distortion of 4.236. We design this rule by generalizing the classic notion of uncovered sets, and further show that this class of rules cannot have distortion better than 4.236. We then propose a new voting rule, via an alternative generalization of uncovered sets. We show that if a candidate satisfying the criterion of this voting rule exists, then choosing such a candidate yields a distortion bound of 3, matching the lower bound. We present a combinatorial conjecture that implies distortion of $3$, and verify it for small numbers of candidates and voters by computer experiments. Using our framework, we also show that selecting any candidate guarantees distortion of at most 3 when the weighted tournament graph is cyclically symmetric.
In distortion-based analysis of social choice rules over metric spaces, voters and candidates are jointly embedded in a metric space. Voters rank candidates by non-decreasing distance. The mechanism, receiving only this ordinal (comparison) information, must select a candidate approximately minimizing the sum of distances from all voters to the chosen candidate. It is known that while the Copeland rule and related rules guarantee distortion at most 5, the distortion of many other standard voting rules, such as Plurality, Veto, or k-approval, grows unboundedly in the number $n$ of candidates. An advantage of Plurality, Veto, or k-approval with small $k$ is that they require less communication from the voters; all deterministic social choice rules known to achieve constant distortion require voters to transmit their complete rankings of all candidates. This motivates our study of the tradeoff between the distortion and the amount of communication in deterministic social choice rules. We show that any one-round deterministic voting mechanism in which each voter communicates only the candidates she ranks in a given set of $k$ positions must have distortion at least $\frac{2n-k}{k}$; we give a mechanism achieving an upper bound of $O(n/k)$, which matches the lower bound up to a constant. For more general communication-bounded voting mechanisms, in which each voter communicates $b$ bits of information about her ranking, we show a slightly weaker lower bound of $\Omega(n/b)$ on the distortion. For randomized mechanisms, Random Dictatorship achieves expected distortion strictly smaller than 3, almost matching a lower bound of $3 - 2/n$ for any randomized mechanism that only receives each voter's top choice. We close this gap by giving a simple randomized social choice rule which only uses each voter's first choice, and achieves expected distortion $3 - 2/n$.
We consider the problem of conducting a survey with the goal of obtaining an unbiased estimator of some population statistic when individuals have unknown costs (drawn from a known prior) for participating in the survey. Individuals must be compensated for their participation and are strategic agents, and so the payment scheme must incentivize truthful behavior. We derive optimal truthful mechanisms for this problem for the two goals of minimizing the variance of the estimator given a fixed budget, and minimizing the expected cost of the survey given a fixed variance goal.
Central results in economics guarantee the existence of efficient equilibria for various classes of markets. An underlying assumption in early work is that agents are price-takers, i.e., agents honestly report their true demand in response to prices. A line of research in economics, initiated by Hurwicz (1972), is devoted to understanding how such markets perform when agents are strategic about their demands. This is captured by the Walrasian Mechanism, which proceeds by collecting reported demands, finding clearing prices in the reported market via an ascending price tatonnement procedure, and returning the resulting allocation. Similar mechanisms are used, for example, in the daily opening of the New York Stock Exchange and the call market for copper and gold in London.
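A bare-bones sketch of a tatonnement price-adjustment loop is given below (the textbook two-sided variant, with names and the step rule chosen by us purely for illustration): prices of over-demanded goods are raised and prices of under-demanded goods are lowered until the reported market approximately clears.
\begin{verbatim}
import numpy as np

def tatonnement(demand_fn, supply, prices, eta=0.05, tol=1e-6, max_iters=10000):
    # demand_fn(prices): aggregate reported demand per good at the given prices.
    # supply: available quantity of each good.  Prices move in the direction
    # of excess demand until the market (approximately) clears.
    p = np.asarray(prices, dtype=float)
    supply = np.asarray(supply, dtype=float)
    for _ in range(max_iters):
        excess = np.asarray(demand_fn(p), dtype=float) - supply
        if np.max(np.abs(excess)) < tol:
            break
        p = np.maximum(1e-9, p + eta * excess)   # keep prices positive
    return p
\end{verbatim}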
In this article, we will formalize the method of dual fitting and the idea of factor-revealing LP. This combination is used to design and analyze two greedy algorithms for the metric uncapacitated facility location problem. Their approximation factors are 1.861 and 1.61, with running times of $O(m \log m)$ and $O(n^3)$, respectively, where $n$ is the total number of vertices and $m$ is the number of edges in the underlying complete bipartite graph between cities and facilities.
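The flavor of these greedy algorithms can be conveyed by the following simplified "most cost-effective star" sketch (names ours); the variants that achieve the 1.861 and 1.61 factors are refinements of this rule, for instance allowing already-connected clients to switch to newly opened facilities.
\begin{verbatim}
def greedy_facility_location(open_cost, dist):
    # open_cost[f]: opening cost of facility f; dist[f][c]: distance from f
    # to client c.  Repeatedly open the star (facility + set of unconnected
    # clients) minimizing (opening cost + connection costs) / #clients.
    n_clients = len(dist[0])
    unconnected = set(range(n_clients))
    opened, assignment, total = set(), {}, 0.0
    while unconnected:
        best = None  # (ratio, facility, clients, star_cost)
        for f, f_cost in enumerate(open_cost):
            by_dist = sorted(unconnected, key=lambda c: dist[f][c])
            cost = 0.0 if f in opened else f_cost   # reopened facilities are free
            star = []
            for c in by_dist:
                cost += dist[f][c]
                star.append(c)
                ratio = cost / len(star)
                if best is None or ratio < best[0]:
                    best = (ratio, f, list(star), cost)
        _, f, star, star_cost = best
        opened.add(f)
        for c in star:
            assignment[c] = f
        unconnected -= set(star)
        total += star_cost
    return opened, assignment, total
\end{verbatim}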
We consider the well-studied cake cutting problem in which the goal is to identify an envy-free allocation based on a minimal number of queries from the agents. The problem has attracted considerable attention within various branches of computer science, mathematics, and economics. Although the elegant Selfridge-Conway envy-free protocol for three agents has been known since 1960, it has been a major open problem to obtain a bounded envy-free protocol for more than three agents. The problem has been termed the central open problem in cake cutting. We solve this problem by proposing a discrete and bounded envy-free protocol for four agents.
In this paper, we consider the following dynamic fair allocation problem: Given a sequence of job arrivals and departures, the goal is to maintain an approximately fair allocation of the resource against a target fair allocation policy, while minimizing the total number of disruptions, which is the number of times the allocation of any job is changed. We consider a rich class of fair allocation policies that significantly generalize those considered in previous work.
In the committee selection problem, we are given $m$ candidates and $n$ voters. Candidates can have different weights. A committee is a subset of candidates, and its weight is the sum of the weights of its candidates. Each voter expresses an ordinal ranking over all possible committees. The only assumption we make on preferences is monotonicity: if $S \subseteq S'$ are two committees, then any voter weakly prefers $S'$ to $S$.
We consider the problem of designing distribution rules to share “welfare” (cost or revenue) among individually strategic agents. There are many known distribution rules that guarantee the existence of a (pure) Nash equilibrium in this setting, e.g., the Shapley value and its weighted variants; however, a characterization of the space of distribution rules that guarantee the existence of a Nash equilibrium is unknown. Our work provides an exact characterization of this space for a specific class of scalable and separable games that includes a variety of applications such as facility location, routing, network formation, and coverage games. Given arbitrary local welfare functions 𝕨, we prove that a distribution rule guarantees equilibrium existence for all games (i.e., all possible sets of resources, agent action sets, etc.) if and only if it is equivalent to a generalized weighted Shapley value on some “ground” welfare functions 𝕨′, which can be distinct from 𝕨. However, if budget-balance is required in addition to the existence of a Nash equilibrium, then 𝕨′ must be the same as 𝕨. We also provide an alternate characterization of this space in terms of “generalized” marginal contributions, which is more appealing from the point of view of computational tractability. A possibly surprising consequence of our result is that, in order to guarantee equilibrium existence in all games with any fixed local welfare functions, it is necessary to work within the class of potential games.
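For reference, the (unweighted) Shapley value shares the welfare $w(S)$ generated by a set $S$ of agents at a resource according to expected marginal contributions:
\[
\phi_i(w, S) \;=\; \sum_{R \subseteq S \setminus \{i\}} \frac{|R|!\,\bigl(|S|-|R|-1\bigr)!}{|S|!}\;\Bigl(w\bigl(R \cup \{i\}\bigr) - w(R)\Bigr);
\]
the generalized weighted variants appearing in the characterization above additionally bias these marginal contributions using agent weights and an ordered partition of the agents.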
We consider a monopolist seller with $n$ heterogeneous items, facing a single buyer. The buyer has a value for each item drawn independently according to (non-identical) distributions, and her value for a set of items is additive. The seller aims to maximize his revenue. We suggest using the a priori better of two simple pricing methods: selling the items separately, each at its optimal price, and bundling together, in which the entire set of items is sold as one bundle at its optimal price. We show that for any distribution, this mechanism achieves a constant-factor approximation to the optimal revenue. Beyond its simplicity, this is the first computationally tractable mechanism to obtain a constant-factor approximation for this multi-parameter problem. We additionally discuss extensions to multiple buyers and to valuations that are correlated across items.
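The proposed mechanism is simple enough to evaluate by brute force for small discrete value distributions: price each item separately at its optimal posted price, price the grand bundle at its optimal posted price, and offer whichever earns more in expectation. The sketch below (names ours) does exactly this for independent items given as lists of (value, probability) pairs.
\begin{verbatim}
from itertools import product

def best_price(dist):
    # Optimal take-it-or-leave-it price for one value distribution,
    # given as a list of (value, probability) pairs.
    supports = sorted({v for v, _ in dist})
    return max(p * sum(q for v, q in dist if v >= p) for p in supports)

def srev(item_dists):
    # Revenue from pricing every item separately at its optimal price.
    return sum(best_price(d) for d in item_dists)

def brev(item_dists):
    # Revenue from selling the grand bundle at its optimal price; with
    # additive, independent values the bundle distribution is the
    # convolution of the item distributions (enumerated exactly here).
    bundle = {}
    for combo in product(*item_dists):
        total_v = sum(v for v, _ in combo)
        total_q = 1.0
        for _, q in combo:
            total_q *= q
        bundle[total_v] = bundle.get(total_v, 0.0) + total_q
    return best_price(list(bundle.items()))

# The mechanism simply offers whichever of the two is better:
items = [[(0, 0.5), (1, 0.5)], [(0, 0.5), (2, 0.5)]]
print(max(srev(items), brev(items)))
\end{verbatim}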
Ranking algorithms are deployed widely to order a set of items in applications such as search engines, news feeds, and recommendation systems. Recent studies, however, have shown that, left unchecked, the output of ranking algorithms can result in decreased diversity in the type of content presented, promote stereotypes, and polarize opinions. In order to address such issues, we study the following variant of the traditional ranking problem when, in addition, there are fairness or diversity constraints. Given a collection of items along with 1) the value of placing an item in a particular position in the ranking, 2) the collection of sensitive attributes (such as gender, race, political opinion) of each item and 3) a collection of fairness constraints that, for each k, bound the number of items with each attribute that are allowed to appear in the top k positions of the ranking, the goal is to output a ranking that maximizes the value with respect to the original rank quality metric while respecting the constraints. This problem encapsulates various well-studied problems related to bipartite and hypergraph matching as special cases and turns out to be hard to approximate even with simple constraints. Our main technical contributions are fast exact and approximation algorithms along with complementary hardness results that, together, come close to settling the approximability of this constrained ranking maximization problem. Unlike prior work on the approximability of constrained matching problems, our algorithm runs in linear time, even when the number of constraints is (polynomially) large, its approximation ratio does not depend on the number of constraints, and it produces solutions with small constraint violations. Our results rely on insights about the constrained matching problem when the objective function satisfies certain properties that appear in common ranking metrics such as discounted cumulative gain (DCG), Spearman's rho or Bradley-Terry, along with the nested structure of fairness constraints.
We consider the classic Facility Location problem on planar graphs (non-uniform, uncapacitated). Given an edge-weighted planar graph G, a set of clients C ⊆ V(G), a set of facilities F ⊆ V(G), and opening costs open: F → R_{≥0}, the goal is to find a subset D of F that minimizes Σ_{c∈C} min_{f∈D} dist(c,f) + Σ_{f∈D} open(f). The Facility Location problem remains one of the most classic and fundamental optimization problems for which it is not known whether it admits a polynomial-time approximation scheme (PTAS) on planar graphs, despite significant effort towards obtaining one. We solve this open problem by giving an algorithm that, for any ε > 0, computes a solution of cost at most (1+ε) times the optimum in time n^(2^(O(ε^{-2} log(1/ε)))).
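For very small instances the objective above can be evaluated by brute force over facility subsets, as in the following sketch; the distance dictionary stands in for precomputed shortest-path distances and the encoding is an assumption made only for illustration.

# Minimal sketch: evaluate the Facility Location objective exactly for tiny
# instances by brute force over facility subsets. Distances are assumed to be
# given (e.g. precomputed shortest paths); the dict-based encoding is ours.
from itertools import combinations

def cost(D, clients, dist, open_cost):
    if not D:
        return float('inf')
    connection = sum(min(dist[c][f] for f in D) for c in clients)
    opening = sum(open_cost[f] for f in D)
    return connection + opening

def brute_force_fl(clients, facilities, dist, open_cost):
    best = (float('inf'), None)
    for r in range(1, len(facilities) + 1):
        for D in combinations(facilities, r):
            best = min(best, (cost(D, clients, dist, open_cost), D))
    return best

# Example with 2 clients and 2 candidate facilities.
dist = {'c1': {'f1': 1, 'f2': 4}, 'c2': {'f1': 3, 'f2': 1}}
print(brute_force_fl(['c1', 'c2'], ['f1', 'f2'], dist, {'f1': 2, 'f2': 2}))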
We consider a monopolist seller facing a single buyer with additive valuations over n heterogeneous, independent items. It is known that in this important setting optimal mechanisms may require randomization [12], use menus of infinite size [9], and may be computationally intractable [8]. This has sparked recent interest in finding simple mechanisms that obtain reasonable approximations to the optimal revenue [10, 15, 3]. In this work we attempt to find the optimal simple mechanism.
We consider K-Facility Location games, where n strategic agents report their locations in a metric space and a mechanism maps them to K facilities. The agents seek to minimize their connection cost, namely the distance of their true location to the nearest facility, and may misreport their location. We are interested in deterministic mechanisms that are strategyproof, that is, ensure that no agent can benefit from misreporting her location, do not resort to monetary transfers, and achieve a bounded approximation ratio to the total connection cost of the agents (or to the L_p norm of the connection costs, for some p ∈ [1, ∞) or for p = ∞). Our main result is an elegant characterization of deterministic strategyproof mechanisms with a bounded approximation ratio for 2-Facility Location on the line. In particular, we show that for instances with n ≥ 5 agents, any such mechanism either admits a unique dictator or always places the facilities at the leftmost and the rightmost location of the instance. As a corollary, we obtain that the best approximation ratio achievable by deterministic strategyproof mechanisms for the problem of locating 2 facilities on the line to minimize the total connection cost is precisely n − 2. Another rather surprising consequence is that the Two-Extremes mechanism of Procaccia and Tennenholtz [2009] is the only deterministic anonymous strategyproof mechanism with a bounded approximation ratio for 2-Facility Location on the line. The proof of the characterization employs several new ideas and technical tools, which provide new insights into the behavior of deterministic strategyproof mechanisms for K-Facility Location games and may be of independent interest. Employing one of these tools, we show that for every K ≥ 3, there do not exist any deterministic anonymous strategyproof mechanisms with a bounded approximation ratio for K-Facility Location on the line, even for simple instances with K + 1 agents. Moreover, building on the characterization for the line, we show that there do not exist any deterministic strategyproof mechanisms with a bounded approximation ratio for 2-Facility Location and instances with n ≥ 3 agents located in a star.
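A minimal sketch of the Two-Extremes idea mentioned above, which places the two facilities at the leftmost and rightmost reported locations on the line; the helper names and the example instance are our own.

# Sketch of the Two-Extremes idea described above: place the two facilities at the
# leftmost and rightmost reported locations on the line, and compute the total
# connection cost.
def two_extremes(reported_locations):
    return min(reported_locations), max(reported_locations)

def total_connection_cost(true_locations, facilities):
    return sum(min(abs(x - f) for f in facilities) for x in true_locations)

reports = [0.0, 1.0, 4.0, 9.0, 10.0]
facilities = two_extremes(reports)          # (0.0, 10.0)
print(facilities, total_connection_cost(reports, facilities))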
We consider a setting where an auctioneer sells a single item to n potential agents with interdependent values. That is, each agent has her own private signal, and the valuation of each agent is a known function of all n private signals. This captures settings such as valuations for oil drilling rights, broadcast rights, pieces of art, and many more.
Approximating Nash Social Welfare under Submodular Valuations through (Un)Matchings. Jugal Garg, Pooja Kulkarni, and Rucha Kulkarni. Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 2673–2687. DOI: 10.1137/1.9781611975994.163. We study the problem of approximating maximum Nash social welfare (NSW) when allocating m indivisible items among n asymmetric agents with submodular valuations. The NSW is a well-established notion of fairness and efficiency, defined as the weighted geometric mean of agents' valuations. For special cases of the problem with symmetric agents and additive(-like) valuation functions, approximation algorithms have been designed using approaches customized for these specific settings, and they fail to extend to more general settings. Hence, no approximation algorithm with a factor independent of m is known either for asymmetric agents with additive valuations or for symmetric agents beyond additive(-like) valuations. In this paper, we extend our understanding of the NSW problem to far more general settings. Our main contribution is two approximation algorithms for asymmetric agents with additive and submodular valuations, respectively. Both algorithms are simple to understand and involve non-trivial modifications of a greedy repeated-matchings approach. Allocations of high-valued items are done separately by un-matching certain items and re-matching them, by processes that differ between the two algorithms. We show that these approaches achieve approximation factors of O(n) and O(n log n) for the additive and submodular cases, respectively, which is independent of the number of items. For additive valuations, our algorithm outputs an allocation that also achieves the fairness property of envy-freeness up to one item (EF1). Furthermore, we show that the NSW problem under submodular valuations is strictly harder than all currently known settings, with a stronger hardness-of-approximation factor, even for constantly many agents. For this case, we provide a different approximation algorithm whose factor matches this hardness bound, hence resolving it completely.
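A minimal sketch of the objective itself, computing the weighted geometric mean of agents' values for a fixed allocation; additive valuations and the list-based encoding are assumptions made only for illustration.

# Minimal sketch: the (weighted) Nash social welfare of an allocation under
# additive valuations, i.e. the weighted geometric mean of agents' values.
import math

def nash_social_welfare(valuations, allocation, weights=None):
    n = len(valuations)
    weights = weights or [1.0 / n] * n          # symmetric agents by default
    log_nsw = 0.0
    for i, v in enumerate(valuations):
        value_i = sum(v[g] for g in allocation[i])
        if value_i == 0:
            return 0.0
        log_nsw += weights[i] * math.log(value_i)
    return math.exp(log_nsw)

# Example: 2 agents, 3 items, equal weights.
print(nash_social_welfare([[2, 1, 0], [0, 1, 3]], [{0}, {1, 2}]))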
We study revenue maximization in settings where agents' values are interdependent: each agent receives a signal drawn from a correlated distribution, and agents' values are functions of all of the signals. We introduce a variant of the generalized VCG auction with reserve prices and random admission, and show that this auction gives a constant approximation to the optimal expected revenue in matroid environments. Our results do not require any assumptions on the signal distributions; however, they require the value functions to satisfy a standard single-crossing property and a concavity-type condition.
In a scheduling game, each player owns a job and chooses a machine to execute it. While the social cost is the maximal load over all machines (makespan), the cost (disutility) of each player is the completion time of its own job. In the game, players may follow selfish strategies to optimize their cost, and therefore their behaviors do not necessarily lead the game to an equilibrium. Even when an equilibrium exists, its makespan might be much larger than the social optimum, and this inefficiency is measured by the price of anarchy -- the worst ratio between the makespan of an equilibrium and the optimum. Coordination mechanisms aim to reduce the price of anarchy by designing scheduling policies that specify how jobs assigned to the same machine are to be scheduled. Typically these policies define the schedule according to the processing times as announced by the jobs. One could wonder if there are policies that do not require this knowledge, and still provide a good price of anarchy. This would allow the processing times to remain private information and avoid the problem of truthfulness. In this paper we study these so-called non-clairvoyant policies. In particular, we study the RANDOM policy that schedules the jobs in a random order without preemption, and the EQUI policy that schedules the jobs in parallel using time-multiplexing, assigning each job an equal fraction of CPU time.
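A minimal sketch of completion times under the EQUI policy on a single machine, treating it as processor sharing in which all unfinished jobs receive an equal CPU share; the function name and encoding are our own.

# Minimal sketch of completion times under the EQUI (processor-sharing) policy
# described above: all unfinished jobs on a machine share the CPU equally.
def equi_completion_times(processing_times):
    jobs = sorted(range(len(processing_times)), key=lambda j: processing_times[j])
    completion, clock, done_work = {}, 0.0, 0.0
    remaining = len(jobs)
    for j in jobs:
        # The shortest remaining job finishes next; time advances by the extra
        # work it needs, multiplied by the number of jobs still sharing the CPU.
        clock += (processing_times[j] - done_work) * remaining
        completion[j] = clock
        done_work = processing_times[j]
        remaining -= 1
    return completion

# Three jobs with processing times 1, 2, 4 sharing one machine.
print(equi_completion_times([1.0, 2.0, 4.0]))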
The goal of fair division is to distribute resources among competing players in a fair way. Envy-freeness is the most extensively studied fairness notion in fair division. Envy-free allocations do not always exist with indivisible goods, motivating the study of relaxed versions of envy-freeness. We study the envy-freeness up to any good (EFX) property, which states that no player prefers the bundle of another player following the removal of any single good, and prove the first general results about this property. We use the leximin solution to show existence of EFX allocations in several contexts, sometimes in conjunction with Pareto optimality. For two players with valuations obeying a mild assumption, one of these results provides stronger guarantees than the currently deployed algorithm on Spliddit, a popular fair division website. Unfortunately, finding the leximin solution can require exponential time. We show that this is necessary by proving an exponential lower bound on the number of value queries needed to identify an EFX allocation, even for two players with identical valuations. We consider both additive and more general valuations, and our work suggests that there is a rich landscape of problems to explore in the fair division of indivisible goods with different classes of player valuations.
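A minimal sketch of an EFX check for additive valuations, following the definition stated above; under additivity it suffices to compare against each other bundle with its least-valued (to the potentially envious player) good removed. The encoding is an assumption for illustration.

# Minimal sketch of an EFX check under additive valuations: no player prefers
# another's bundle even after removing any single good from it (equivalently,
# after removing the good she values least in that bundle).
def is_efx(valuations, allocation):
    n = len(valuations)
    for i, v in enumerate(valuations):
        own = sum(v[g] for g in allocation[i])
        for j in range(n):
            if j == i or not allocation[j]:
                continue
            other = sum(v[g] for g in allocation[j])
            cheapest = min(v[g] for g in allocation[j])
            if own < other - cheapest - 1e-9:
                return False
    return True

# Example: 2 agents, 3 items.
print(is_efx([[1, 2, 3], [3, 2, 1]], [{2}, {0, 1}]))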
We determine the quality of randomized social choice algorithms in a setting in which the agents have metric preferences: every agent has a cost for each alternative, and these costs form a metric. We assume that these costs are unknown to the algorithms (and possibly even to the agents themselves), which means we cannot simply select the optimal alternative, i.e. the alternative that minimizes the total agent cost (or median agent cost). However, we do assume that the agents know their ordinal preferences that are induced by the metric space. We examine randomized social choice functions that require only this ordinal information and select an alternative that is good in expectation with respect to the costs from the metric. To quantify how good a randomized social choice function is, we bound the distortion, which is the worst-case ratio between the expected cost of the alternative selected and the cost of the optimal alternative. We provide new distortion bounds for a variety of randomized algorithms, for both general metrics and for important special cases. Our results show a sizable improvement in distortion over deterministic algorithms.
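As a concrete illustration of the ratio behind the distortion measure on a single instance (the actual distortion is a worst case over all metrics consistent with the ordinal information), the following sketch uses random dictatorship as an example randomized rule; the rule choice and encoding are our own.

# Illustrative sketch: the cost ratio of a randomized rule on one fixed cost
# profile. Random dictatorship is used here only as an example rule.
def expected_cost(costs, lottery):
    # costs[voter][alt]; lottery[alt]: probability that alt is chosen.
    return sum(p * sum(c[a] for c in costs) for a, p in lottery.items())

def random_dictatorship(costs):
    n = len(costs)
    lottery = {}
    for c in costs:
        favorite = min(range(len(c)), key=lambda a: c[a])
        lottery[favorite] = lottery.get(favorite, 0) + 1 / n
    return lottery

costs = [[0, 2], [0, 2], [2, 0]]            # 3 voters, 2 alternatives on a line
lottery = random_dictatorship(costs)
optimum = min(sum(c[a] for c in costs) for a in range(2))
print(expected_cost(costs, lottery) / optimum)  # ratio on this instance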
Distortion-based analysis has established itself as a fruitful framework for comparing voting mechanisms. A set of m voters and n candidates are jointly embedded in an (unknown) metric space, and the voters submit rankings of candidates by non-decreasing distance from themselves. Based on the submitted rankings, the social choice rule chooses a winning candidate; the quality of the winner is the sum of the (unknown) distances to the voters. The rule's choice will in general be suboptimal, and the worst-case ratio between the cost of its chosen candidate and the cost of the optimal candidate is called the rule's distortion. It was shown in prior work that every deterministic rule has distortion at least 3, while the Copeland rule and related rules guarantee distortion at most 5; a very recent result gave a rule with distortion 2 + √5 ≈ 4.236. We provide a framework based on LP-duality and flow interpretations of the dual which provides a simpler and more unified way for proving upper bounds on the distortion of social choice rules. We illustrate the utility of this approach with three examples. First, we show that the Ranked Pairs and Schulze rules have distortion Θ(√n). Second, we give a fairly simple proof of a strong generalization of the upper bound of 5 on the distortion of Copeland, to social choice rules with short paths from the winning candidate to the optimal candidate in generalized weak preference graphs. A special case of this result recovers the recent 2 + √5 guarantee. Finally, our framework naturally suggests a combinatorial rule that is a strong candidate for achieving distortion 3, which had also been proposed in recent work. We prove that the distortion bound of 3 would follow from any of three combinatorial conjectures we formulate.
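For reference, a minimal sketch of the Copeland rule mentioned above, which picks a candidate with the most pairwise majority victories (ties counted as half); the ballot encoding is an assumption for illustration.

# Minimal sketch of the Copeland rule: the winner maximizes the number of
# pairwise majority wins, with ties contributing half a point.
def copeland_winner(ballots, num_candidates):
    # ballots: each a ranking of candidate indices, most preferred first.
    def prefers(ballot, a, b):
        return ballot.index(a) < ballot.index(b)
    score = [0.0] * num_candidates
    for a in range(num_candidates):
        for b in range(num_candidates):
            if a == b:
                continue
            wins = sum(prefers(ballot, a, b) for ballot in ballots)
            if 2 * wins > len(ballots):
                score[a] += 1
            elif 2 * wins == len(ballots):
                score[a] += 0.5
    return max(range(num_candidates), key=lambda c: score[c]), score

print(copeland_winner([[0, 1, 2], [1, 2, 0], [0, 2, 1]], 3))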
In this work, we define and solve the Fair Top-k Ranking problem, in which we want to determine a subset of k candidates from a large pool of n >> k candidates, maximizing utility (i.e., selecting the "best" candidates) subject to group fairness criteria. Our ranked group fairness definition extends group fairness using the standard notion of protected groups and is based on ensuring that the proportion of protected candidates in every prefix of the top-k ranking remains statistically above or indistinguishable from a given minimum. Utility is operationalized in two ways: (i) every candidate included in the top-k should be more qualified than every candidate not included; and (ii) for every pair of candidates in the top-k, the more qualified candidate should be ranked above. An efficient algorithm is presented for producing the Fair Top-k Ranking, and tested experimentally on existing datasets as well as new datasets released with this paper, showing that our approach yields small distortions with respect to rankings that maximize utility without considering fairness criteria. To the best of our knowledge, this is the first algorithm grounded in statistical tests that can mitigate biases in the representation of an under-represented group along a ranked list.
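One natural way to instantiate the statistical prefix condition described above, though not necessarily the paper's exact procedure, is a one-sided binomial test per prefix, as in the following sketch; the significance level and encoding are illustrative choices of ours.

# One natural instantiation of the prefix test (not necessarily the paper's exact
# procedure): a prefix with too few protected candidates, as judged by a one-sided
# binomial test at level alpha, fails the check.
from math import comb

def binom_cdf(t, i, p):
    return sum(comb(i, s) * p**s * (1 - p)**(i - s) for s in range(t + 1))

def prefix_is_fair(is_protected, p_min, alpha=0.1):
    # is_protected: booleans for the ranked prefix, top position first.
    protected_so_far = 0
    for i, flag in enumerate(is_protected, start=1):
        protected_so_far += flag
        # Reject if seeing this few protected candidates is too unlikely
        # under a Binomial(i, p_min) model.
        if binom_cdf(protected_so_far, i, p_min) < alpha:
            return False
    return True

print(prefix_is_fair([False, False, True, False, True], p_min=0.5))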
As is well known, many classes of markets have efficient equilibria, but this depends on agents being non-strategic, i.e. that they declare their true demands when offered goods at particular prices, or in other words, that they are price-takers. An important question is how much the equilibria degrade in the face of strategic behavior, i.e. what is the Price of Anarchy (PoA) of the market viewed as a mechanism?
Ranking and scoring are ubiquitous. We consider the setting in which an institution, called a ranker, evaluates a set of individuals based on demographic, behavioral or other characteristics. The final output is a ranking that represents the relative quality of the individuals. While automatic and therefore seemingly objective, rankers can, and often do, discriminate against individuals and systematically disadvantage members of protected groups. This warrants a careful study of the fairness of a ranking scheme, to enable data science for social good applications, among others.