All published works (49)

We study the problem of fair cake-cutting where each agent receives a connected piece of the cake. A division of the cake is deemed fair if it is equitable, which means that all agents derive the same value from their assigned piece. Prior work has established the existence of a connected equitable division for agents with nonnegative valuations using various techniques. We provide a simple proof of this result using Sperner's lemma. Our proof extends known existence results for connected equitable divisions to significantly more general classes of valuations, including nonnegative valuations with externalities, as well as several interesting subclasses of general (possibly negative) valuations.
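For reference, the equitability requirement in this connected cake-cutting model can be written out explicitly; the formalization below is standard (the notation is ours, not quoted from the paper):

```latex
% The cake [0,1] is cut at points 0 = x_0 \le x_1 \le \dots \le x_n = 1 into
% connected pieces A_k = [x_{k-1}, x_k], one piece per agent. The division is
% equitable if every agent derives the same value from its own piece:
\[
  v_i(A_i) \;=\; v_j(A_j) \qquad \text{for all agents } i, j.
\]
```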
We study the problem of maximizing Nash social welfare, which is the geometric mean of agents' utilities, in two well-known models. The first model involves one-sided preferences, where a set of indivisible items is allocated among a group of agents (commonly studied in fair division). The second model deals with two-sided preferences, where a set of workers and firms, each having numerical valuations for the other side, are matched with each other (commonly studied in matching-under-preferences literature). We study these models under capacity constraints, which restrict the number of items (respectively, workers) that an agent (respectively, a firm) can receive. We develop constant-factor approximation algorithms for both problems under a broad class of valuations. Specifically, our main results are the following: (a) For any $\epsilon > 0$, a $(6+\epsilon)$-approximation algorithm for the one-sided problem when agents have submodular valuations, and (b) a $1.33$-approximation algorithm for the two-sided problem when the firms have subadditive valuations. The former result provides the first constant-factor approximation algorithm for Nash welfare in the one-sided problem with submodular valuations and capacities, while the latter result improves upon an existing $\sqrt{OPT}$-approximation algorithm for additive valuations. Our result for the two-sided setting also establishes a computational separation between the Nash and utilitarian welfare objectives. We also complement our algorithms with hardness-of-approximation results.
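Since several entries below optimize the same objective, a minimal sketch of evaluating Nash social welfare for a candidate one-sided allocation under capacities may help. The interface is our assumption, and valuations are simplified to additive (the paper itself handles submodular valuations):

```python
import math

def nash_welfare(alloc, values, capacity):
    """Geometric mean of agents' utilities for a capacitated allocation.

    alloc:    dict mapping agent -> set of items
    values:   dict mapping (agent, item) -> nonnegative value (additive here)
    capacity: dict mapping agent -> max number of items the agent may receive
    """
    utilities = []
    for agent, bundle in alloc.items():
        assert len(bundle) <= capacity[agent], "capacity constraint violated"
        utilities.append(sum(values[agent, item] for item in bundle))
    # Geometric mean; this is zero whenever some agent gets zero utility.
    return math.prod(utilities) ** (1.0 / len(utilities))
```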
The maximum Nash social welfare (NSW)---which maximizes the geometric mean of agents' utilities---is a fundamental solution concept with remarkable fairness and efficiency guarantees. The computational aspects of NSW have been extensively studied for *one-sided* preferences where a set of agents have preferences over a set of resources. Our work deviates from this trend and studies NSW maximization for *two-sided* preferences, wherein a set of workers and firms, each having a cardinal valuation function, are matched with each other. We provide a systematic study of the computational complexity of maximizing NSW for many-to-one matchings under two-sided preferences. Our main negative result is that maximizing NSW is NP-hard even in a highly restricted setting where each firm has capacity 2, all valuations are in the range {0,1,2}, and each agent positively values at most three other agents. In search of positive results, we develop approximation algorithms as well as parameterized algorithms in terms of natural parameters such as the number of workers, the number of firms, and the firms' capacities. We also provide algorithms for restricted domains such as symmetric binary valuations and bounded degree instances.
We study the problem of capacity modification in the many-to-one stable matching of workers and firms. Our goal is to systematically study how the set of stable matchings changes when some seats are added to or removed from the firms. We make three main contributions: First, we examine whether firms and workers can improve or worsen upon changing the capacities under worker-proposing and firm-proposing deferred acceptance algorithms. Second, we study the computational problem of adding or removing seats to either match a fixed worker-firm pair in some stable matching or make a fixed matching stable with respect to the modified problem. We develop polynomial-time algorithms for these problems when only the overall change in the firms' capacities is restricted, and show NP-hardness when there are additional constraints for individual firms. Lastly, we compare capacity modification with the classical model of preference manipulation by firms and identify scenarios under which one mode of manipulation outperforms the other. We find that a threshold on a given firm's capacity, which we call its peak, crucially determines the effectiveness of different manipulation actions.
We study the problem of fairly assigning a set of discrete tasks (or chores) among a set of agents with additive valuations. Each chore is associated with a start and finish time, and each agent can perform at most one chore at any given time. The goal is to find a fair and efficient schedule of the chores, where fairness pertains to satisfying envy-freeness up to one chore (EF1) and efficiency pertains to maximality (i.e., no unallocated chore can be feasibly assigned to any agent). Our main result is a polynomial-time algorithm for computing an EF1 and maximal schedule for two agents under monotone valuations when the conflict constraints constitute an arbitrary interval graph. The algorithm uses a coloring technique in interval graphs that may be of independent interest. For an arbitrary number of agents, we provide an algorithm for finding a fair schedule under identical dichotomous valuations when the constraints constitute a path graph. We also show that stronger fairness and efficiency properties, including envy-freeness up to any chore (EFX) along with maximality and EF1 along with Pareto optimality, cannot be achieved.
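The scheduling constraint here is simple to state in code: each agent's bundle must be an independent set in the interval (conflict) graph, i.e., no two of its chores may overlap in time. A small sketch under our naming, assuming half-open intervals $[\text{start}, \text{finish})$:

```python
def is_feasible_schedule(bundle, intervals):
    """Check that no two chores assigned to one agent overlap in time.

    bundle:    iterable of chore ids assigned to a single agent
    intervals: dict mapping chore id -> (start, finish)
    """
    chores = sorted(bundle, key=lambda c: intervals[c][0])
    for prev, nxt in zip(chores, chores[1:]):
        if intervals[nxt][0] < intervals[prev][1]:  # next starts before previous ends
            return False
    return True
```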
Social commerce platforms are emerging businesses where producers sell products through re-sellers who advertise the products to other customers in their social network. Due to the increasing popularity of this business model, thousands of small producers and re-sellers are starting to depend on these platforms for their livelihood; thus, it is important to provide fair earning opportunities to them. The enormous product space in such platforms prohibits manual search, and motivates the need for recommendation algorithms to effectively allocate product exposure and, consequently, earning opportunities. In this work, we focus on the fairness of such allocations in social commerce platforms and formulate the problem of assigning products to re-sellers as a fair division problem with indivisible items under two-sided cardinality constraints, wherein each product must be given to at least a certain number of re-sellers and each re-seller must get a certain number of products.
Consider the problem of allocating indivisible goods among agents with additive valuations, where monetary payments are not allowed. When randomization is allowed, it is possible to achieve compelling notions of fairness such as envy-freeness, which states that no agent should prefer any other agent's allocation to their own. When allocations must be deterministic, achieving exact fairness is impossible, but approximate notions such as envy-freeness up to one good can be guaranteed. In “Best of Both Worlds: Ex Ante and Ex Post Fairness in Resource Allocation,” H. Aziz, R. Freeman, N. Shah, and R. Vaish ask whether it is possible to achieve both types of guarantees simultaneously. More specifically, they ask whether there exists a probability distribution over deterministic allocations such that every deterministic allocation is envy-free up to one good and the distribution is exactly envy-free in expectation. The main result of the paper answers this question in the affirmative, showing that ex ante and ex post fairness need not be in conflict.
The classical house allocation problem involves assigning $n$ houses (or items) to $n$ agents according to their preferences. A key criterion in such problems is satisfying some fairness constraints such as envy-freeness. We consider a generalization of this problem wherein the agents are placed along the vertices of a graph (corresponding to a social network), and each agent can only experience envy towards its neighbors. Our goal is to minimize the aggregate envy among the agents as a natural fairness objective, i.e., the sum of all pairwise envy values over all edges in a social graph. When agents have identical and evenly-spaced valuations, our problem reduces to the well-studied problem of linear arrangements. For identical valuations with possibly uneven spacing, we show a number of deep and surprising ways in which our setting is a departure from this classical problem. More broadly, we contribute several structural and computational results for various classes of graphs, including NP-hardness results for disjoint unions of paths, cycles, stars, or cliques, and fixed-parameter tractable (and, in some cases, polynomial-time) algorithms for paths, cycles, stars, cliques, and their disjoint unions. Additionally, a conceptual contribution of our work is the formulation of a structural property for disconnected graphs that we call separability which results in efficient parameterized algorithms for finding optimal allocations.
In fair division problems, the notion of price of fairness measures the loss in welfare due to a fairness constraint. Prior work on the price of fairness has focused primarily on envy-freeness up to one good (EF1) as the fairness constraint, and on the utilitarian and egalitarian welfare measures. Our work instead focuses on the price of equitability up to one good (EQ1) (which we term price of equity) and considers the broad class of generalized $p$-mean welfare measures (which includes utilitarian, egalitarian, and Nash welfare as special cases). We derive fine-grained bounds on the price of equity in terms of the number of agent types (i.e., the maximum number of agents with distinct valuations), which allows us to identify scenarios where the existing bounds in terms of the number of agents are overly pessimistic. Our work focuses on the setting with binary additive valuations, and obtains upper and lower bounds on the price of equity for $p$-mean welfare for all $p \leqslant 1$. For any fixed $p$, our bounds are tight up to constant factors. A useful insight of our work is to identify the structure of allocations that underlie the upper (respectively, the lower) bounds simultaneously for all $p$-mean welfare measures, thus providing a unified structural understanding of price of fairness in this setting. This structural understanding, in fact, extends to the more general class of binary submodular (or matroid rank) valuations. We also show that, unlike binary additive valuations, for binary submodular valuations the number of agent types does not provide bounds on the price of equity.
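For reference, the generalized $p$-mean welfare of utilities $u_1, \dots, u_n$ referenced above is

```latex
\[
  W_p(u_1, \dots, u_n) \;=\; \Big( \frac{1}{n} \sum_{i=1}^{n} u_i^{\,p} \Big)^{1/p},
\]
```

with $p = 1$, $p \to 0$, and $p \to -\infty$ recovering utilitarian (average), Nash (geometric mean), and egalitarian (minimum) welfare, respectively.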
The Graphical House Allocation problem asks: how can $n$ houses (each with a fixed non-negative value) be assigned to the vertices of an undirected graph $G$, so as to minimize the "aggregate local envy", i.e., the sum of absolute differences along the edges of $G$? This problem generalizes the classical Minimum Linear Arrangement problem, as well as the well-known House Allocation Problem from Economics, the latter of which has notable practical applications in organ exchanges. Recent work has studied the computational aspects of Graphical House Allocation and observed that the problem is NP-hard and inapproximable even on particularly simple classes of graphs, such as vertex disjoint unions of paths. However, the dependence of any approximations on the structural properties of the underlying graph had not been studied. In this work, we give a complete characterization of the approximability of the Graphical House Allocation problem. We present algorithms to approximate the optimal envy on general graphs, trees, planar graphs, bounded-degree graphs, bounded-degree planar graphs, and bounded-degree trees. For each of these graph classes, we then prove matching lower bounds, showing that in each case, no significant improvement can be attained unless P = NP. We also present general approximation ratios as a function of structural parameters of the underlying graph, such as treewidth; these match the aforementioned tight upper bounds in general, and are significantly better approximations for many natural subclasses of graphs. Finally, we present constant factor approximation schemes for the special classes of complete binary trees and random graphs.
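The objective these two house-allocation abstracts minimize is concrete enough to state in a few lines; a minimal sketch (function and argument names are ours):

```python
def aggregate_envy(assignment, house_value, edges):
    """Sum of absolute value-differences along the edges of the graph.

    assignment:  dict mapping vertex -> house assigned to the agent there
    house_value: dict mapping house -> fixed nonnegative value
    edges:       iterable of (u, v) vertex pairs of the social graph
    """
    return sum(abs(house_value[assignment[u]] - house_value[assignment[v]])
               for u, v in edges)
```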
Strategic behavior in two-sided matching markets has been traditionally studied in a "one-sided" manipulation setting where the agent who misreports is also the intended beneficiary. Our work investigates "two-sided" manipulation of the deferred acceptance algorithm where the misreporting agent and the manipulator (or beneficiary) are on different sides. Specifically, we generalize the recently proposed accomplice manipulation model (where a man misreports on behalf of a woman) along two complementary dimensions: (a) the two for one model, with a pair of misreporting agents (man and woman) and a single beneficiary (the misreporting woman), and (b) the one for all model, with one misreporting agent (man) and a coalition of beneficiaries (all women). Our main contribution is to develop polynomial-time algorithms for finding an optimal manipulation in both settings. We obtain these results despite the fact that an optimal one for all strategy fails to be inconspicuous, while it is unclear whether an optimal two for one strategy satisfies the inconspicuousness property. We also study the conditions under which stability of the resulting matching is preserved. Experimentally, we show that two-sided manipulations are more frequently available and offer better quality matches than their one-sided counterparts.
We study fair allocation of indivisible goods and chores among agents with \emph{lexicographic} preferences -- a subclass of additive valuations. In sharp contrast to the goods-only setting, we show that an allocation satisfying \emph{envy-freeness up to any item} (EFX) could fail to exist for a mixture of \emph{objective} goods and chores. To our knowledge, this negative result provides the \emph{first} counterexample for EFX over (any subdomain of) additive valuations. To complement this non-existence result, we identify a class of instances with (possibly subjective) mixed items where an EFX and Pareto optimal allocation always exists and can be efficiently computed. When the fairness requirement is relaxed to \emph{maximin share} (MMS), we show positive existence and computation for \emph{any} mixed instance. More broadly, our work examines the existence and computation of fair and efficient allocations both for mixed items as well as chores-only instances, and highlights the additional difficulty of these problems vis-à-vis their goods-only counterparts.
Fair division provides a rich computational and mathematical framework for the allocation of indivisible goods, which has given rise to numerous fairness concepts and their relaxations. In recent years, much attention has been given to theoretical and computational aspects of various fairness concepts. Nonetheless, the choice of which fairness concept is in practice perceived to be fairer by individuals is not well understood. We consider two conceptually different relaxations of envy-freeness and investigate how individuals perceive the induced allocations as fair. In particular, we examine a well-studied relaxation of envy-freeness up to one good (EF1) which is based on counterfactual thinking that any pairwise envy can be eliminated by the hypothetical removal of a single good from the envied agent's bundle. In contrast, a recently proposed epistemic notion, namely envy-freeness up to $k$ hidden goods (HEF-$k$), provides a relaxation by hiding information about a small subset of $k$ goods. Through various crowdsourcing experiments, we empirically demonstrate that allocations achieved by withholding information are perceived to be fairer compared to two variants of EF1.
The deferred acceptance algorithm is an elegant solution to the stable matching problem that guarantees optimality and truthfulness for one side of the market. Despite these desirable guarantees, it is susceptible to strategic misreporting of preferences by the agents on the other side. We study a novel model of strategic behavior under the deferred acceptance algorithm: manipulation through an accomplice. Here, an agent on the proposed-to side (say, a woman) partners with an agent on the proposing side---an accomplice---to manipulate on her behalf (possibly at the expense of worsening his match). We show that the optimal manipulation strategy for an accomplice consists of promoting exactly one woman in his true list (i.e., an inconspicuous manipulation). This structural result immediately gives a polynomial-time algorithm for computing an optimal accomplice manipulation. We also study the conditions under which the manipulated matching is stable with respect to the true preferences. Our experimental results show that accomplice manipulation outperforms self manipulation both in terms of the frequency of occurrence as well as the quality of matched partners.
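For context, the procedure being manipulated is the textbook man-proposing deferred acceptance algorithm of Gale and Shapley. The sketch below is that standard procedure under our naming, not code from the paper, and assumes complete preference lists on both sides:

```python
def deferred_acceptance(men_prefs, women_prefs):
    """Man-proposing deferred acceptance (Gale-Shapley).

    men_prefs:   dict man -> list of women, most preferred first (complete)
    women_prefs: dict woman -> list of men, most preferred first (complete)
    Returns a stable matching as a dict woman -> man.
    """
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}   # index of next woman to try
    free_men = list(men_prefs)
    engaged = {}                                # woman -> man
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged:                    # w accepts her first proposal
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:  # w trades up to m
            free_men.append(engaged[w])
            engaged[w] = m
        else:                                   # w rejects m
            free_men.append(m)
    return engaged
```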
Given a graph $G = (V,E)$ where every vertex has weak preferences over its neighbors, we consider the problem of computing an optimal matching as per agent preferences. The classical notion of optimality in this setting is stability. However, stable matchings, and more generally popular matchings, need not exist when $G$ is non-bipartite. Unlike popular matchings, Copeland winners always exist in any voting instance -- we study the complexity of computing a matching that is a Copeland winner and show there is no polynomial-time algorithm for this problem unless $\mathsf{P} = \mathsf{NP}$. We introduce a relaxation of both popular matchings and Copeland winners called semi-Copeland winners. These are matchings with Copeland score at least $\mu/2$, where $\mu$ is the total number of matchings in $G$; the maximum possible Copeland score is $\mu - 1/2$. We give a fully polynomial-time randomized approximation scheme to compute a matching with Copeland score at least $(\mu/2)\cdot(1-\varepsilon)$ for any $\varepsilon > 0$.
Envy-freeness up to any good (EFX) provides a strong and intuitive guarantee of fairness in the allocation of indivisible goods. But whether such allocations always exist or whether they can be efficiently computed remains an important open question. We study the existence and computation of EFX in conjunction with various other economic properties under lexicographic preferences--a well-studied preference restriction model in artificial intelligence and economics. In sharp contrast to the known results for additive valuations, we not only prove the existence of EFX and Pareto optimal allocations, but in fact provide an algorithmic characterization of these two properties. We also characterize the mechanisms that are, in addition, strategyproof, non-bossy, and neutral. When the efficiency notion is strengthened to rank-maximality, we obtain non-existence and computational hardness results, and show that tractability can be restored when EFX is relaxed to another well-studied fairness notion called maximin share guarantee (MMS).
We study a model of proxy voting where the candidates, voters, and proxies are all located on the real line, and instead of voting directly, each voter delegates its vote to the closest proxy. The goal is to find a set of proxies that is $\theta$-representative, which entails that for any voter located anywhere on the line, its favorite candidate is within a distance $\theta$ of the favorite candidate of its closest proxy. This property guarantees a strong form of representation as the set of voters is not required to be fixed in advance, or even be finite. We show that for candidates located on a line, an optimal proxy arrangement can be computed in polynomial time. Moreover, we provide upper and lower bounds on the number of proxies required to form a $\theta$-representative set, thus showing that a relatively small number of proxies is enough to capture the preferences of any set of voters. An additional beneficial property of a $\theta$-representative proxy arrangement is that for strict-Condorcet voting rules, the outcome of proxy voting is similarly close to the outcome of direct voting.
We study fair resource allocation under a connectedness constraint wherein a set of indivisible items are arranged on a path and only connected subsets of items may be allocated to the agents. An allocation is deemed fair if it satisfies equitability up to one good (EQ1), which requires that agents' utilities are approximately equal. We show that achieving EQ1 in conjunction with well-studied measures of economic efficiency (such as Pareto optimality, non-wastefulness, maximum egalitarian or utilitarian welfare) is computationally hard even for binary additive valuations. On the algorithmic side, we show that by relaxing the efficiency requirement, a connected EQ1 allocation can be computed in polynomial time for any given ordering of agents, even for general monotone valuations. Interestingly, the allocation computed by our algorithm has the highest egalitarian welfare among all allocations consistent with the given ordering. On the other hand, if efficiency is required, then tractability can still be achieved for binary additive valuations with interval structure. Along the way, we strengthen some of the existing results in the literature for other fairness notions such as envy-freeness up to one good (EF1), and also provide novel results for negatively-valued items or chores.
Given a graph $G = (V,E)$ where every vertex has a weak ranking over its neighbors, we consider the problem of computing an optimal matching as per agent preferences. Classical notions of optimality such as stability and its relaxation popularity could fail to exist when $G$ is non-bipartite. In light of the non-existence of a popular matching, we consider relaxations of popularity that are guaranteed to exist, and find one in the form of semi-popularity. A matching $M$ is semi-popular if for a majority of the matchings $N$ in $G$, $M$ does not lose a head-to-head election against $N$. We show that a semi-popular matching always exists in any graph $G$ and complement this existence result with a fully polynomial-time randomized approximation scheme (FPRAS). A special subclass of semi-popular matchings is the set of Copeland winners -- the notion of Copeland winner is classical in social choice theory and a Copeland winner always exists in any voting instance. We study the complexity of computing a matching that is a Copeland winner and show there is no polynomial-time algorithm for this problem unless $\mathsf{P} = \mathsf{NP}$.
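The Copeland score underlying both of the matching abstracts above can be spelled out: with $\mu$ matchings in total and ties (including the self-comparison) worth half a point, beating every other matching yields the maximum score $\mu - 1/2$ quoted earlier. In our notation (a reconstruction, not quoted from the papers):

```latex
% \phi(M, N) = (#vertices preferring M) - (#vertices preferring N)
% in a head-to-head election between matchings M and N.
\[
  \mathrm{score}(M) \;=\; \big|\{ N : \phi(M, N) > 0 \}\big|
  \;+\; \tfrac{1}{2}\,\big|\{ N : \phi(M, N) = 0 \}\big|,
\]
% so M is semi-popular iff score(M) >= \mu/2, i.e., M does not lose
% head-to-head elections against a majority of matchings.
```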
We study the fair allocation of undesirable indivisible items, or chores. While the case of desirable indivisible items (or goods) is extensively studied, with many results known for different notions of fairness, less is known about the fair division of chores. We study the envy-free division of chores, and make three contributions. First, we show that determining the existence of an envy-free allocation is NP-complete, even in the simple case when agents have binary additive valuations. Second, we provide a polynomial-time algorithm for computing an allocation that satisfies envy-freeness up to one chore (EF1), correcting an existing proof in the literature. A straightforward modification of our algorithm can be used to compute an EF1 allocation for doubly monotone instances (wherein each agent can partition the set of items into objective goods and objective chores). Our third result applies to a mixed resources model consisting of indivisible items and a divisible, undesirable heterogeneous resource (i.e., a bad cake). We show that there always exists an allocation that satisfies envy-freeness for mixed resources (EFM) in this setting, complementing a recent result of Bei et al. (Art. Int. 2021) for indivisible goods and divisible cake.
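For additive disutilities, the EF1 guarantee for chores can be verified directly; a minimal check under our naming (for chores, the hypothetically removed item comes from the envious agent's own bundle):

```python
def is_ef1_chores(alloc, cost):
    """EF1 check for chores with additive disutilities.

    alloc: dict agent -> set of chores
    cost:  dict (agent, chore) -> nonnegative disutility
    For every pair (i, j), removing some chore from i's own bundle must bring
    i's remaining disutility down to at most i's disutility for j's bundle;
    removing i's costliest own chore is always the best choice.
    """
    def d(i, bundle):
        return sum(cost[i, c] for c in bundle)

    for i, Ai in alloc.items():
        worst = max((cost[i, c] for c in Ai), default=0)  # costliest own chore
        for j, Aj in alloc.items():
            if i != j and d(i, Ai) - worst > d(i, Aj):
                return False
    return True
```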
We study the problem of allocating indivisible goods among agents. When randomization is allowed, it is possible to achieve compelling fairness guarantees such as envy-freeness (Foley, 1967), which states that no agent should (in expectation) prefer any other agent's allocation to her own. For instance, we can simply allocate each good, independently of the other goods, to an agent chosen uniformly at random. However, while this scheme is fair ex-ante, it may produce outcomes that are very unfair ex-post, such as by chance assigning all the goods to a single agent. On the other hand, in the absence of randomization, some amount of ex-post unfairness is unavoidable. Nevertheless, it is possible to guarantee relaxations of envy-freeness that bound the maximum level of envy. One popular relaxation is envy-freeness up to one good (Lipton et al., 2004; Budish, 2011), which requires that the envy of any agent toward another agent can be removed by the elimination of at most one good from the envied agent's bundle.
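The ex-ante fairness of the uniform scheme described above is a one-line calculation for additive valuations: if each good $g$ independently goes to each of the $n$ agents with probability $1/n$, then for any agents $i$ and $j$,

```latex
\[
  \mathbb{E}\big[ v_i(A_i) \big] \;=\; \sum_{g} \frac{v_i(g)}{n} \;=\; \mathbb{E}\big[ v_i(A_j) \big],
\]
```

so no agent prefers another agent's random bundle in expectation; the ex-post unfairness is exactly the low-probability event that the realized bundles are lopsided.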
Envy-freeness up to one good (EF1) is a well-studied fairness notion for indivisible goods that addresses pairwise envy by the removal of at most one good. In the worst case, each pair of agents might require the (hypothetical) removal of a different good, resulting in a weak aggregate guarantee. We study allocations that are nearly envy-free in aggregate, and define a novel fairness notion based on information withholding. Under this notion, an agent can withhold (or hide) some of the goods in its bundle and reveal the remaining goods to the other agents. We observe that in practice, envy-freeness can be achieved by withholding only a small number of goods overall. We show that finding allocations that withhold an optimal number of goods is computationally hard even for highly restricted classes of valuations. In contrast to the worst-case results, our experiments on synthetic and real-world preference data show that existing algorithms for finding EF1 allocations withhold a close-to-optimal amount of information.
We study fair allocation of indivisible chores (i.e., items with non-positive value) among agents with additive valuations. An allocation is deemed fair if it is (approximately) equitable, which means that the disutilities of the agents are (approximately) equal. Our main theoretical contribution is to show that there always exists an allocation that is simultaneously equitable up to one chore (EQ1) and Pareto optimal (PO), and to provide a pseudopolynomial-time algorithm for computing such an allocation. In addition, we observe that the Leximin solution---which is known to satisfy a strong form of approximate equitability in the goods setting---fails to satisfy even EQ1 for chores. It does, however, satisfy a novel fairness notion that we call equitability up to any duplicated chore. Our experiments on synthetic as well as real-world data obtained from the Spliddit website reveal that the algorithms considered in our work satisfy approximate fairness and efficiency properties significantly more often than the algorithm currently deployed on Spliddit.
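Equitability up to one chore also admits a direct check for additive disutilities; a small sketch with our naming, using the convention that, for each agent, removing its single costliest chore must bring its disutility down to at most every other agent's:

```python
def is_eq1_chores(alloc, cost):
    """EQ1 check for chores: after hypothetically removing one chore from any
    agent's bundle, that agent's disutility is at most every other agent's.

    alloc: dict agent -> set of chores
    cost:  dict (agent, chore) -> nonnegative disutility (additive)
    """
    disutility = {i: sum(cost[i, c] for c in Ai) for i, Ai in alloc.items()}
    for i, Ai in alloc.items():
        worst = max((cost[i, c] for c in Ai), default=0)  # costliest own chore
        if any(disutility[i] - worst > disutility[j] for j in alloc if j != i):
            return False
    return True
```

Note the contrast with the EF1 check above: EQ1 compares agents' own disutilities directly, whereas EF1 compares each agent's valuation of its own bundle against its valuation of the other agent's bundle.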
We study the problem of allocating indivisible goods among agents with additive valuations. When randomization is allowed, it is possible to achieve compelling notions of fairness such as envy-freeness, which states that no agent should prefer any other agent's allocation to her own. When allocations must be deterministic, achieving exact fairness is impossible but approximate notions such as envy-freeness up to one good can be guaranteed. Our goal in this work is to achieve both simultaneously, by constructing a randomized allocation that is exactly fair ex-ante and approximately fair ex-post. The key question we address is whether ex-ante envy-freeness can be achieved in combination with ex-post envy-freeness up to one good. We settle this positively by designing an efficient algorithm that achieves both properties simultaneously. If we additionally require economic efficiency, we obtain an impossibility result. However, we show that economic efficiency and ex-ante envy-freeness can be simultaneously achieved if we slightly relax our ex-post fairness guarantee. On our way, we characterize the well-known Maximum Nash Welfare allocation rule in terms of a recently introduced fairness guarantee that applies to groups of agents, not just individuals.
Consider the following problem faced by an online voting platform: A user is provided with a list of alternatives, and is asked to rank them in order of preference using only drag-and-drop operations. The platform's goal is to recommend an initial ranking that minimizes the time spent by the user in arriving at her desired ranking. We develop the first optimization framework to address this problem, and make theoretical as well as practical contributions. On the practical side, our experiments on the Amazon Mechanical Turk platform provide two interesting insights about user behavior: First, that users' ranking strategies closely resemble selection or insertion sort, and second, that the time taken for a drag-and-drop operation depends linearly on the number of positions moved. These insights directly motivate our theoretical model of the optimization problem. We show that computing an optimal recommendation is NP-hard, and provide exact and approximation algorithms for a variety of special cases of the problem. Experimental evaluation on MTurk shows that, compared to a random recommendation strategy, the proposed approach reduces the (average) time-to-rank by up to 50%.
In fair division, equitability dictates that each participant receives the same level of utility. In this work, we study equitable allocations of indivisible goods among agents with additive valuations. While prior work has studied (approximate) equitability in isolation, we consider equitability in conjunction with other well-studied notions of fairness and economic efficiency. We show that the Leximin algorithm produces an allocation that satisfies equitability up to any good and Pareto optimality. We also give a novel algorithm that guarantees Pareto optimality and equitability up to one good in pseudopolynomial time. Our experiments on real-world preference data reveal that approximate envy-freeness, approximate equitability, and Pareto optimality can often be achieved simultaneously.
We study a generalization of the classical stable matching problem that allows for cardinal preferences (as opposed to ordinal) and fractional matchings (as opposed to integral). After observing that, in this cardinal setting, stable fractional matchings can have much higher social welfare than stable integral ones, our goal is to understand the computational complexity of finding an optimal (i.e., welfare-maximizing) or nearly-optimal stable fractional matching. We present simple approximation algorithms for this problem with weak welfare guarantees and, rather unexpectedly, we furthermore show that achieving better approximations is hard. This computational hardness persists even for approximate stability. To the best of our knowledge, these are the first computational complexity results for stable fractional matchings. En route to these results, we provide a number of structural observations.
We study the problem of fairly allocating a set of indivisible goods among agents with additive valuations. The extent of fairness of an allocation is measured by its Nash social welfare, which is the geometric mean of the valuations of the agents for their bundles. While the problem of maximizing Nash social welfare is known to be APX-hard in general, we study the effectiveness of simple, greedy algorithms in solving this problem in two interesting special cases. First, we show that a simple, greedy algorithm provides a 1.061-approximation guarantee when agents have identical valuations, even though the problem of maximizing Nash social welfare remains NP-hard for this setting. Second, we show that when agents have binary valuations over the goods, an exact solution (i.e., a Nash optimal allocation) can be found in polynomial time via a greedy algorithm. Our results in the binary setting extend to provide novel, exact algorithms for optimizing Nash social welfare under concave valuations. Notably, for the aforementioned scenarios, our techniques provide a simple alternative to several of the existing, more sophisticated techniques for this problem, such as constructing equilibria of Fisher markets or using real stable polynomials.
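For the identical-valuations case, a greedy of the flavor the abstract describes considers goods in decreasing order of value and hands each to a currently poorest agent; the sketch below is our reconstruction of that idea, not verbatim from the paper:

```python
import heapq

def greedy_identical(values, n_agents):
    """Greedy allocation for identical additive valuations: scan goods in
    decreasing value and assign each to an agent with minimum current utility.

    values: dict good -> common value of that good (same for every agent)
    Returns dict agent index -> set of goods.
    """
    bundles = {i: set() for i in range(n_agents)}
    heap = [(0.0, i) for i in range(n_agents)]  # (current utility, agent)
    heapq.heapify(heap)
    for good in sorted(values, key=values.get, reverse=True):
        utility, agent = heapq.heappop(heap)    # a currently poorest agent
        bundles[agent].add(good)
        heapq.heappush(heap, (utility + values[good], agent))
    return bundles
```

Per the abstract, this style of greedy is within a factor of 1.061 of the optimal Nash social welfare for identical valuations.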
We study the problem of allocating a set of indivisible goods among a set of agents in a fair and efficient manner. An allocation is said to be fair if it is envy-free up to one good (EF1), which means that each agent prefers its own bundle over the bundle of any other agent up to the removal of one good. In addition, an allocation is deemed efficient if it satisfies Pareto efficiency. While each of these well-studied properties is easy to achieve separately, achieving them together is far from obvious. Recently, Caragiannis et al. (2016) established the surprising result that when agents have additive valuations for the goods, there always exists an allocation that simultaneously satisfies these two seemingly incompatible properties. Specifically, they showed that an allocation that maximizes the Nash social welfare objective is both EF1 and Pareto efficient. However, the problem of maximizing Nash social welfare is NP-hard. As a result, this approach does not provide an efficient algorithm for finding a fair and efficient allocation. In this paper, we bypass this barrier, and develop a pseudopolynomial time algorithm for finding allocations that are EF1 and Pareto efficient; in particular, when the valuations are bounded, our algorithm finds such an allocation in polynomial time. Furthermore, we establish a stronger existence result compared to Caragiannis et al. (2016): For additive valuations, there always exists an allocation that is EF1 and fractionally Pareto efficient. Another key contribution of our work is to show that our algorithm provides a polynomial-time 1.45-approximation to the Nash social welfare objective. This improves upon the best known approximation ratio for this problem (namely, the 2-approximation algorithm of Cole et al., 2017), and also matches the lower bound on the integrality gap of the convex program of Cole et al. (2017). Unlike many of the existing approaches, our algorithm is completely combinatorial, and relies on constructing integral Fisher markets wherein specific equilibria are not only efficient, but also fair.
We study the problem of allocating a set of indivisible goods among a set of agents in a fair and efficient manner. An allocation is said to be fair if it is envy-free up to one good (EF1), which means that each agent prefers its own bundle over the bundle of any other agent up to the removal of one good. In addition, an allocation is deemed efficient if it satisfies Pareto optimality (PO). While each of these well-studied properties is easy to achieve separately, achieving them together is far from obvious. Recently, Caragiannis et al. (2016) established the surprising result that when agents have additive valuations for the goods, there always exists an allocation that simultaneously satisfies these two seemingly incompatible properties. Specifically, they showed that an allocation that maximizes the Nash social welfare (NSW) objective is both EF1 and PO. However, the problem of maximizing NSW is NP-hard. As a result, this approach does not provide an efficient algorithm for finding a fair and efficient allocation. In this paper, we bypass this barrier, and develop a pseudopolynomial time algorithm for finding allocations that are EF1 and PO; in particular, when the valuations are bounded, our algorithm finds such an allocation in polynomial time. Furthermore, we establish a stronger existence result compared to Caragiannis et al. (2016): For additive valuations, there always exists an allocation that is EF1 and fractionally PO. Another contribution of our work is to show that our algorithm provides a polynomial-time 1.45-approximation to the NSW objective. This improves upon the best known approximation ratio for this problem (namely, the 2-approximation algorithm of Cole et al. (2017)). Unlike many of the existing approaches, our algorithm is completely combinatorial.
We revisit the problem of designing optimal, individually rational matching mechanisms (in a general sense, allowing for cycles in directed graphs), where each player, who is associated with a subset of vertices, matches as many of his own vertices when he opts into the matching mechanism as when he opts out. We offer a new perspective on this problem by considering an arbitrary graph, but assuming that vertices are associated with players at random. Our main result asserts that, under certain conditions, any fixed optimal matching is likely to be individually rational up to lower-order terms. We also show that a simple and practical mechanism is (fully) individually rational, and likely to be optimal up to lower-order terms. We discuss the implications of our results for market design in general, and kidney exchange in particular.

Commonly Cited References

Under the pari-mutuel system of betting on horse races, the final track odds are in some sense a consensus of the 'subjective odds' of the individual bettors, weighted by the amounts of their bets. The properties that this consensus must possess are formulated, and it is proved that there always exists a unique set of odds having the required properties.
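In symbols (our notation, not the paper's): if $b_i$ denotes the total amount bet on horse $i$ and $T = \sum_j b_j$ is the pool, then the consensus probability of horse $i$ and the corresponding track odds against it (ignoring the track take) would be

```latex
p_i = \frac{b_i}{\sum_j b_j}, \qquad
o_i = \frac{T - b_i}{b_i} = \frac{1 - p_i}{p_i}.
```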
In fair division, equitability dictates that each participant receives the same level of utility. In this work, we study equitable allocations of indivisible goods among agents with additive valuations. While prior work has studied (approximate) equitability in isolation, we consider equitability in conjunction with other well-studied notions of fairness and economic efficiency. We show that the Leximin algorithm produces an allocation that satisfies equitability up to any good and Pareto optimality. We also give a novel algorithm that guarantees Pareto optimality and equitability up to one good in pseudopolynomial time. Our experiments on real-world preference data reveal that approximate envy-freeness, approximate equitability, and Pareto optimality can often be achieved simultaneously.
We study the problem of allocating a set of indivisible goods among a set of agents in a fair and efficient manner. An allocation is said to be fair if it is envy-free up to one good (EF1), which means that each agent prefers its own bundle over the bundle of any other agent up to the removal of one good. In addition, an allocation is deemed efficient if it satisfies Pareto efficiency. While each of these well-studied properties is easy to achieve separately, achieving them together is far from obvious. Recently, Caragiannis et al. (2016) established the surprising result that when agents have additive valuations for the goods, there always exists an allocation that simultaneously satisfies these two seemingly incompatible properties. Specifically, they showed that an allocation that maximizes the Nash social welfare objective is both EF1 and Pareto efficient. However, the problem of maximizing Nash social welfare is NP-hard. As a result, this approach does not provide an efficient algorithm for finding a fair and efficient allocation. In this paper, we bypass this barrier, and develop a pseudopolynomial time algorithm for finding allocations that are EF1 and Pareto efficient; in particular, when the valuations are bounded, our algorithm finds such an allocation in polynomial time. Furthermore, we establish a stronger existence result compared to Caragiannis et al. (2016): For additive valuations, there always exists an allocation that is EF1 and fractionally Pareto efficient. Another key contribution of our work is to show that our algorithm provides a polynomial-time 1.45-approximation to the Nash social welfare objective. This improves upon the best known approximation ratio for this problem (namely, the 2-approximation algorithm of Cole et al., 2017), and also matches the lower bound on the integrality gap of the convex program of Cole et al. (2017). Unlike many of the existing approaches, our algorithm is completely combinatorial, and relies on constructing integral Fisher markets wherein specific equilibria are not only efficient, but also fair.
We generalize the classic problem of fairly allocating indivisible goods to the problem of fair public decision making, in which a decision must be made on several social issues simultaneously, and, unlike the classic setting, a decision can provide positive utility to multiple players. We extend the popular fairness notion of proportionality (which is not guaranteeable) to our more general setting, and introduce three novel relaxations --- proportionality up to one issue, round robin share, and pessimistic proportional share --- that are also interesting in the classic goods allocation setting. We show that the Maximum Nash Welfare solution, which is known to satisfy appealing fairness properties in the classic setting, satisfies or approximates all three relaxations in our framework. We also provide polynomial time algorithms and hardness results for finding allocations satisfying these axioms, with or without insisting on Pareto optimality.
We characterize solutions for two-sided matching, both in the transferable- and in the nontransferable-utility frameworks, using a cardinal formulation. Our approach makes the comparison of the matching models with and without transfers particularly transparent. We introduce the concept of a no-trade matching to study the role of transfers in matching. A no-trade matching is one in which the availability of transfers does not affect the outcome.
We present algorithmic applications of an approximate version of Carathéodory's theorem. The theorem states that given a set of vectors $X$ in $\mathbb{R}^d$, for every vector in the convex hull of $X$ there exists an $\varepsilon$-close (under the $p$-norm distance for $2\leq p < \infty$) vector that can be expressed as a convex combination of at most $b$ vectors of $X$, where the bound $b$ depends on $\varepsilon$ and the norm $p$ and is independent of the dimension $d$. This theorem can be derived by instantiating Maurey's lemma, early references to which can be found in the work of Pisier (1981) and Carl (1985). However, in this paper we present a self-contained proof of this result. Using this theorem we establish that in a bimatrix game with $n \times n$ payoff matrices $A, B$, if the number of nonzero entries in any column of $A+B$ is at most $s$ then an $\varepsilon$-Nash equilibrium of the game can be computed in time $n^{O(\frac{\log s}{\varepsilon^2})}$. This, in particular, gives us a polynomial-time approximation scheme for Nash equilibrium in games with fixed column sparsity $s$. Moreover, for arbitrary bimatrix games the running time of our algorithm matches the best-known upper bound, which was obtained by Lipton, Markakis, and Mehta (2003). The theorem also leads to an additive approximation algorithm for the normalized densest $k$-subgraph problem. Given a graph with $n$ vertices and maximum degree $d$, our algorithm determines a size-$k$ subgraph with normalized density within $\varepsilon$ of the optimal in time $n^{O(\frac{\log d}{\varepsilon^2})}$.
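The sampling idea behind such results (Maurey's empirical-mean argument) is easy to sketch: sample a few points i.i.d. from the convex weights and average them. The code below is our illustration under those assumptions, not the paper's implementation.

```python
import numpy as np

def approx_caratheodory(X, weights, b, seed=None):
    """Maurey-style sampling sketch of the approximate Caratheodory theorem.

    X: (m, d) array whose rows are the vectors in the set; weights: convex
    combination weights of a target point x = weights @ X. Averaging b rows
    drawn i.i.d. from `weights` gives a convex combination of at most b
    points of X that is close to x, with b independent of the dimension d.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[0], size=b, p=weights)  # i.i.d. samples
    return X[idx].mean(axis=0)                       # empirical mean

# Illustration (hypothetical data): 5 basis vectors, target = centroid.
X = np.eye(5)
w = np.full(5, 0.2)
x_hat = approx_caratheodory(X, w, b=200, seed=0)
print(np.linalg.norm(x_hat - w @ X))  # small for moderate b
```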
We consider the problem of fairly dividing a set of items. Much of the fair division literature assumes that the items are `goods', i.e., they yield positive utility for the agents. There is also some work where the items are `chores' that yield negative utility for the agents. In this paper, we consider a more general scenario where an agent may have negative or positive utility for each item. This framework captures, e.g., fair task assignment, where agents can have both positive and negative utilities for each task. We show that whereas some of the positive axiomatic and computational results extend to this more general setting, others do not. We present several new and efficient algorithms for finding fair allocations in this general setting. We also point out several gaps in the literature regarding the existence of allocations satisfying certain fairness and efficiency properties and further study the complexity of computing such allocations.
We study the problem of allocating divisible bads (chores) among multiple agents with additive utilities, when money transfers are not allowed. The competitive rule is known to be the best mechanism for goods with additive utilities and was recently extended to chores by Bogomolnaia et al. (2017). For both goods and chores, the rule produces Pareto optimal and envy-free allocations. In the case of goods, the outcome of the competitive rule can be easily computed. Competitive allocations solve the Eisenberg-Gale convex program; hence the outcome is unique and can be approximately found by standard gradient methods. An exact algorithm that runs in polynomial time in the number of agents and goods was given by Orlin. In the case of chores, the competitive rule does not solve any convex optimization problem; instead, competitive allocations correspond to local minima, local maxima, and saddle points of the Nash Social Welfare on the Pareto frontier of the set of feasible utilities. The rule becomes multivalued and none of the standard methods can be applied to compute its outcome. In this paper, we show that all the outcomes of the competitive rule for chores can be computed in strongly polynomial time if either the number of agents or the number of chores is fixed. The approach is based on a combination of three ideas: all consumption graphs of Pareto optimal allocations can be listed in polynomial time; for a given consumption graph, a candidate for a competitive allocation can be constructed via an explicit formula; and a given allocation can be checked for being competitive using a maximum flow computation as in Devanur et al. (2002). Our algorithm immediately gives an approximately-fair allocation of indivisible chores by the rounding technique of Barman and Krishnamurthy (2018).
A Little Charity Guarantees Almost Envy-Freeness. Bhaskar Ray Chaudhury, Telikepalli Kavitha, Kurt Mehlhorn, and Alkmini Sgouritsa. In Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 2658-2672. DOI: https://doi.org/10.1137/1.9781611975994.162. Fair division of indivisible goods is a very well-studied problem. The goal of this problem is to distribute m goods to n agents in a "fair" manner, where every agent has a valuation for each subset of goods. We assume general valuations. Envy-freeness is the most extensively studied notion of fairness. However, envy-free allocations do not always exist when goods are indivisible. The notion of fairness we consider here is "envy-freeness up to any good" (EFX), where no agent envies another agent after the removal of any single good from the other agent's bundle. It is not known if such an allocation always exists even when n = 3. We show there is always a partition of the set of goods into n + 1 subsets (X1, …, Xn, P), where for i ∈ [n], Xi is the bundle allocated to agent i and the set P is unallocated (or donated to charity), such that we have: (1) envy-freeness up to any good, (2) no agent values P higher than her own bundle, and (3) fewer than n goods go to charity, i.e., |P| < n (typically m ≫ n). Our proof is constructive. When agents have additive valuations and |P| is large (i.e., when |P| is close to n), our allocation also has a good maximin share (MMS) guarantee. Moreover, a minor variant of our algorithm also shows the existence of an allocation which is 4/7 groupwise maximin share (GMMS): this is a notion of fairness stronger than MMS. This improves upon the current best bound of 1/2 known for an approximate GMMS allocation.
The leximin solution -- which selects an allocation that maximizes the minimum utility, then the second minimum utility, and so forth -- is known to provide EFX (envy-free up to any good) fairness guarantee in some contexts when allocating indivisible goods. However, it remains unknown how fair the leximin solution is when used to allocate indivisible chores. In this paper, we demonstrate that the leximin solution can be modified to also provide compelling fairness guarantees for the allocation of indivisible chores. First, we generalize the definition of the leximin solution. Then, we show that the leximin solution finds a PROP1 (proportional up to one good) and PO (Pareto-optimal) allocation for 3 or 4 agents in the context of chores allocation with additive distinct valuations. Additionally, we prove that the leximin solution is EFX for combinations of goods and chores for agents with general but identical valuations.
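For intuition, the leximin order itself amounts to comparing sorted utility vectors lexicographically. Here is a brute-force sketch of that comparison (our own; the paper's generalized leximin may differ in its details):

```python
def leximin_key(utilities):
    """Sort key for the leximin order: the utility vector sorted in
    nondecreasing order, compared lexicographically."""
    return tuple(sorted(utilities))

def leximin_best(allocations, utility_of):
    """Among candidate allocations, pick one maximizing the minimum
    utility, breaking ties by the second-smallest utility, and so on.
    Brute force over an explicit candidate list, for illustration only."""
    return max(allocations, key=lambda a: leximin_key(utility_of(a)))

# Example with hypothetical utility profiles attached to allocations.
candidates = {"A": (1, 5, 5), "B": (2, 2, 9), "C": (2, 3, 4)}
print(leximin_best(candidates, lambda a: candidates[a]))  # "C"
```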
We study the problem of allocating a set of indivisible goods among agents with subadditive valuations in a fair and efficient manner. Envy-Freeness up to any good (EFX) is the most compelling notion of fairness in the context of indivisible goods. Although the existence of EFX is not known beyond the simple case of two agents with subadditive valuations, some good approximations of EFX are known to exist, namely 1/2-EFX allocation and EFX allocations with bounded charity. Nash welfare (the geometric mean of agents' valuations) is one of the most commonly used measures of efficiency. In the case of additive valuations, an allocation that maximizes Nash welfare also satisfies fairness properties like Envy-Free up to one good (EF1). Although there is substantial work on approximating Nash welfare when agents have additive valuations, very little is known when agents have subadditive valuations. In this paper, we design a polynomial-time algorithm that outputs an allocation that satisfies either of the two approximations of EFX as well as achieves an O(n) approximation to the Nash welfare. Our result also improves the current best-known approximations of O(n log n) and O(m) to Nash welfare when agents have submodular and subadditive valuations, respectively. Furthermore, our technique also gives an O(n) approximation to a family of welfare measures, the p-mean of valuations for p in (-\infty, 1], thereby also matching asymptotically the current best approximation ratio for special cases like p = -\infty while also retaining the remarkable fairness properties.
A mixed manna contains goods (that everyone likes) and bads (that everyone dislikes), as well as items that are goods to some agents, but bads or satiated to others. If all items are goods and utility functions are homogeneous of degree 1 and concave (and monotone), the competitive division maximizes the Nash product of utilities (Gale–Eisenberg): hence it is welfarist (determined by the set of feasible utility profiles), unique, continuous, and easy to compute. We show that the competitive division of a mixed manna is still welfarist. If the zero utility profile is Pareto dominated, the competitive profile is strictly positive and still uniquely maximizes the product of utilities. If the zero profile is unfeasible (for instance, if all items are bads), the competitive profiles are strictly negative and are the critical points of the product of disutilities on the efficiency frontier. The latter allows for multiple competitive utility profiles, from which no single-valued selection can be continuous or resource monotonic. Thus the implementation of competitive fairness under linear preferences in interactive platforms like SPLIDDIT will be more difficult when the manna contains bads that overwhelm the goods.
Envy-freeness up to one good (EF1) is a well-studied fairness notion for indivisible goods that addresses pairwise envy by the removal of at most one good. In the worst case, each pair of agents might require the (hypothetical) removal of a different good, resulting in a weak aggregate guarantee. We study allocations that are nearly envy-free in aggregate, and define a novel fairness notion based on information withholding. Under this notion, an agent can withhold (or hide) some of the goods in its bundle and reveal the remaining goods to the other agents. We observe that in practice, envy-freeness can be achieved by withholding only a small number of goods overall. We show that finding allocations that withhold an optimal number of goods is computationally hard even for highly restricted classes of valuations. In contrast to the worst-case results, our experiments on synthetic and real-world preference data show that existing algorithms for finding EF1 allocations withhold a close-to-optimal amount of information.
We study fair allocation of indivisible chores (i.e., items with non-positive value) among agents with additive valuations. An allocation is deemed fair if it is (approximately) equitable, which means that the disutilities of the agents are (approximately) equal. Our main theoretical contribution is to show that there always exists an allocation that is simultaneously equitable up to one chore (EQ1) and Pareto optimal (PO), and to provide a pseudopolynomial-time algorithm for computing such an allocation. In addition, we observe that the Leximin solution---which is known to satisfy a strong form of approximate equitability in the goods setting---fails to satisfy even EQ1 for chores. It does, however, satisfy a novel fairness notion that we call equitability up to any duplicated chore. Our experiments on synthetic as well as real-world data obtained from the Spliddit website reveal that the algorithms considered in our work satisfy approximate fairness and efficiency properties significantly more often than the algorithm currently deployed on Spliddit.
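The EQ1 condition itself is straightforward to verify. Below is a minimal checker under additive disutilities; the input convention (one list of per-chore disutilities per agent) is our own assumption for illustration.

```python
def is_eq1_chores(disutilities):
    """Check 'equitable up to one chore' (EQ1) for a chore allocation.

    disutilities[i] lists the (positive) disutilities agent i incurs,
    one entry per chore in i's own bundle. EQ1 requires that each agent's
    total disutility, after hypothetically removing some single chore
    from her own bundle, is at most every other agent's total disutility.
    """
    totals = [sum(d) for d in disutilities]
    for i, chores in enumerate(disutilities):
        # Removing the most burdensome chore is the best single removal.
        reduced = totals[i] - (max(chores) if chores else 0)
        if any(reduced > totals[j] for j in range(len(totals)) if j != i):
            return False
    return True

print(is_eq1_chores([[5, 1], [3]]))  # True: 6 - 5 = 1 <= 3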
We initiate the work on fair and strategyproof allocation of indivisible chores. The fairness concept we consider in this paper is maxmin share (MMS) fairness. We consider three previously studied models of information elicited from the agents: the ordinal model, the cardinal model, and the public ranking model in which the ordinal preferences are publicly known. We present both positive and negative results on the level of MMS approximation that can be guaranteed if we require the algorithm to be strategyproof. Our results uncover some interesting contrasts between the approximation ratios achieved for chores versus goods.
A set of objects, some goods and some bads, is to be divided fairly among agents with different tastes, modeled by additive utility functions. If the objects cannot be shared, so that each of them must be entirely allocated to a single agent, then a fair division may not exist. What is the smallest number of objects that must be shared between two or more agents in order to attain a fair division? We focus on Pareto-optimal, envy-free and/or proportional allocations. We show that, for a generic instance of the problem --- all instances except a zero-measure set of degenerate instances --- a fair and Pareto-optimal division with the smallest possible number of shared objects can be found in polynomial time, assuming that the number of agents is fixed. The problem becomes computationally hard for degenerate instances, where the agents' valuations are aligned for many objects.
The stable marriage problem and its extensions have been extensively studied, with much of the work in the literature assuming that agents fully know their own preferences over alternatives. This assumption however is not always practical (especially in large markets) and agents usually need to go through some costly deliberation process in order to learn their preferences. In this paper we assume that such deliberations are carried out via interviews, where an interview involves a man and a woman, each of whom learns information about the other as a consequence. If everybody interviews everyone else, then clearly agents can fully learn their preferences. But interviews are costly, and we may wish to minimize their use. It is often the case, especially in practical settings, that due to correlation between agents' preferences, it is unnecessary for all potential interviews to be carried out in order to obtain a stable matching. Thus the problem is to find a good strategy for interviews to be carried out in order to minimize their use, whilst leading to a stable matching. One way to evaluate the performance of an interview strategy is to compare it against a naive algorithm that conducts all interviews. We argue however that a more meaningful comparison would be against an optimal offline algorithm that has access to agents' preference orderings under complete information. We show that, unless P=NP, no offline algorithm can compute the optimal interview strategy in polynomial time. If we are additionally aiming for a particular stable matching (perhaps one with certain desirable properties), we provide restricted settings under which efficient optimal offline algorithms exist.
We introduce Flexible Representative Democracy (FRD), a novel hybrid of Representative Democracy (RD) and Direct Democracy (DD), in which voters can alter the issue-dependent weights of a set of elected representatives. In line with the literature on Interactive Democracy, our model allows the voters to actively determine the degree to which the system is direct versus representative. However, unlike Liquid Democracy, FRD uses strictly non-transitive delegations, making delegation cycles impossible, preserving privacy and anonymity, and maintaining a fixed set of accountable elected representatives. We present FRD and analyze it using a computational approach with issues that are independent, binary, and symmetric; we compare the outcomes of various democratic systems using Direct Democracy with majority voting and full participation as an ideal baseline. We find through theoretical and empirical analysis that FRD can yield significant improvements over RD for emulating DD with full participation.
The Hospitals / Residents problem with Couples (HRC) models the allocation of intending junior doctors to hospitals where couples are allowed to submit joint preference lists over pairs of (typically geographically close) hospitals. It is known that a stable matching need not exist, so we consider MIN BP HRC, the problem of finding a matching that admits the minimum number of blocking pairs (i.e., is "as stable as possible"). We show that this problem is NP-hard and difficult to approximate even in the highly restricted case that each couple finds only one hospital pair acceptable. However, if we further assume that the preference list of each single resident and hospital is of length at most 2, we give a polynomial-time algorithm for this case. We then present the first Integer Programming (IP) and Constraint Programming (CP) models for MIN BP HRC. Finally, we discuss an empirical evaluation of these models applied to randomly-generated instances of MIN BP HRC. We find that on average, the CP model is about 1.15 times faster than the IP model, and when presolving is applied to the CP model, it is on average 8.14 times faster. We further observe that the number of blocking pairs admitted by a solution is very small, i.e., usually at most 1, and never more than 2, for the (28,000) instances considered.
We present a new model that describes the process of electing a group of representatives (e.g., a parliament) for a group of voters. In this model, called the voting committee model, the elected group of representatives runs a number of ballots to make final decisions regarding various issues. The satisfaction of voters comes from the final decisions made by the elected committee. Our results suggest that depending on a decision system used by the committee to make these final decisions, different multi-winner election rules are most suitable for electing the committee. Furthermore, we show that if we allow not only a committee, but also an election rule used to make final decisions, to depend on the voters' preferences, we can obtain an even better representation of the voters.
We initiate the study of indivisible chore allocation for agents with asymmetric shares. The fairness concepts we focus on are the weighted natural generalizations of the maxmin share: WMMS fairness and OWMMS fairness. We first highlight the fact that commonly-used algorithms that work well for allocation of goods to asymmetric agents, and even for chores to symmetric agents, do not provide good approximations for allocation of chores to asymmetric agents under WMMS. As a consequence, we present a novel polynomial-time constant-approximation algorithm, via a linear program, for OWMMS. For two special cases, the binary valuation case and the 2-agent case, we provide exact or better constant-approximation algorithms.
In distortion-based analysis of social choice rules over metric spaces, voters and candidates are jointly embedded in a metric space. Voters rank candidates by non-decreasing distance. The mechanism, receiving only this ordinal (comparison) information, must select a candidate approximately minimizing the sum of distances from all voters to the chosen candidate. It is known that while the Copeland rule and related rules guarantee distortion at most 5, the distortion of many other standard voting rules, such as Plurality, Veto, or k-approval, grows unboundedly in the number n of candidates. An advantage of Plurality, Veto, or k-approval with small k is that they require less communication from the voters; all deterministic social choice rules known to achieve constant distortion require voters to transmit their complete rankings of all candidates. This motivates our study of the tradeoff between the distortion and the amount of communication in deterministic social choice rules. We show that any one-round deterministic voting mechanism in which each voter communicates only the candidates she ranks in a given set of k positions must have distortion at least (2n-k)/k; we give a mechanism achieving an upper bound of O(n/k), which matches the lower bound up to a constant. For more general communication-bounded voting mechanisms, in which each voter communicates b bits of information about her ranking, we show a slightly weaker lower bound of Ω(n/b) on the distortion. For randomized mechanisms, Random Dictatorship achieves expected distortion strictly smaller than 3, almost matching a lower bound of 3 − 2/n for any randomized mechanism that only receives each voter's top choice. We close this gap by giving a simple randomized social choice rule which only uses each voter's first choice, and achieves expected distortion 3 − 2/n.
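Random Dictatorship, as described, needs only each voter's top choice. A toy sketch with a hypothetical one-dimensional metric (our illustration, not the paper's improved rule):

```python
import random

def random_dictatorship(top_choices, rng=random):
    """Elect the top choice of a uniformly random voter.
    Only first choices are communicated, matching the low-communication
    setting discussed in the abstract."""
    return rng.choice(top_choices)

# Hypothetical embedding: voters and candidates as points on a line;
# each voter's top choice is her nearest candidate.
voters = [0.0, 0.1, 0.2, 0.9]
candidates = [0.15, 1.0]
tops = [min(candidates, key=lambda c: abs(c - v)) for v in voters]
winner = random_dictatorship(tops, random.Random(0))
print(winner, sum(abs(winner - v) for v in voters))  # winner, social cost
```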
In contrast to the classical cake-cutting problem (how to fairly divide a desirable object), "chore division" is the problem of how to divide an undesirable object. We develop the first explicit algorithm for envy-free chore division among N people, a counterpart to the N-person envy-free cake-division solution of Brams-Taylor (1995). This is accomplished by exploiting a notion of "irrevocable advantage" for chores. We discuss the differences between cake-cutting and chore division and additional problems encountered in chore division.
We propose a novel and flexible rank-breaking-then-composite-marginal-likelihood (RBCML) framework for learning random utility models (RUMs), which include the Plackett-Luce model. We characterize conditions for the objective function of RBCML to be strictly log-concave by proving that strict log-concavity is preserved under convolution and marginalization. We characterize necessary and sufficient conditions for RBCML to satisfy consistency and asymptotic normality. Experiments on synthetic data show that RBCML for Gaussian RUMs achieves better statistical efficiency and computational efficiency than the state-of-the-art algorithm, and our RBCML for the Plackett-Luce model provides flexible tradeoffs between running time and statistical efficiency.
We consider a fair division setting in which m indivisible items are to be allocated among n agents, where the agents have additive utilities and the agents' utilities for individual items are independently sampled from a distribution. Previous work has shown that an envy-free allocation is likely to exist when m = Ω(n log n) but not when m = n + o(n), and left open the question of determining where the phase transition from non-existence to existence occurs. We show that, surprisingly, there is in fact no universal point of transition; instead, the transition is governed by the divisibility relation between m and n. On the one hand, if m is divisible by n, an envy-free allocation exists with high probability as long as m ≥ 2n. On the other hand, if m is not "almost" divisible by n, an envy-free allocation is unlikely to exist even when m = Θ(n log n / log log n).
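This phase transition is easy to probe empirically. The Monte Carlo sketch below is our illustration with hypothetical parameters; it uses round robin merely as a cheap candidate allocation, not as the paper's proof technique.

```python
import numpy as np

def round_robin(values):
    """Agents take turns picking their favorite remaining item."""
    n, m = values.shape
    remaining, bundles = set(range(m)), [[] for _ in range(n)]
    for turn in range(m):
        i = turn % n
        g = max(remaining, key=lambda g: values[i, g])
        bundles[i].append(g)
        remaining.remove(g)
    return bundles

def is_envy_free(values, bundles):
    totals = [[values[i, bundles[j]].sum() for j in range(len(bundles))]
              for i in range(values.shape[0])]
    return all(row[i] >= max(row) for i, row in enumerate(totals))

# Hypothetical experiment: n agents, m = 2n, i.i.d. uniform utilities.
rng = np.random.default_rng(0)
n, m, trials = 5, 10, 200
hits = sum(is_envy_free(v, round_robin(v))
           for v in rng.uniform(size=(trials, n, m)))
print(hits / trials)  # fraction of trials where round robin is envy-free
```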
We study a natural generalization of stable matching to the maximum weight stable matching problem, and we obtain a combinatorial polynomial time algorithm for it by reducing it to the problem of finding a maximum weight ideal cut in a DAG. We give the first polynomial time algorithm for the latter problem; this algorithm is also combinatorial. The combinatorial nature of our algorithms not only means that they are efficient but also that they enable us to obtain additional structural and algorithmic results: We show that the set, $\cal M'$, of maximum weight stable matchings forms a sublattice $\cal L'$ of the lattice $\cal L$ of all stable matchings. We give an efficient algorithm for finding boy-optimal and girl-optimal matchings in $\cal M'$. We generalize the notion of rotation, a central structural notion in the context of the stable matching problem, to meta-rotation. Just as rotations help traverse the lattice of all stable matchings, meta-rotations help traverse the sublattice over $\cal M'$.
There is a heterogeneous resource that contains both good parts and bad parts, for example, a cake with some parts burnt, a land-estate with some parts heavily taxed, or a chore with some parts fun to do. The resource has to be divided fairly among $n$ agents with different preferences, each of whom has a personal value-density function on the resource. The value-density functions can accept any real value --- positive, negative or zero. Each agent should receive a connected piece and no agent should envy another agent. We prove that such a division exists for 3 agents and present preliminary positive results for larger numbers of agents.
This thesis was submitted as partial fulfillment of the requirements for the Masters Degree in the Department of Computer Science, Bar-Ilan University. The final step in getting an Israeli M.D. is performing a year-long internship in one of the hospitals in Israel. Internships are decided upon by a lottery, which is known as The Internship Lottery. In 2014 we redesigned the lottery, replacing it with a more efficient one. The new method is based on calculating a tentative lottery, in which each student has some probability of getting to each hospital. Then a computer program trades between the students, where a trade is performed only if it is beneficial to both sides. This trade creates surplus, which translates to more students getting one of their top choices. The average student improved his place by $0.91$ seats. The new method can improve the welfare of medical graduates by giving them more probability to get to one of their top choices. It can be applied in internship markets in other countries as well. This thesis presents the market, the redesign process, and the new mechanism which is now in use. There are two main lessons that we have learned from this market. The first is the Do No Harm principle, which states that (almost) all participants should prefer the new mechanism to the old one. The second is that new approaches need to be used when dealing with two-body problems in object assignment. We focus on the second lesson, and study two-body problems in the context of the assignment problem. We show that decomposing stochastic assignment matrices to deterministic allocations is NP-hard in the presence of couples, and present a polynomial time algorithm with the optimal worst case guarantee. We also study the performance of our algorithm on real-world and on simulated data.
We present the first constant-factor approximation algorithm for maximizing the Nash social welfare when allocating indivisible items to agents with budget-additive valuation functions. Budget-additive valuations represent an important class of submodular functions. They have attracted a lot of research interest in recent years due to many interesting applications. For every $\varepsilon > 0$, our algorithm obtains a $(2.404 + \varepsilon)$-approximation in time polynomial in the input size and $1/\varepsilon$. Our algorithm relies on rounding an approximate equilibrium in a linear Fisher market where sellers have earning limits (upper bounds on the amount of money they want to earn) and buyers have utility limits (upper bounds on the amount of utility they want to achieve). In contrast to markets with either earning or utility limits, these markets have not been studied before. They turn out to have fundamentally different properties. Although the existence of equilibria is not guaranteed, we show that the market instances arising from the Nash social welfare problem always have an equilibrium. Further, we show that the set of equilibria is not convex, answering a question of [Cole et al, EC 2017]. We design an FPTAS to compute an approximate equilibrium, a result that may be of independent interest.
In this paper, we consider the problem of fairly dividing $m$ indivisible chores among $n$ agents. The fairness measure we consider here is the maximin share. The previously best known result is that there always exists a $\frac{4}{3}$-approximate maximin share allocation. With a novel algorithm, we can always find an $\frac{11}{9}$-approximate maximin share allocation for any instance. We also discuss how to improve the efficiency of the algorithm and its connection to the job scheduling problem.
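For reference, the maximin share of an agent in the chores setting is the smallest, over all n-way partitions of the chores, of the largest bundle cost. A brute-force sketch of that definition (ours, for illustration only; the paper's approximation algorithm avoids this exponential enumeration):

```python
from itertools import product

def mms_for_chores(costs, n):
    """Brute-force maximin share for chores, for one agent with additive
    costs `costs`: minimize, over all assignments of chores to n bundles,
    the maximum bundle cost. Exponential in the number of chores."""
    best = float("inf")
    for assignment in product(range(n), repeat=len(costs)):
        loads = [0.0] * n
        for chore, bundle in enumerate(assignment):
            loads[bundle] += costs[chore]
        best = min(best, max(loads))
    return best

print(mms_for_chores([3, 3, 2, 2, 2], 2))  # 6.0: e.g. {3,3} vs {2,2,2}
```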
In fair division problems with indivisible goods it is well known that one cannot have any guarantees for the classic fairness notions of envy-freeness and proportionality. As a result, several relaxations have been introduced, most of them in quite recent works. We focus on four such notions, namely envy-freeness up to one good (EF1), envy-freeness up to any good (EFX), maximin share fairness (MMS), and pairwise maximin share fairness (PMMS). Since obtaining these relaxations also turns out to be problematic in several scenarios, approximate versions of them have also been considered. In this work, we investigate further the connections between the four notions mentioned above and their approximate versions. We establish several tight or almost tight results concerning the approximation quality that any of these notions guarantees for the others, providing an almost complete picture of this landscape. Some of our findings reveal interesting and surprising consequences regarding the power of these notions, e.g., PMMS and EFX provide the same worst-case guarantee for MMS, despite PMMS being a strictly stronger notion than EFX. We believe such implications provide further insight on the quality of approximately fair solutions.
We study multi-type housing markets, where there are p ≥ 2 types of items, each agent is initially endowed with one item of each type, and the goal is to design mechanisms without monetary transfer to (re)allocate items to the agents based on their preferences over bundles of items, such that each agent gets one item of each type. In sharp contrast to classical housing markets, previous studies in multi-type housing markets have been hindered by the lack of natural solution concepts, because the strict core might be empty. We break the barrier in the literature by leveraging AI techniques and making natural assumptions on agents' preferences. We show that when agents' preferences are lexicographic, even with different importance orders, the classical top-trading-cycles mechanism can be extended while preserving most of its nice properties. We also investigate the computational complexity of checking whether an allocation is in the strict core and checking whether the strict core is empty. Our results convey an encouragingly positive message: it is possible to design good mechanisms for multi-type housing markets under natural assumptions on preferences.
We introduce a graphical framework for fair division in cake cutting, where comparisons between agents are limited by an underlying network structure. We generalize the classical fairness notions of envy-freeness and proportionality in this graphical setting. An allocation is called envy-free on a graph if no agent envies any of her neighbors' shares, and is called proportional on a graph if every agent values her own share no less than the average among her neighbors, with respect to her own measure. These generalizations enable new research directions in developing simple and efficient algorithms that can produce fair allocations under specific graph structures. On the algorithmic frontier, we first propose a moving-knife algorithm that outputs an envy-free allocation on trees. The algorithm is significantly simpler than the discrete and bounded envy-free algorithm introduced in [Aziz and Mackenzie, 2016] for complete graphs. Next, we give a discrete and bounded algorithm for computing a proportional allocation on the transitive closure of trees, a class of graphs obtained by taking a rooted tree and connecting all its ancestor-descendant pairs.
Finding an envy-free allocation of indivisible resources to agents is a central task in many multiagent systems. Often, non-trivial envy-free allocations do not exist, and finding them can be a computationally hard task. Classic envy-freeness requires that every agent likes the resources allocated to it at least as much as the resources allocated to any other agent. In many situations this assumption can be relaxed since agents often do not even know each other. We enrich the envy-freeness concept by taking into account (directed) social networks of the agents. Thus, we require that every agent likes its own allocation at least as much as those of all its (out)neighbors. This leads to a "local" concept of envy-freeness. We also consider a strong variant where every agent must like its own allocation more than those of all its (out)neighbors. We analyze the classic and the parameterized complexity of finding allocations that are envy-free with respect to one of the variants of our new concept, and that either are complete, are Pareto-efficient, or optimize the utilitarian social welfare. To this end, we study different restrictions of the agents' preferences and of the social network structure. We identify cases that become easier (from $\Sigma^{\mathrm{P}}_2$-hard or $\mathrm{NP}$-hard to $\mathrm{P}$) and cases that become harder (from $\mathrm{P}$ to $\mathrm{NP}$-hard) when comparing classic envy-freeness with our graph-based envy-freeness. Furthermore, we spot cases where graph envy-freeness is easier to decide than strong graph envy-freeness, and vice versa.
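Both graph variants are simple to check once an allocation is given. A minimal checker with additive utilities (our sketch; the input conventions are hypothetical):

```python
def is_graph_envy_free(values, bundles, out_neighbors, strong=False):
    """Check (strong) graph envy-freeness with additive utilities.

    values[i][g]: agent i's value for resource g;
    out_neighbors[i]: the agents that i compares herself against.
    In the strong variant, i must value her own bundle strictly more
    than each out-neighbor's bundle.
    """
    def val(i, bundle):
        return sum(values[i][g] for g in bundle)

    for i, neighbors in enumerate(out_neighbors):
        for j in neighbors:
            own, other = val(i, bundles[i]), val(i, bundles[j])
            envious = (own <= other) if strong else (own < other)
            if envious:
                return False
    return True

# Agent 0 only observes agent 1; agent 1 observes nobody.
print(is_graph_envy_free([[2, 1], [1, 2]], [[0], [1]], [[1], []]))  # True
```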
We propose an analytical framework for studying parallel repetition, a basic product operation for one-round two-player games. In this framework, we consider a relaxation of the value of projection games. We show that this relaxation is multiplicative with respect to parallel repetition and that it provides a good approximation to the game value. Based on this relaxation, we prove the following improved parallel repetition bound: for every projection game G with value at most ρ, the k-fold parallel repetition G⊗k has value at most …
We present new integer linear programming (ILP) models for NP-hard optimisation problems in instances of the Stable Marriage problem with Ties and Incomplete lists (SMTI) and its many-to-one generalisation, the Hospitals / Residents problem with Ties (HRT). These models can be used to efficiently solve these optimisation problems when applied to (i) instances derived from real-world applications, and (ii) larger instances that are randomly-generated. In the case of SMTI, we consider instances arising from the pairing of children with adoptive families, where preferences are obtained from a quality measure of each possible pairing of child to family. In this case we seek a maximum weight stable matching. We present new algorithms for preprocessing instances of SMTI with ties on both sides, as well as new ILP models. Algorithms based on existing state-of-the-art models only solve 6 of our 22 real-world instances within an hour per instance, and our new models solve all 22 instances within a mean runtime of 60 seconds. For HRT, we consider instances derived from the problem of assigning junior doctors to foundation posts in Scottish hospitals. Here we seek a maximum size stable matching. We show how to extend our models for SMTI to the HRT case. For the real instances, we reduce the mean runtime from an average of 144 seconds when using state-of-the-art methods, to 3 seconds when using our new ILP-based algorithms. We also show that our models outperform considerably state-of-the-art models on larger randomly-generated instances of SMTI and HRT.
We consider a multi-agent resource allocation setting that models the assignment of papers to reviewers. A recurring issue in allocation problems is the compatibility of welfare/efficiency and fairness. Given an oracle to find a welfare-achieving allocation, we embed such an oracle into a flexible algorithm called the Constrained Round Robin (CRR) algorithm, which achieves the required welfare level. Our algorithm also allows the system designer to lower the welfare requirements in order to achieve a higher degree of fairness. If the welfare requirement is lowered enough, a strengthening of envy-freeness up to one item is guaranteed. Hence, our algorithm can be viewed as a computationally efficient way to interpolate between welfare and approximate envy-freeness in allocation problems.