
All published works (15)

We study the computational complexity of finding a competitive equilibrium (CE) with chores when agents have linear preferences. CE is one of the most preferred mechanisms for allocating a set of items among agents. CE with equal incomes (CEEI), Fisher, and Arrow-Debreu (exchange) are the fundamental economic models to study allocation problems, where CEEI is a special case of Fisher and Fisher is a special case of exchange. When the items are goods (giving utility), the CE set is convex even in the exchange model, facilitating several combinatorial polynomial-time algorithms (starting with the seminal work of Devanur, Papadimitriou, Saberi and Vazirani) for all of these models. In sharp contrast, when the items are chores (giving disutility), the CE set is known to be non-convex and disconnected even in the CEEI model. Further, no combinatorial algorithms or hardness results are known for these models. In this paper, we give two main results for CE with chores: 1) A combinatorial algorithm to compute a $(1-\varepsilon)$-approximate CEEI in time $\tilde{\mathcal{O}}(n^4m^2 / \varepsilon^2)$, where $n$ is the number of agents and $m$ is the number of chores. 2) PPAD-hardness of finding a $(1-1/\mathit{poly}(n))$-approximate CE in the exchange model under a sufficient condition. To the best of our knowledge, these results show the first separation between the CEEI and exchange models when agents have linear preferences, assuming PPAD $\neq$ P. Finally, we show that our new insight implies a straightforward proof of the existence of an allocation that is both envy-free up to one chore (EF1) and Pareto optimal (PO) in the discrete setting when agents have factored bivalued preferences.
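As a concrete reference for the equilibrium notion used in this abstract, the following minimal Python sketch checks the exact CEEI conditions for chores with linear disutilities (every chore fully assigned, equal earnings across agents, and each agent doing only chores of minimum disutility per unit of payment); the $(1-\varepsilon)$-approximate version relaxes these conditions multiplicatively. All names are illustrative, and this is a definition-level check, not the paper's algorithm.

def is_ceei_with_chores(d, p, x, tol=1e-9):
    # d[i][j]: disutility of agent i per unit of chore j; p[j] > 0: payment per unit of
    # chore j; x[i][j]: fraction of chore j done by agent i (each chore normalized to 1 unit).
    n, m = len(d), len(p)
    # 1) every chore is fully assigned
    for j in range(m):
        if abs(sum(x[i][j] for i in range(n)) - 1.0) > tol:
            return False
    # 2) all agents earn the same amount (equal incomes)
    earnings = [sum(p[j] * x[i][j] for j in range(m)) for i in range(n)]
    if max(earnings) - min(earnings) > tol:
        return False
    # 3) each agent only performs chores minimizing disutility per unit of payment
    for i in range(n):
        best = min(d[i][j] / p[j] for j in range(m))
        if any(x[i][j] > tol and d[i][j] / p[j] > best + tol for j in range(m)):
            return False
    return True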
We study the problem of allocating a set of indivisible goods among a set of agents with 2-value additive valuations. In this setting, each good is valued either $1$ or $\frac{p}{q}$, for some fixed co-prime numbers $p,q \in \mathbb{N}$ such that $1\leq q < p$, and the value of a bundle is the sum of the values of the contained goods. Our goal is to find an allocation which maximizes the Nash social welfare (NSW), i.e., the geometric mean of the valuations of the agents. In this work, we give a complete characterization of the polynomial-time tractability of NSW maximization that solely depends on the value of $q$. We start by providing a rather simple polynomial-time algorithm to find a maximum NSW allocation when the valuation functions are integral, that is, $q=1$. We then exploit more involved techniques to get an algorithm producing a maximum NSW allocation for the half-integral case, that is, $q=2$. Finally, we show that such an improvement cannot be further extended to the case $q=3$; indeed, we prove that it is NP-hard to compute an allocation with maximum NSW whenever $q\geq 3$.
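For readers unfamiliar with the objective, the brute-force Python sketch below makes the Nash social welfare (geometric mean of agent valuations) concrete; it enumerates all allocations and is exponential in the number of goods, so it only illustrates the objective and is unrelated to the polynomial-time algorithms discussed above. All names are ours.

from itertools import product
from math import prod

def max_nsw(values):
    # values[i][j]: additive value of good j to agent i
    n, m = len(values), len(values[0])
    best, best_alloc = -1.0, None
    for assignment in product(range(n), repeat=m):  # good j goes to agent assignment[j]
        utilities = [sum(values[i][j] for j in range(m) if assignment[j] == i)
                     for i in range(n)]
        nsw = prod(utilities) ** (1.0 / n)  # geometric mean of the agents' valuations
        if nsw > best:
            best, best_alloc = nsw, assignment
    return best, best_alloc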
We study the problem of allocating a set of indivisible goods among agents with 2-value additive valuations. Our goal is to find an allocation with maximum Nash social welfare, i.e., the geometric mean of the valuations of the agents. We give a polynomial-time algorithm to find a Nash social welfare maximizing allocation when the valuation functions are integrally 2-valued, i.e., each agent has a value either $1$ or $p$ for each good, for some positive integer $p$. We then extend our algorithm to find a better approximation factor for general 2-value instances.
We study the problem of allocating a set of indivisible goods among agents with subadditive valuations in a fair and efficient manner. Envy-Freeness up to any good (EFX) is the most compelling notion of fairness in the context of indivisible goods. Although the existence of EFX is not known beyond the simple case of two agents with subadditive valuations, some good approximations of EFX are known to exist, namely 1/2-EFX allocation and EFX allocations with bounded charity. Nash welfare (the geometric mean of agents' valuations) is one of the most commonly used measures of efficiency. In the case of additive valuations, an allocation that maximizes Nash welfare also satisfies fairness properties like Envy-Free up to one good (EF1). Although there is substantial work on approximating Nash welfare when agents have additive valuations, very little is known when agents have subadditive valuations. In this paper, we design a polynomial-time algorithm that outputs an allocation that satisfies either of the two approximations of EFX as well as achieves an $O(n)$ approximation to the Nash welfare. Our result also improves the current best-known approximations of $O(n \log n)$ and $O(m)$ to Nash welfare when agents have submodular and subadditive valuations, respectively. Furthermore, our technique also gives an $O(n)$ approximation to a family of welfare measures, the $p$-mean of valuations for $p \in (-\infty, 1]$, thereby also matching asymptotically the current best approximation ratio for special cases like $p = -\infty$ while also retaining the remarkable fairness properties.
We study the fair division problem of allocating a mixed manna under additively separable piecewise linear concave (SPLC) utilities. A mixed manna contains goods that everyone likes and bads that everyone dislikes, as well as items that some like and others dislike. The seminal work of Bogomolnaia et al. [14] argues why allocating a mixed manna is genuinely more complicated than a good or a bad manna, and why competitive equilibrium is the best mechanism. They also provide the existence of equilibrium and establish its peculiar properties (e.g., non-convex and disconnected set of equilibria even under linear utilities), but leave the problem of computing an equilibrium open. Our main result is a simplex-like algorithm based on Lemke's scheme for computing a competitive allocation of a mixed manna under SPLC utilities, a strict generalization of linear. Experimental results on randomly generated instances suggest that our algorithm will be fast in practice. The problem is known to be PPAD-hard for the case of good manna [24], and we also show a similar result for the case of bad manna. Given these PPAD-hardness results, designing such an algorithm is the only non-enumerative option known. Our algorithm also yields several new structural properties as simple corollaries. We obtain a (constructive) proof of existence for a far more general setting, membership of the problem in PPAD, a rational-valued solution, and an odd number of solutions property. The last property also settles the conjecture of [14] in the affirmative.
The fair division of indivisible goods is a very well-studied problem. The goal of this problem is to distribute $m$ goods to $n$ agents in a "fair" manner, where every agent has a valuation for each subset of goods. We assume monotone valuations. Envy-freeness is the most extensively studied notion of fairness. However, envy-free allocations do not always exist when goods are indivisible. The notion of fairness we consider here is "envy-freeness up to any good," EFX, where no agent envies another agent after the removal of any single good from the other agent's bundle. It is not known if such an allocation always exists. We show there is always a partition of the set of goods into $n+1$ subsets $(X_1,\ldots,X_n,P)$, where for $i \in [n]$, $X_i$ is the bundle allocated to agent $i$ and the set $P$ is unallocated (or donated to charity) such that we have (1) envy-freeness up to any good, (2) no agent values $P$ higher than her own bundle, and (3) fewer than $n$ goods go to charity, i.e., $|P| < n$ (typically $m \gg n$). Our proof is constructive and leads to a pseudopolynomial time algorithm to find such an allocation. When agents have additive valuations and $|P|$ is large (i.e., when $|P|$ is close to $n$), our allocation also has a good maximin share (MMS) guarantee. Moreover, a minor variant of our algorithm also shows the existence of an allocation that is 4/7 groupwise maximin share (GMMS): this is a notion of fairness stronger than MMS. This improves upon the current best bound of 1/2 known for an approximate GMMS allocation. (Very recently and independently, Amanatidis, Ntokos, and Markakis [Theoret. Comput. Sci., 841 (2020), pp. 94--109], also showed the existence of a 4/7-GMMS allocation.)
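The three guarantees above can be verified directly. The sketch below assumes additive valuations for simplicity (the paper itself handles general monotone valuations); the bundles are given as sets of goods, and all function and variable names are ours.

def satisfies_charity_guarantees(values, bundles, charity):
    # values[i][g]: additive value of good g to agent i; bundles: list of n sets of goods;
    # charity: set of unallocated goods.
    n = len(bundles)
    val = lambda i, S: sum(values[i][g] for g in S)
    for i in range(n):
        for k in range(n):
            if k == i:
                continue
            # (1) EFX: i must not envy k after the removal of ANY single good from k's bundle
            for g in bundles[k]:
                if val(i, bundles[i]) < val(i, bundles[k] - {g}):
                    return False
        # (2) no agent values the charity pile above her own bundle
        if val(i, bundles[i]) < val(i, charity):
            return False
    # (3) fewer than n goods are donated
    return len(charity) < n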
A Little Charity Guarantees Almost Envy-Freeness. Bhaskar Ray Chaudhury, Telikepalli Kavitha, Kurt Mehlhorn, and Alkmini Sgouritsa. In Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 2658-2672. DOI: https://doi.org/10.1137/1.9781611975994.162. Fair division of indivisible goods is a very well-studied problem. The goal of this problem is to distribute m goods to n agents in a "fair" manner, where every agent has a valuation for each subset of goods. We assume general valuations. Envy-freeness is the most extensively studied notion of fairness. However, envy-free allocations do not always exist when goods are indivisible. The notion of fairness we consider here is "envy-freeness up to any good" (EFX), where no agent envies another agent after the removal of any single good from the other agent's bundle. It is not known if such an allocation always exists even when n = 3. We show there is always a partition of the set of goods into n + 1 subsets (X1, …, Xn, P) where, for i ∈ [n], Xi is the bundle allocated to agent i and the set P is unallocated (or donated to charity), such that we have: (1) envy-freeness up to any good, (2) no agent values P higher than her own bundle, and (3) fewer than n goods go to charity, i.e., |P| < n (typically m ≫ n). Our proof is constructive. When agents have additive valuations and |P| is large (i.e., when |P| is close to n), our allocation also has a good maximin share (MMS) guarantee. Moreover, a minor variant of our algorithm also shows the existence of an allocation which is 4/7 groupwise maximin share (GMMS): this is a notion of fairness stronger than MMS. This improves upon the current best bound of 1/2 known for an approximate GMMS allocation.
Fair division of indivisible goods is a very well-studied problem. The goal of this problem is to distribute $m$ goods to $n$ agents in a fair manner, where every agent has a valuation for each subset of goods. We assume general valuations. Envy-freeness is the most extensively studied notion of fairness. However, envy-free allocations do not always exist when goods are indivisible. The notion of fairness we consider here is "envy-freeness up to any good" (EFX), where no agent envies another agent after the removal of any single good from the other agent's bundle. It is not known if such an allocation always exists even when $n=3$. We show there is always a partition of the set of goods into $n+1$ subsets $(X_1,\ldots,X_n,P)$ where for $i \in [n]$, $X_i$ is the bundle allocated to agent $i$ and the set $P$ is unallocated (or donated to charity) such that we have: 1) envy-freeness up to any good, 2) no agent values $P$ higher than her own bundle, and 3) fewer than $n$ goods go to charity, i.e., $|P| < n$ (typically $m \gg n$). Our proof is constructive. When agents have additive valuations and $|P|$ is large (i.e., when $|P|$ is close to $n$), our allocation also has a good maximin share (MMS) guarantee. Moreover, a minor variant of our algorithm also shows the existence of an allocation which is $4/7$ groupwise maximin share (GMMS): this is a notion of fairness stronger than MMS. This improves upon the current best bound of $1/2$ known for an approximate GMMS allocation.
Fair division of indivisible goods is a very well-studied problem. The goal of this problem is to distribute $m$ goods to $n$ agents in a "fair" manner, where every agent has a valuation for each subset of goods. We assume general valuations. Envy-freeness is the most extensively studied notion of fairness. However, envy-free allocations do not always exist when goods are indivisible. The notion of fairness we consider here is "envy-freeness up to any good" (EFX), where no agent envies another agent after the removal of any single good from the other agent's bundle. It is not known if such an allocation always exists even when $n=3$. We show there is always a partition of the set of goods into $n+1$ subsets $(X_1,\ldots,X_n,P)$ where for $i \in [n]$, $X_i$ is the bundle allocated to agent $i$ and the set $P$ is unallocated (or donated to charity) such that we have: 1) envy-freeness up to any good, 2) no agent values $P$ higher than her own bundle, and 3) fewer than $n$ goods go to charity, i.e., $|P| < n$ (typically $m \gg n$). Our proof is constructive. When agents have additive valuations and $|P|$ is large (i.e., when $|P|$ is close to $n$), our allocation also has a good maximin share (MMS) guarantee. Moreover, a minor variant of our algorithm also shows the existence of an allocation which is $4/7$ groupwise maximin share (GMMS): this is a notion of fairness stronger than MMS. This improves upon the current best bound of $1/2$ known for an approximate GMMS allocation.
We present a combinatorial algorithm for determining the market clearing prices of a general linear Arrow-Debreu market, where every agent can own multiple goods. The existing combinatorial algorithms for linear Arrow-Debreu markets consider the case where each agent can own all of one good only. We present an $\tilde{\mathcal{O}}((n+m)^7 \log^3(UW))$ algorithm where $n$, $m$, $U$ and $W$ refer to the number of agents, the number of goods, the maximal integral utility and the maximum quantity of any good in the market, respectively. The algorithm refines the iterative algorithm of Duan, Garg and Mehlhorn using several new ideas. We also identify hard instances for the existing combinatorial algorithms for linear Arrow-Debreu markets. In particular, we find instances where the ratio of the maximum to the minimum equilibrium price of a good is $U^{\Omega(n)}$ and the number of iterations required by the existing iterative combinatorial algorithms of Duan and Mehlhorn, and of Duan, Garg, and Mehlhorn, is high. Our instances also separate the two algorithms.
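For context, the following sketch spells out what "market clearing prices" means for a linear exchange (Arrow-Debreu) market: every good's demand equals its supply, every agent exhausts the income generated by her endowment, and money is spent only on goods of maximum utility per unit of money. It is a definition-level check with illustrative names, not the combinatorial algorithm of the paper.

def clears_market(u, w, p, x, tol=1e-9):
    # u[i][j]: utility per unit of good j to agent i; w[i][j]: endowment of agent i;
    # p[j] > 0: price of good j; x[i][j]: amount of good j bought by agent i.
    n, m = len(u), len(p)
    supply = [sum(w[i][j] for i in range(n)) for j in range(m)]
    for j in range(m):  # demand equals supply for every good
        if abs(sum(x[i][j] for i in range(n)) - supply[j]) > tol:
            return False
    for i in range(n):
        income = sum(p[j] * w[i][j] for j in range(m))
        spent = sum(p[j] * x[i][j] for j in range(m))
        if abs(income - spent) > tol:                      # budget is exhausted
            return False
        best = max(u[i][j] / p[j] for j in range(m))       # maximum utility per unit of money
        if any(x[i][j] > tol and u[i][j] / p[j] < best - tol for j in range(m)):
            return False
    return True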
We present a combinatorial algorithm for determining the market clearing prices of a general linear Arrow-Debreu market, where every agent can own multiple goods. The existing combinatorial algorithms for linear Arrow-Debreu markets consider the case where each agent can own all of one good only. We present an $\tilde{\mathcal{O}}((n+m)^7 \log^3(UW))$ algorithm where $n$, $m$, $U$ and $W$ refer to the number of agents, the number of goods, the maximal integral utility and the maximum quantity of any good in the market, respectively. The algorithm refines the iterative algorithm of Duan, Garg and Mehlhorn using several new ideas. We also identify hard instances for the existing combinatorial algorithms for linear Arrow-Debreu markets. In particular, we find instances where the ratio of the maximum to the minimum equilibrium price of a good is $U^{\Omega(n)}$ and the number of iterations required by the existing iterative combinatorial algorithms of Duan and Mehlhorn, and of Duan, Garg, and Mehlhorn, is high. Our instances also separate the two algorithms.
In the classic polyline simplification problem we want to replace a given polygonal curve $P$, consisting of $n$ vertices, by a subsequence $P'$ of $k$ vertices from $P$ such that the polygonal curves $P$ and $P'$ are as close as possible. Closeness is usually measured using the Hausdorff or Frechet distance. These distance measures can be applied globally, i.e., to the whole curves $P$ and $P'$, or locally, i.e., to each simplified subcurve and the line segment that it was replaced with separately (and then taking the maximum). This gives rise to four problem variants: Global-Hausdorff (known to be NP-hard), Local-Hausdorff (in time $O(n^3)$), Global-Frechet (in time $O(k n^5)$), and Local-Frechet (in time $O(n^3)$). Our contribution is as follows. - Cubic time for all variants: For Global-Frechet we design an algorithm running in time $O(n^3)$. This shows that all three problems (Local-Hausdorff, Local-Frechet, and Global-Frechet) can be solved in cubic time. All these algorithms work over a general metric space such as $(\mathbb{R}^d,L_p)$, but the hidden constant depends on $p$ and (linearly) on $d$. - Cubic conditional lower bound: We provide evidence that in high dimensions cubic time is essentially optimal for all three problems (Local-Hausdorff, Local-Frechet, and Global-Frechet). Specifically, improving the cubic time to $O(n^{3-\epsilon} \textrm{poly}(d))$ for polyline simplification over $(\mathbb{R}^d,L_p)$ for $p = 1$ would violate plausible conjectures. We obtain similar results for all $p \in [1,\infty), p \ne 2$. In total, in high dimensions and over general $L_p$-norms we resolve the complexity of polyline simplification with respect to Local-Hausdorff, Local-Frechet, and Global-Frechet, by providing new algorithms and conditional lower bounds.
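To make the "local" error measure concrete, here is a small cubic-time Python sketch of Local-Hausdorff simplification in the plane: it keeps the fewest vertices (including both endpoints) such that every skipped vertex lies within a tolerance eps of the shortcut segment replacing it. This is an illustrative baseline with names of our choosing, not the paper's Global-Frechet algorithm.

import math

def _dist_point_segment(q, a, b):
    # Euclidean distance from point q to segment ab
    ax, ay = a; bx, by = b; qx, qy = q
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(qx - ax, qy - ay)
    t = max(0.0, min(1.0, ((qx - ax) * dx + (qy - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(qx - (ax + t * dx), qy - (ay + t * dy))

def simplify_local_hausdorff(P, eps):
    n = len(P)
    # ok[i][j]: the shortcut from vertex i to vertex j has local Hausdorff error <= eps
    ok = [[all(_dist_point_segment(P[k], P[i], P[j]) <= eps for k in range(i + 1, j))
           for j in range(n)] for i in range(n)]
    dp = [math.inf] * n          # dp[j]: fewest kept vertices on P[0..j] ending at j
    dp[0] = 1
    for j in range(1, n):
        dp[j] = min(dp[i] + 1 for i in range(j) if ok[i][j])
    return dp[n - 1]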
We study sketching and streaming algorithms for the Longest Common Subsequence problem (LCS) on strings of small alphabet size $|\Sigma|$. For the problem of deciding whether the LCS of strings $x,y$ has length at least $L$, we obtain a sketch size and streaming space usage of $\mathcal{O}(L^{|\Sigma| - 1} \log L)$. We also prove matching unconditional lower bounds. As an application, we study a variant of LCS where each alphabet symbol is equipped with a weight that is given as input, and the task is to compute a common subsequence of maximum total weight. Using our sketching algorithm, we obtain an $\mathcal{O}(\min\{nm, n + m^{|\Sigma|}\})$-time algorithm for this problem, on strings $x,y$ of length $n,m$, with $n \ge m$. We prove optimality of this running time up to lower order factors, assuming the Strong Exponential Time Hypothesis.
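As a point of comparison for the bounds above, the classic quadratic-time dynamic program for the LCS length is sketched below; the paper's contribution is the much smaller sketch and streaming space for small alphabets, which this textbook baseline does not attempt.

def lcs_length(x, y):
    # Standard O(n*m) dynamic program: dp[i][j] = LCS length of x[:i] and y[:j]
    n, m = len(x), len(y)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]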
We present a combinatorial algorithm for determining the market clearing prices of a general linear Arrow-Debreu market, where every agent can own multiple goods. The existing combinatorial algorithms for linear Arrow-Debreu markets consider the case where each agent can own all of one good only. We present an $\tilde{\mathcal{O}}((n+m)^7 \log^3(UW))$ algorithm where $n$, $m$, $U$ and $W$ refer to the number of agents, the number of goods, the maximal integral utility and the maximum quantity of any good in the market, respectively. The algorithm refines the iterative algorithm of Duan, Garg and Mehlhorn using several new ideas. We also identify hard instances for the existing combinatorial algorithms for linear Arrow-Debreu markets. In particular, we find instances where the ratio of the maximum to the minimum equilibrium price of a good is $U^{\Omega(n)}$ and the number of iterations required by the existing iterative combinatorial algorithms of Duan and Mehlhorn, and of Duan, Garg, and Mehlhorn, is high. Our instances also separate the two algorithms.

Commonly Cited References

We study the problem of allocating a set of indivisible goods among a set of agents in a fair and efficient manner. An allocation is said to be fair if it is envy-free up to one good (EF1), which means that each agent prefers its own bundle over the bundle of any other agent up to the removal of one good. In addition, an allocation is deemed efficient if it satisfies Pareto efficiency. While each of these well-studied properties is easy to achieve separately, achieving them together is far from obvious. Recently, Caragiannis et al. (2016) established the surprising result that when agents have additive valuations for the goods, there always exists an allocation that simultaneously satisfies these two seemingly incompatible properties. Specifically, they showed that an allocation that maximizes the Nash social welfare objective is both EF1 and Pareto efficient. However, the problem of maximizing Nash social welfare is NP-hard. As a result, this approach does not provide an efficient algorithm for finding a fair and efficient allocation. In this paper, we bypass this barrier, and develop a pseudopolynomial time algorithm for finding allocations that are EF1 and Pareto efficient; in particular, when the valuations are bounded, our algorithm finds such an allocation in polynomial time. Furthermore, we establish a stronger existence result compared to Caragiannis et al. (2016): For additive valuations, there always exists an allocation that is EF1 and fractionally Pareto efficient. Another key contribution of our work is to show that our algorithm provides a polynomial-time 1.45-approximation to the Nash social welfare objective. This improves upon the best known approximation ratio for this problem (namely, the 2-approximation algorithm of Cole et al., 2017), and also matches the lower bound on the integrality gap of the convex program of Cole et al. (2017). Unlike many of the existing approaches, our algorithm is completely combinatorial, and relies on constructing integral Fisher markets wherein specific equilibria are not only efficient, but also fair.
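The EF1 condition used above is easy to verify for additive valuations; the small checker below (our own illustrative code, not part of the paper) tests whether every pairwise envy disappears after removing some single good from the envied bundle.

def is_ef1(values, bundles):
    # values[i][g]: additive value of good g to agent i; bundles: list of sets of goods
    n = len(bundles)
    val = lambda i, S: sum(values[i][g] for g in S)
    for i in range(n):
        for k in range(n):
            if i == k or val(i, bundles[i]) >= val(i, bundles[k]):
                continue
            # i envies k: check whether dropping SOME single good from k's bundle removes the envy
            if not any(val(i, bundles[i]) >= val(i, bundles[k] - {g}) for g in bundles[k]):
                return False
    return True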
We study the problem of allocating indivisible goods among n agents in a fair manner. For this problem, maximin share (MMS) is a well-studied solution concept which provides a fairness threshold. Specifically, maximin share is defined as the minimum utility that an agent can guarantee for herself when asked to partition the set of goods into n bundles such that the remaining (n-1) agents pick their bundles adversarially. An allocation is deemed to be fair if every agent gets a bundle whose valuation is at least her maximin share. Even though maximin shares provide a natural benchmark for fairness, it has its own drawbacks and, in particular, it is not sufficient to rule out unsatisfactory allocations. Motivated by these considerations, in this work we define a stronger notion of fairness, called groupwise maximin share guarantee (GMMS). In GMMS, we require that the maximin share guarantee is achieved not just with respect to the grand bundle, but also among all the subgroups of agents. Hence, this solution concept strengthens MMS and provides an ex-post fairness guarantee. We show that in specific settings, GMMS allocations always exist. We also establish the existence of approximate GMMS allocations under additive valuations, and develop a polynomial-time algorithm to find such allocations. Moreover, we establish a scale of fairness wherein we show that GMMS implies approximate envy freeness. Finally, we empirically demonstrate the existence of GMMS allocations in a large set of randomly generated instances. For the same set of instances, we additionally show that our algorithm achieves an approximation factor better than the established, worst-case bound.
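A direct, exponential-time way to compute an agent's maximin share, matching the definition above, is sketched below; it is only meant to make the benchmark concrete on small examples, and all names are ours.

from itertools import product

def maximin_share(value, goods, n):
    # value[g]: additive value of good g to this agent; partition the goods into n bundles
    # and keep the least valuable one; the maximin share is the best such worst-bundle value.
    best = float("-inf")
    for assignment in product(range(n), repeat=len(goods)):
        bundle_vals = [0] * n
        for g, b in zip(goods, assignment):
            bundle_vals[b] += value[g]
        best = max(best, min(bundle_vals))
    return best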
We consider the problem of dividing indivisible goods fairly among n agents who have additive and submodular valuations for the goods. Our fairness guarantees are in terms of the maximin share, that is defined to be the maximum value that an agent can ensure for herself, if she were to partition the goods into n bundles, and then receive a minimum valued bundle. Since maximin fair allocations (i.e., allocations in which each agent gets at least her maximin share) do not always exist, prior work has focussed on approximation results that aim to find allocations in which the value of the bundle allocated to each agent is (multiplicatively) as close to her maximin share as possible. In particular, Procaccia and Wang (2014) along with Amanatidis et al. (2015) have shown that under additive valuations a 2/3-approximate maximin fair allocation always exists and can be found in polynomial time. We complement these results by developing a simple and efficient algorithm that achieves the same approximation guarantee.
We study the problem of allocating $m$ items to $n$ agents subject to maximizing the Nash social welfare (NSW) objective. We write a novel convex programming relaxation for this problem, and we show that a simple randomized rounding algorithm gives a $1/e$ approximation factor of the objective. Our main technical contribution is an extension of Gurvits's lower bound on the coefficient of the square-free monomial of a degree $m$-homogeneous stable polynomial on $m$ variables to all homogeneous polynomials. We use this extension to analyze the expected welfare of the allocation returned by our randomized rounding algorithm.
Several fairness concepts have been proposed recently in attempts to approximate envy-freeness in settings with indivisible goods. Among them, the concept of envy-freeness up to any item (EFX) is arguably the closest to envy-freeness. Unfortunately, EFX allocations are not known to exist except in a few special cases. We make significant progress in this direction. We show that for every instance with additive valuations, there is an EFX allocation of a subset of items with a Nash welfare that is at least half of the maximum possible Nash welfare for the original set of items. That is, after donating some items to a charity, one can distribute the remaining items in a fair way with high efficiency. This bound is proved to be best possible. Our proof is constructive and highlights the importance of maximum Nash welfare allocation. Starting with such an allocation, our algorithm decides which items to donate and redistributes the initial bundles to the agents, eventually obtaining an allocation with the claimed efficiency guarantee. The application of our algorithm to large markets, where the valuation of an agent for every item is relatively small, yields EFX with almost optimal Nash welfare. To the best of our knowledge, this is the first use of large market assumptions in the fair division literature. We also show that our algorithm can be modified to compute, in polynomial time, EFX allocations that approximate optimal Nash welfare within a factor of at most $2\rho$, using a $\rho$-approximate allocation on input instead of the maximum Nash welfare one.
We study the problem of allocating a set of indivisible goods among agents with subadditive valuations in a fair and efficient manner. Envy-Freeness up to any good (EFX) is the most compelling notion of fairness in the context of indivisible goods. Although the existence of EFX is not known beyond the simple case of two agents with subadditive valuations, some good approximations of EFX are known to exist, namely 1/2-EFX allocation and EFX allocations with bounded charity. Nash welfare (the geometric mean of agents' valuations) is one of the most commonly used measures of efficiency. In the case of additive valuations, an allocation that maximizes Nash welfare also satisfies fairness properties like Envy-Free up to one good (EF1). Although there is substantial work on approximating Nash welfare when agents have additive valuations, very little is known when agents have subadditive valuations. In this paper, we design a polynomial-time algorithm that outputs an allocation that satisfies either of the two approximations of EFX as well as achieves an $O(n)$ approximation to the Nash welfare. Our result also improves the current best-known approximations of $O(n \log n)$ and $O(m)$ to Nash welfare when agents have submodular and subadditive valuations, respectively. Furthermore, our technique also gives an $O(n)$ approximation to a family of welfare measures, the $p$-mean of valuations for $p \in (-\infty, 1]$, thereby also matching asymptotically the current best approximation ratio for special cases like $p = -\infty$ while also retaining the remarkable fairness properties.
A Little Charity Guarantees Almost Envy-Freeness. Bhaskar Ray Chaudhury, Telikepalli Kavitha, Kurt Mehlhorn, and Alkmini Sgouritsa. In Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 2658-2672. DOI: https://doi.org/10.1137/1.9781611975994.162. Fair division of indivisible goods is a very well-studied problem. The goal of this problem is to distribute m goods to n agents in a "fair" manner, where every agent has a valuation for each subset of goods. We assume general valuations. Envy-freeness is the most extensively studied notion of fairness. However, envy-free allocations do not always exist when goods are indivisible. The notion of fairness we consider here is "envy-freeness up to any good" (EFX), where no agent envies another agent after the removal of any single good from the other agent's bundle. It is not known if such an allocation always exists even when n = 3. We show there is always a partition of the set of goods into n + 1 subsets (X1, …, Xn, P) where, for i ∈ [n], Xi is the bundle allocated to agent i and the set P is unallocated (or donated to charity), such that we have: (1) envy-freeness up to any good, (2) no agent values P higher than her own bundle, and (3) fewer than n goods go to charity, i.e., |P| < n (typically m ≫ n). Our proof is constructive. When agents have additive valuations and |P| is large (i.e., when |P| is close to n), our allocation also has a good maximin share (MMS) guarantee. Moreover, a minor variant of our algorithm also shows the existence of an allocation which is 4/7 groupwise maximin share (GMMS): this is a notion of fairness stronger than MMS. This improves upon the current best bound of 1/2 known for an approximate GMMS allocation.
Approximating Nash Social Welfare under Submodular Valuations through (Un)Matchings. Jugal Garg, Pooja Kulkarni, and Rucha Kulkarni. In Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 2673-2687. DOI: https://doi.org/10.1137/1.9781611975994.163. We study the problem of approximating maximum Nash social welfare (NSW) when allocating m indivisible items among n asymmetric agents with submodular valuations. The NSW is a well-established notion of fairness and efficiency, defined as the weighted geometric mean of agents' valuations. For special cases of the problem with symmetric agents and additive(-like) valuation functions, approximation algorithms have been designed using approaches customized for these specific settings, and they fail to extend to more general settings. Hence, no approximation algorithm with factor independent of m is known either for asymmetric agents with additive valuations or for symmetric agents beyond additive(-like) valuations. In this paper, we extend our understanding of the NSW problem to far more general settings. Our main contribution is two approximation algorithms for asymmetric agents with additive and submodular valuations respectively. Both algorithms are simple to understand and involve non-trivial modifications of a greedy repeated matchings approach. Allocations of high valued items are done separately by un-matching certain items and re-matching them, by processes that are different in both algorithms. We show that these approaches achieve approximation factors of O(n) and O(n log n) for the additive and submodular cases respectively, which is independent of the number of items. For additive valuations, our algorithm outputs an allocation that also achieves the fairness property of envy-free up to one item (EF1). Furthermore, we show that the NSW problem under submodular valuations is strictly harder than all currently known settings, with an e/(e-1) factor of the hardness of approximation, even for constantly many agents. For this case, we provide a different approximation algorithm that achieves a factor of e/(e-1), hence resolving it completely.
The goal of fair division is to distribute resources among competing players in a “fair” way. Envy-freeness is the most extensively studied fairness notion in fair division. Envy-free allocations do not always exist with indivisible goods, motivating the study of relaxed versions of envy-freeness. We study the envy-freeness up to any good (EFX) property, which states that no player prefers the bundle of another player following the removal of any single good, and prove the first general results about this property. We use the leximin solution to show existence of EFX allocations in several contexts, sometimes in conjunction with Pareto optimality. For two players with valuations obeying a mild assumption, one of these results provides stronger guarantees than the currently deployed algorithm on Spliddit, a popular fair division website. Unfortunately, finding the leximin solution can require exponential time. We show that this is necessary by proving an exponential lower bound on the number of value queries needed to identify an EFX allocation, even for two players with identical valuations. We consider both additive and more general valuations, and our work suggests that there is a rich landscape of problems to explore in the fair division of indivisible goods with different classes of player valuations.
In fair division problems with indivisible goods it is well known that one cannot have any guarantees for the classic fairness notions of envy-freeness and proportionality. As a result, several relaxations have been introduced, most of which in quite recent works. We focus on four such notions, namely envy-freeness up to one good (EF1), envy-freeness up to any good (EFX), maximin share fairness (MMS), and pairwise maximin share fairness (PMMS). Since obtaining these relaxations also turns out to be problematic in several scenarios, approximate versions of them have also been considered. In this work, we investigate further the connections between the four notions mentioned above and their approximate versions. We establish several tight or almost tight results concerning the approximation quality that any of these notions guarantees for the others, providing an almost complete picture of this landscape. Some of our findings reveal interesting and surprising consequences regarding the power of these notions, e.g., PMMS and EFX provide the same worst-case guarantee for MMS, despite PMMS being a strictly stronger notion than EFX. We believe such implications provide further insight on the quality of approximately fair solutions.
We consider the task of assigning indivisible goods to a set of agents in a fair manner. Our notion of fairness is Nash social welfare, i.e., the goal is to maximize the geometric mean of the utilities of the agents. Each good comes in multiple items or copies, and the utility of an agent diminishes as it receives more items of the same good. The utility of a bundle of items for an agent is the sum of the utilities of the items in the bundle. Each agent has a utility cap beyond which he does not value additional items. We give a polynomial time approximation algorithm that maximizes Nash social welfare up to a factor of $e^{1/e} \approx 1.445$. The computed allocation is Pareto-optimal and approximates envy-freeness up to one item up to a factor of $2 + \varepsilon$.
We study the problem of fair allocation of m indivisible items among n agents with additive valuations using the popular notion of maximin share (MMS) as our measure of fairness. An MMS allocation provides each agent a bundle worth at least her maximin share. While it is known that such an allocation need not exist [5, 7], a series of remarkable works [1-3, 6, 7] provided 2/3 approximation algorithms in which each agent receives a bundle worth at least 2/3 times her maximin share. More recently, [4] showed the existence of 3/4 MMS allocations and a PTAS to find a 3/4 - ε MMS allocation. Most of the previous works utilize intricate algorithms and require agents' approximate MMS values, which are computationally expensive to obtain.
Several relaxations of envy-freeness, tailored to fair division in settings with indivisible goods, have been introduced within the last decade. Due to the lack of general existence results for most of these concepts, great attention has been paid to establishing approximation guarantees. In this work, we propose a simple algorithm that is universally fair in the sense that it returns allocations that have good approximation guarantees with respect to four such fairness notions at once. In particular, this is the first algorithm achieving a (φ−1)-approximation of envy-freeness up to any good (EFX) and a 2/(φ+2)-approximation of groupwise maximin share fairness (GMMS), where φ is the golden ratio. The best known approximation factor, in polynomial time, for either one of these fairness notions prior to this work was 1/2. Moreover, the returned allocation achieves envy-freeness up to one good (EF1) and a 2/3-approximation of pairwise maximin share fairness (PMMS). While EFX is our primary focus, we also exhibit how to fine-tune our algorithm and improve further the guarantees for GMMS or PMMS. Finally, we show that GMMS—and thus PMMS and EFX—allocations always exist when the number of goods does not exceed the number of agents by more than two.
We study the problem of distributing a set of indivisible items among agents with additive valuations in a $\mathit{fair}$ manner. The fairness notion under consideration is Envy-freeness up to any item (EFX). Despite significant efforts by many researchers for several years, the existence of EFX allocations has not been settled beyond the simple case of two agents. In this paper, we show constructively that an EFX allocation always exists for three agents. Furthermore, we falsify the conjecture by Caragiannis et al. by showing an instance with three agents for which there is a partial EFX allocation (some items are not allocated) with higher Nash welfare than that of any complete EFX allocation.
The goal of fair division is to distribute resources among competing players in a “fair” way. Envy-freeness is the most extensively studied fairness notion in fair division. Envy-free allocations do not always exist with indivisible goods, motivating the study of relaxed versions of envy-freeness. We study the envy-freeness up to any good (EFX) property, which states that no player prefers the bundle of another player following the removal of any single good, and prove the first general results about this property. We use the leximin solution to show existence of EFX allocations in several contexts, sometimes in conjunction with Pareto optimality. For two players with valuations obeying a mild assumption, one of these results provides stronger guarantees than the currently deployed algorithm on Spliddit, a popular fair division website. Unfortunately, finding the leximin solution can require exponential time. We show that this is necessary by proving an exponential lower bound on the number of value queries needed to identify an EFX allocation, even for two players with identical valuations. We consider both additive and more general valuations, and our work suggests that there is a rich landscape of problems to explore in the fair division of indivisible goods with different classes of player valuations.
We study the problem of allocating indivisible goods among agents that have an identical subadditive valuation over the goods. The extent of fairness and efficiency of allocations is measured by the generalized means of the values that the allocations generate among the agents. Parameterized by an exponent term p, generalized-mean welfares encompass multiple well-studied objectives, such as social welfare, Nash social welfare, and egalitarian welfare. We establish that, under identical subadditive valuations and in the demand oracle model, one can efficiently find a single allocation that approximates the optimal generalized-mean welfare—to within a factor of 40—uniformly for all p ∈ (−∞,1]. Hence, by way of a constant-factor approximation algorithm, we obtain novel results for maximizing Nash social welfare and egalitarian welfare for identical subadditive valuations.
In fair division, equitability dictates that each participant receives the same level of utility. In this work, we study equitable allocations of indivisible goods among agents with additive valuations. While prior work has studied (approximate) equitability in isolation, we consider equitability in conjunction with other well-studied notions of fairness and economic efficiency. We show that the Leximin algorithm produces an allocation that satisfies equitability up to any good and Pareto optimality. We also give a novel algorithm that guarantees Pareto optimality and equitability up to one good in pseudopolynomial time. Our experiments on real-world preference data reveal that approximate envy-freeness, approximate equitability, and Pareto optimality can often be achieved simultaneously.
We develop polynomial-time algorithms for the fair and efficient allocation of indivisible goods among $n$ agents that have subadditive valuations over the goods. We first consider the Nash social welfare as our objective and design a polynomial-time algorithm that, in the value oracle model, finds an $8n$-approximation to the Nash optimal allocation. Subadditive valuations include XOS (fractionally subadditive) and submodular valuations as special cases. Our result, even for the special case of submodular valuations, improves upon the previously best known $O(n \log n)$-approximation ratio of Garg et al. (2020). More generally, we study maximization of $p$-mean welfare. The $p$-mean welfare is parameterized by an exponent term $p \in (-\infty, 1]$ and encompasses a range of welfare functions, such as social welfare ($p = 1$), Nash social welfare ($p \to 0$), and egalitarian welfare ($p \to -\infty$). We give an algorithm that, for subadditive valuations and any given $p \in (-\infty, 1]$, computes (in the value oracle model and in polynomial time) an allocation with $p$-mean welfare at least $\frac{1}{8n}$ times the optimal. Further, we show that our approximation guarantees are essentially tight for XOS and, hence, subadditive valuations. We adapt a result of Dobzinski et al. (2010) to show that, under XOS valuations, an $O \left(n^{1-\varepsilon} \right)$ approximation for the $p$-mean welfare for any $p \in (-\infty,1]$ (including the Nash social welfare) requires exponentially many value queries; here, $\varepsilon>0$ is any fixed constant.
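The $p$-mean welfare family referred to above can be written as a single function; the sketch below covers the limits $p \to 0$ (Nash social welfare, the geometric mean) and $p \to -\infty$ (egalitarian welfare, the minimum) explicitly. It assumes strictly positive utilities and is an illustrative helper, not code from the paper.

import math

def p_mean_welfare(utilities, p):
    n = len(utilities)
    if p == 0:                              # limit p -> 0: geometric mean (Nash social welfare)
        return math.exp(sum(math.log(u) for u in utilities) / n)
    if p == float("-inf"):                  # limit p -> -inf: egalitarian welfare
        return min(utilities)
    return (sum(u ** p for u in utilities) / n) ** (1.0 / p)   # includes p = 1: average welfare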
We consider the classic problem of fairly allocating indivisible goods among agents with additive valuation functions and explore the connection between two prominent fairness notions: maximum Nash welfare (MNW) and envy-freeness up to any good (EFX). We establish that an MNW allocation is always EFX as long as there are at most two possible values for the goods, whereas this implication is no longer true for three or more distinct values. As a notable consequence, this proves the existence of EFX allocations for these restricted valuation functions. While the efficient computation of an MNW allocation for two possible values remains an open problem, we present a novel algorithm for directly constructing EFX allocations in this setting. Finally, we study the question of whether an MNW allocation implies any EFX guarantee for general additive valuation functions under a natural new interpretation of approximate EFX allocations.
We present a 380-approximation algorithm for the Nash Social Welfare problem with submodular valuations. Our algorithm builds on and extends a recent constant-factor approximation for Rado valuations [15].
We study the problem of computing maximin share guarantees, a recently introduced fairness notion. Given a set of $n$ agents and a set of goods, the maximin share of a single agent is the best that she can guarantee to herself, if she would be allowed to partition the goods in any way she prefers, into $n$ bundles, and then receive her least desirable bundle. The objective then in our problem is to find a partition, so that each agent is guaranteed her maximin share. In settings with indivisible goods, such allocations are not guaranteed to exist, so we resort to approximation algorithms. Our main result is a $2/3$-approximation, that runs in polynomial time for any number of agents. This improves upon the algorithm of Procaccia and Wang, which also produces a $2/3$-approximation but runs in polynomial time only for a constant number of agents. To achieve this, we redesign certain parts of their algorithm. Furthermore, motivated by the apparent difficulty, both theoretically and experimentally, in finding lower bounds on the existence of approximate solutions, we undertake a probabilistic analysis. We prove that in randomly generated instances, with high probability there exists a maximin share allocation. This can be seen as a justification of the experimental evidence reported in relevant works. Finally, we provide further positive results for two special cases that arise from previous works. The first one is the intriguing case of $3$ agents, for which it is already known that exact maximin share allocations do not always exist (contrary to the case of $2$ agents). We provide a $7/8$-approximation algorithm, improving the previously known result of $3/4$. The second case is when all item values belong to $\{0, 1, 2\}$, extending the $\{0, 1\}$ setting studied in Bouveret and Lemaître. We obtain an exact algorithm for any number of agents in this case.
We consider the problem of maximizing the Nash social welfare when allocating a set G of indivisible goods to a set N of agents. We study instances in which all agents have 2-value additive valuations: the value of every agent for every good is either p or q, where p and q are integers. In terms of approximation, we present positive and negative results for general p and q. We show that our algorithm obtains an approximation ratio of at most 1.0345. Moreover, we prove that the problem is APX-hard, with a lower bound of 1.000015 achieved at p/q = 4/5.
We develop polynomial-time algorithms for the fair and efficient allocation of indivisible goods among $n$ agents that have subadditive valuations over the goods. We first consider the Nash social welfare as our objective and design a polynomial-time algorithm that, in the value oracle model, finds an $8n$-approximation to the Nash optimal allocation. Subadditive valuations include XOS (fractionally subadditive) and submodular valuations as special cases. Our result, even for the special case of submodular valuations, improves upon the previously best known $O(n \log n)$-approximation ratio of Garg et al. (2020). More generally, we study maximization of $p$-mean welfare. The $p$-mean welfare is parameterized by an exponent term $p \in (-\infty, 1]$ and encompasses a range of welfare functions, such as social welfare ($p = 1$), Nash social welfare ($p \to 0$), and egalitarian welfare ($p \to -\infty$). We give an algorithm that, for subadditive valuations and any given $p \in (-\infty, 1]$, computes (in the value oracle model and in polynomial time) an allocation with $p$-mean welfare at least $\frac{1}{8n}$ times the optimal. Further, we show that our approximation guarantees are essentially tight for XOS and, hence, subadditive valuations. We adapt a result of Dobzinski et al. (2010) to show that, under XOS valuations, an $O \left(n^{1-\varepsilon} \right)$ approximation for the $p$-mean welfare for any $p \in (-\infty,1]$ (including the Nash social welfare) requires exponentially many value queries; here, $\varepsilon>0$ is any fixed constant.
We study Fisher markets and the problem of maximizing the Nash social welfare (NSW), and show several closely related new results. In particular, we obtain: -- A new integer program for the NSW maximization problem whose fractional relaxation has a bounded integrality gap. In contrast, the natural integer program has an unbounded integrality gap. -- An improved, and tight, factor 2 analysis of the algorithm of [7]; in turn showing that the integrality gap of the above relaxation is at most 2. The approximation factor shown by [7] was $2e^{1/e} \approx 2.89$. -- A lower bound of $e^{1/e}\approx 1.44$ on the integrality gap of this relaxation. -- New convex programs for natural generalizations of linear Fisher markets and proofs that these markets admit rational equilibria. These results were obtained by establishing connections between previously known disparate results, and they help uncover their mathematical underpinnings. We show a formal connection between the convex programs of Eisenberg and Gale and that of Shmyrev, namely that their duals are equivalent up to a change of variables. Both programs capture equilibria of linear Fisher markets. By adding suitable constraints to Shmyrev's program, we obtain a convex program that captures equilibria of the spending-restricted market model defined by [7] in the context of the NSW maximization problem. Further, adding certain integral constraints to this program we get the integer program for the NSW mentioned above. The basic tool we use is convex programming duality. In the special case of convex programs with linear constraints (but convex objectives), we show a particularly simple way of obtaining dual programs, putting it almost at par with linear program duality. This simple way of finding duals has been used subsequently for many other applications.
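For orientation, the Eisenberg-Gale convex program mentioned above can be stated in its standard textbook form for a linear Fisher market with budgets $b_i$, per-unit utilities $u_{ij}$, and allocation variables $x_{ij}$ (the notation here is ours, not the abstract's): maximize $\sum_i b_i \log\big(\sum_j u_{ij} x_{ij}\big)$ subject to $\sum_i x_{ij} \le 1$ for every good $j$ and $x_{ij} \ge 0$ for all $i, j$. The abstract's observation is that the dual of this program and the dual of Shmyrev's program, which is written in terms of prices and money flows, coincide up to a change of variables.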
We consider the problem of allocating a set of indivisible items to players with private preferences in an efficient and fair way. We focus on valuations that have dichotomous marginals, in which the added value of any item to a set is either 0 or 1, and aim to design truthful allocation mechanisms (without money) that maximize welfare and are fair. For the case that players have submodular valuations with dichotomous marginals, we design such a deterministic truthful allocation mechanism. The allocation output by our mechanism is Lorenz dominating, and consequently satisfies many desired fairness properties, such as being envy-free up to any item (EFX) and maximizing the Nash social welfare (NSW). We then show that our mechanism with random priorities is envy-free ex-ante, while having all the above properties ex-post. Furthermore, we present several impossibility results precluding similar results for the larger class of XOS valuations.
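As a small illustration of the Lorenz dominance property referenced above (not code from the paper), one can compare two utility profiles by their sorted prefix sums:

```python
from itertools import accumulate

def lorenz_dominates(u, v):
    """True if utility profile u Lorenz-dominates v: after sorting both in
    nondecreasing order, every prefix sum of u is at least the corresponding
    prefix sum of v (profiles are assumed to have the same length)."""
    pu = list(accumulate(sorted(u)))
    pv = list(accumulate(sorted(v)))
    return all(a >= b for a, b in zip(pu, pv))

# A Lorenz-dominating allocation is "as equal as possible": (2, 2, 2)
# dominates (3, 2, 1) even though both profiles have the same total utility.
print(lorenz_dominates([2, 2, 2], [3, 2, 1]))  # True
print(lorenz_dominates([3, 2, 1], [2, 2, 2]))  # False
```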
Regular expressions constitute a fundamental notion in formal language theory and are frequently used in computer science to define search patterns. A classic algorithm for regular expression matching and membership testing constructs and simulates a non-deterministic finite automaton corresponding to the expression, resulting in an $O(mn)$ running time (where $m$ is the length of the pattern and $n$ is the length of the text). This running time can be improved slightly (by a polylogarithmic factor), but no significantly faster solutions are known. At the same time, much faster algorithms exist for various special cases of regular expressions, including dictionary matching, wildcard matching, subset matching, the word break problem, etc. In this paper, we show that the complexity of regular expression matching can be characterized based on its {\em depth} (when interpreted as a formula). Our results hold for expressions involving concatenation, OR, Kleene star and Kleene plus. For regular expressions of depth two (involving any combination of the above operators), we show the following dichotomy: matching and membership testing can be solved in near-linear time, except for concatenations of stars, which cannot be solved in strongly sub-quadratic time assuming the Strong Exponential Time Hypothesis (SETH). For regular expressions of depth three the picture is more complex. Nevertheless, we show that all problems can either be solved in strongly sub-quadratic time, or cannot be solved in strongly sub-quadratic time assuming SETH. An intriguing special case of membership testing involves regular expressions of the form a star of an OR of concatenations, e.g., $[a|ab|bc]^*$. This corresponds to the so-called {\em word break} problem, for which a dynamic programming algorithm with a runtime of (roughly) $O(n\sqrt{m})$ is known. We show that the latter bound is not tight and improve the runtime to $O(nm^{0.44\ldots})$.
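For context, the word break problem mentioned above admits a simple textbook dynamic program, sketched below in Python; the paper's $O(nm^{0.44\ldots})$ algorithm is substantially more involved and is not reproduced here:

```python
def word_break(text, dictionary):
    """Textbook dynamic program for the word break problem: can `text` be
    split into a concatenation of dictionary words?  reachable[i] is True
    iff the prefix text[:i] can be segmented."""
    words = set(dictionary)
    max_len = max((len(w) for w in words), default=0)
    n = len(text)
    reachable = [False] * (n + 1)
    reachable[0] = True
    for i in range(1, n + 1):
        for l in range(1, min(i, max_len) + 1):
            if reachable[i - l] and text[i - l:i] in words:
                reachable[i] = True
                break
    return reachable[n]

print(word_break("abcabbc", ["a", "ab", "bc"]))  # True: a|bc|ab|bc
```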
We show that in the document exchange problem, where Alice holds $x \in \{0,1\}^n$ and Bob holds $y \in \{0,1\}^n$, Alice can send Bob a message of size $O(K(\log^2 K+\log n))$ bits such that Bob can recover $x$ using the message and his input $y$ if the edit distance between $x$ and $y$ is no more than $K$, and output "error" otherwise. Both the encoding and decoding can be done in time $\tilde{O}(n+\mathsf{poly}(K))$. This result significantly improves the previous communication bounds under polynomial encoding/decoding time. We also show that in the referee model, where Alice and Bob hold $x$ and $y$ respectively, they can compute sketches of $x$ and $y$ of sizes $\mathsf{poly}(K \log n)$ bits (the encoding) and send them to the referee, who can then compute the edit distance between $x$ and $y$ together with all the edit operations if the edit distance is no more than $K$, and output "error" otherwise (the decoding). To the best of our knowledge, this is the first result for sketching edit distance using $\mathsf{poly}(K \log n)$ bits. Moreover, the encoding phase of our sketching algorithm can be performed by scanning the input string in one pass. Thus our sketching algorithm also implies the first streaming algorithm for computing edit distance and all the edits exactly using $\mathsf{poly}(K \log n)$ bits of space.
The edit distance is a way of quantifying how similar two strings are to one another by counting the minimum number of character insertions, deletions, and substitutions required to transform one string into the other. In this paper we study the computational problem of computing the edit distance between a pair of strings where their distance is bounded by a parameter $k\ll n$. We present two streaming algorithms for computing edit distance: one runs in time $O(n+k^2)$ and the other in time $n+O(k^3)$. By writing $n+O(k^3)$ we want to emphasize that the number of operations per input symbol is a small constant. In particular, the running time does not depend on the alphabet size, and the algorithm should be easy to implement. Previously, a streaming algorithm with running time $O(n+k^4)$ was given in the paper by the current authors (STOC'16). The best off-line algorithm runs in time $O(n+k^2)$ (Landau et al., 1998), which is known to be optimal under the Strong Exponential Time Hypothesis.
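The bounded-distance regime can be illustrated with a simple band-limited dynamic program that decides whether the edit distance is at most $k$ in $O(nk)$ time (a sketch for intuition only; the $O(n+k^2)$ and $n+O(k^3)$ streaming algorithms above are different and more involved):

```python
def edit_distance_at_most_k(x, y, k):
    """Band-limited dynamic program: returns the edit distance of x and y if it
    is at most k, and None otherwise.  Only cells with |i - j| <= k are filled,
    so the running time is O((len(x) + len(y)) * k) instead of O(len(x)*len(y))."""
    n, m = len(x), len(y)
    if abs(n - m) > k:
        return None
    INF = k + 1
    prev = {j: j for j in range(0, min(m, k) + 1)}    # DP row i = 0
    for i in range(1, n + 1):
        curr = {}
        for j in range(max(0, i - k), min(m, i + k) + 1):
            if j == 0:
                best = i
            else:
                best = INF
                if j - 1 in prev:                     # substitution / match
                    best = min(best, prev[j - 1] + (x[i - 1] != y[j - 1]))
                if j in prev:                         # delete from x
                    best = min(best, prev[j] + 1)
                if j - 1 in curr:                     # insert into x
                    best = min(best, curr[j - 1] + 1)
            curr[j] = best
        prev = curr
    d = prev.get(m, INF)
    return d if d <= k else None

print(edit_distance_at_most_k("kitten", "sitting", 3))  # 3
print(edit_distance_at_most_k("kitten", "sitting", 2))  # None
```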
The constrained LCS problem asks one to find a longest common subsequence of two input strings $A$ and $B$ with some constraints. The STR-IC-LCS problem is a variant of the constrained LCS problem, where the solution must include a given constraint string $C$ as a substring. Given two strings $A$ and $B$ of respective lengths $M$ and $N$, and a constraint string $C$ of length at most $\min\{M, N\}$, the best known algorithm for the STR-IC-LCS problem, proposed by Deorowicz~({\em Inf. Process. Lett.}, 11:423--426, 2012), runs in $O(MN)$ time. In this work, we present an $O(mN + nM)$-time solution to the STR-IC-LCS problem, where $m$ and $n$ denote the sizes of the run-length encodings of $A$ and $B$, respectively. Since $m \leq M$ and $n \leq N$ always hold, our algorithm is always as fast as Deorowicz's algorithm, and is faster when input strings are compressible via RLE.
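To fix the problem definition, here is a plain (uncompressed) reference implementation of STR-IC-LCS based on forward and backward LCS tables plus greedy embeddings of $C$; this quadratic-style sketch is illustrative and is not the run-length-encoding-based algorithm of the paper:

```python
def lcs_table(a, b):
    """dp[i][j] = length of an LCS of a[:i] and b[:j]."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp

def min_embed_end(s, start, c):
    """Smallest end such that c is a subsequence of s[start:end], or None."""
    k, end = 0, start
    while k < len(c) and end < len(s):
        if s[end] == c[k]:
            k += 1
        end += 1
    return end if k == len(c) else None

def str_ic_lcs_length(a, b, c):
    """Length of a longest common subsequence of a and b that contains c
    as a (contiguous) substring; -1 if no such subsequence exists."""
    pre = lcs_table(a, b)                       # LCS of prefixes
    suf = lcs_table(a[::-1], b[::-1])           # LCS of suffixes (via reversal)
    ends_a = [min_embed_end(a, i, c) for i in range(len(a) + 1)]
    ends_b = [min_embed_end(b, j, c) for j in range(len(b) + 1)]
    best = -1
    for i, ea in enumerate(ends_a):
        if ea is None:
            break
        for j, eb in enumerate(ends_b):
            if eb is None:
                break
            cand = pre[i][j] + len(c) + suf[len(a) - ea][len(b) - eb]
            best = max(best, cand)
    return best

print(str_ic_lcs_length("abab", "baba", "ab"))  # 3, e.g. the subsequence "aba"
```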
We generalize the classic problem of fairly allocating indivisible goods to the problem of fair public decision making, in which a decision must be made on several social issues simultaneously, and, unlike the classic setting, a decision can provide positive utility to multiple players. We extend the popular fairness notion of proportionality (which is not guaranteeable) to our more general setting, and introduce three novel relaxations --- proportionality up to one issue, round robin share, and pessimistic proportional share --- that are also interesting in the classic goods allocation setting. We show that the Maximum Nash Welfare solution, which is known to satisfy appealing fairness properties in the classic setting, satisfies or approximates all three relaxations in our framework. We also provide polynomial time algorithms and hardness results for finding allocations satisfying these axioms, with or without insisting on Pareto optimality.
Fair division is a fundamental problem in various multi-agent settings, where the goal is to divide a set of resources among agents in a fair manner. We study the case where $m$ indivisible items need to be divided among $n$ agents with additive valuations using the popular fairness notion of maximin share (MMS). An MMS allocation provides each agent a bundle worth at least her maximin share. While it is known that such an allocation need not exist, a series of works provided approximation algorithms for a $2/3$-MMS allocation, in which each agent receives a bundle worth at least $2/3$ times her maximin share. More recently, Ghodsi et al. [EC'2018] showed the existence of a $3/4$-MMS allocation and a PTAS to find a $(3/4-\epsilon)$-MMS allocation for any $\epsilon > 0$. Most of the previous works utilize intricate algorithms and require agents' approximate MMS values, which are computationally expensive to obtain. In this paper, we develop a new approach that gives a simple algorithm for showing the existence of a $3/4$-MMS allocation. Furthermore, our approach is powerful enough to be easily extended in two directions: First, we get a strongly polynomial-time algorithm to find a $3/4$-MMS allocation, where we do not need to approximate the MMS values at all. Second, we show that there always exists a $(3/4 + 1/(12n))$-MMS allocation, improving the best previous factor. This improves the approximation guarantee, most notably for small $n$. We note that $3/4$ was the best factor known for $n > 4$.
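To make the maximin share concrete, here is a brute-force computation of a single agent's MMS under additive valuations (exponential time, for tiny instances only; the algorithms above are designed precisely to avoid computing MMS values):

```python
def maximin_share(values, n):
    """Exact maximin share of an agent with additive values `values` when the
    goods must be split into n bundles: the best, over all n-partitions, value
    of the worst bundle.  Exponential time; for illustration on tiny instances."""
    m = len(values)
    best = 0

    def assign(g, bundle_totals):
        nonlocal best
        if g == m:
            best = max(best, min(bundle_totals))
            return
        for b in range(n):
            bundle_totals[b] += values[g]
            assign(g + 1, bundle_totals)
            bundle_totals[b] -= values[g]

    assign(0, [0] * n)
    return best

# 4 goods split into 2 bundles: the agent can guarantee 5, e.g. {4,1} vs {3,2}.
print(maximin_share([4, 3, 2, 1], 2))  # 5
```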
The Frechet distance is a well-studied and very popular measure of similarity of two curves. Many variants and extensions have been studied since Alt and Godau introduced this measure to computational geometry in 1991. Their original algorithm to compute the Frechet distance of two polygonal curves with n vertices has a runtime of O(n^2 log n). More than 20 years later, the state of the art algorithms for most variants still take time more than O(n^2 / log n), but no matching lower bounds are known, not even under reasonable complexity theoretic assumptions. To obtain a conditional lower bound, in this paper we assume the Strong Exponential Time Hypothesis or, more precisely, that there is no O*((2-delta)^N) algorithm for CNF-SAT for any delta > 0. Under this assumption we show that the Frechet distance cannot be computed in strongly subquadratic time, i.e., in time O(n^{2-delta}) for any delta > 0. This means that finding faster algorithms for the Frechet distance is as hard as finding faster CNF-SAT algorithms, and the existence of a strongly subquadratic algorithm can be considered unlikely. Our result holds for both the continuous and the discrete Frechet distance. We extend the main result in various directions. Based on the same assumption we (1) show non-existence of a strongly subquadratic 1.001-approximation, (2) present tight lower bounds in case the numbers of vertices of the two curves are imbalanced, and (3) examine realistic input assumptions (c-packed curves).
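For the discrete variant, the classical quadratic-time dynamic program (in the style of Eiter and Mannila) looks as follows; this is the kind of O(n^2) algorithm whose conditional optimality the result above addresses:

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def discrete_frechet(P, Q):
    """Classical O(|P| * |Q|) dynamic program for the discrete Frechet distance
    of two polygonal curves given as lists of points."""
    n, m = len(P), len(Q)
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = dist(P[i], Q[j])
            if i == 0 and j == 0:
                ca[i][j] = d
            elif i == 0:
                ca[i][j] = max(ca[i][j - 1], d)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][j], d)
            else:
                ca[i][j] = max(min(ca[i - 1][j], ca[i][j - 1], ca[i - 1][j - 1]), d)
    return ca[n - 1][m - 1]

P = [(0, 0), (1, 0), (2, 0)]
Q = [(0, 1), (1, 1), (2, 1)]
print(discrete_frechet(P, Q))  # 1.0: the two curves stay at distance 1 throughout
```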
We consider the problem of allocating indivisible goods fairly among n agents who have additive and submodular valuations for the goods. Our fairness guarantees are in terms of the maximin share, which is defined to be the maximum value that an agent can ensure for herself, if she were to partition the goods into n bundles, and then receive a minimum valued bundle. Since maximin fair allocations (i.e., allocations in which each agent gets at least her maximin share) do not always exist, prior work has focused on approximation results that aim to find allocations in which the value of the bundle allocated to each agent is (multiplicatively) as close to her maximin share as possible. In particular, Procaccia and Wang (2014) along with Amanatidis et al. (2015) have shown that under additive valuations, a 2/3-approximate maximin fair allocation always exists and can be found in polynomial time. We complement these results by developing a simple and efficient algorithm that achieves the same approximation guarantee. Furthermore, we initiate the study of approximate maximin fair division under submodular valuations. Specifically, we show that when the valuations of the agents are nonnegative, monotone, and submodular, then a 0.21-approximate maximin fair allocation is guaranteed to exist. In fact, we show that such an allocation can be efficiently found by using a simple round-robin algorithm. A technical contribution of the article is to analyze the performance of this combinatorial algorithm by employing the concept of multilinear extensions.
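The round-robin procedure referred to above can be sketched as follows (an illustrative value-oracle version, not a verbatim transcription of the article's algorithm):

```python
def round_robin(n_agents, goods, value):
    """Simple round-robin allocation: in a fixed cyclic order, each agent picks
    the remaining good with the largest marginal value for her current bundle.
    `value(i, bundle)` is agent i's value for a set of goods (a value oracle)."""
    bundles = [set() for _ in range(n_agents)]
    remaining = set(goods)
    turn = 0
    while remaining:
        i = turn % n_agents
        base = value(i, bundles[i])
        g = max(remaining, key=lambda x: value(i, bundles[i] | {x}) - base)
        bundles[i].add(g)
        remaining.remove(g)
        turn += 1
    return bundles

# Two agents with additive values over goods 0..3 (rows = agents, columns = goods).
vals = [[8, 5, 2, 1],
        [6, 6, 6, 1]]
additive = lambda i, S: sum(vals[i][g] for g in S)
print(round_robin(2, range(4), additive))  # agents alternate picks, e.g. [{0, 2}, {1, 3}]
```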
We study an online model of fair division designed to capture features of a real world charity problem. We consider two simple mechanisms for this model in which agents simply declare what items they like. We analyse several axiomatic properties of these mechanisms like strategy-proofness and envy-freeness. Finally, we perform a competitive analysis and compute the price of anarchy.
We study the problem of fairly allocating a set of indivisible goods among agents with additive valuations. The extent of fairness of an allocation is measured by its Nash social welfare, which is the geometric mean of the valuations of the agents for their bundles. While the problem of maximizing Nash social welfare is known to be APX-hard in general, we study the effectiveness of simple, greedy algorithms in solving this problem in two interesting special cases. First, we show that a simple, greedy algorithm provides a 1.061-approximation guarantee when agents have identical valuations, even though the problem of maximizing Nash social welfare remains NP-hard for this setting. Second, we show that when agents have binary valuations over the goods, an exact solution (i.e., a Nash optimal allocation) can be found in polynomial time via a greedy algorithm. Our results in the binary setting extend to provide novel, exact algorithms for optimizing Nash social welfare under concave valuations. Notably, for the above-mentioned scenarios, our techniques provide a simple alternative to several of the existing, more sophisticated techniques for this problem, such as constructing equilibria of Fisher markets or using real stable polynomials.
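A sketch of the greedy idea for identical additive valuations (illustrative; consult the paper for the precise algorithm and its 1.061 analysis): process goods in decreasing order of value and give each good to an agent whose current bundle value is smallest.

```python
def greedy_identical_nsw(values, n_agents):
    """Greedy heuristic for identical additive valuations: consider goods in
    decreasing order of value and give each to an agent whose current bundle
    value is smallest.  A sketch of the greedy approach discussed above, not a
    verbatim transcription of the paper's algorithm."""
    totals = [0] * n_agents
    bundles = [[] for _ in range(n_agents)]
    for g, v in sorted(enumerate(values), key=lambda t: -t[1]):
        i = min(range(n_agents), key=lambda a: totals[a])
        totals[i] += v
        bundles[i].append(g)
    return bundles, totals

bundles, totals = greedy_identical_nsw([10, 7, 6, 4, 3], 2)
print(totals)  # [14, 16]: bundles of values {10, 4} and {7, 6, 3}
```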
The goal of fair division is to distribute resources among competing players in a fair way. Envy-freeness is the most extensively studied fairness notion in fair division. Envy-free allocations do not always exist with indivisible goods, motivating the study of relaxed versions of envy-freeness. We study the envy-freeness up to any good (EFX) property, which states that no player prefers the bundle of another player following the removal of any single good, and prove the first general results about this property. We use the leximin solution to show existence of EFX allocations in several contexts, sometimes in conjunction with Pareto optimality. For two players with valuations obeying a mild assumption, one of these results provides stronger guarantees than the currently deployed algorithm on Spliddit, a popular fair division website. Unfortunately, finding the leximin solution can require exponential time. We show that this is necessary by proving an exponential lower bound on the number of value queries needed to identify an EFX allocation, even for two players with identical valuations. We consider both additive and more general valuations, and our work suggests that there is a rich landscape of problems to explore in the fair division of indivisible goods with different classes of player valuations.
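For additive valuations, the EFX condition defined above can be checked directly; the following helper (illustrative, not from the paper) verifies it for a given allocation:

```python
def is_efx(bundles, vals):
    """Check envy-freeness up to any good (EFX) for additive valuations:
    for every pair of agents i, j and every good g in j's bundle, agent i
    values her own bundle at least as much as j's bundle minus g."""
    def v(i, S):
        return sum(vals[i][g] for g in S)
    n = len(bundles)
    for i in range(n):
        for j in range(n):
            if i == j or not bundles[j]:
                continue
            # Removing the good i values least leaves the largest remainder,
            # so checking against that remainder suffices.
            worst_drop = min(vals[i][g] for g in bundles[j])
            if v(i, bundles[i]) < v(i, bundles[j]) - worst_drop:
                return False
    return True

# Two agents, three goods; agent 0 gets {0}, agent 1 gets {1, 2}.
vals = [[6, 4, 3],   # agent 0's values
        [1, 5, 5]]   # agent 1's values
print(is_efx([{0}, {1, 2}], vals))  # True: 6 >= (4+3) - 3 and 10 >= 1 - 1
```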
We study Fisher markets that admit equilibria wherein each good is integrally assigned to some agent. While strong existence and computational guarantees are known for equilibria of Fisher markets with additive valuations (Eisenberg and Gale 1959; Orlin 2010), such equilibria, in general, assign goods fractionally to agents. Hence, Fisher markets are not directly applicable in the context of indivisible goods. In this work we show that one can always bypass this hurdle and, up to a bounded change in agents' budgets, obtain markets that admit an integral equilibrium. We refer to such markets as pure markets and show that, for any given Fisher market (with additive valuations), one can efficiently compute a "near-by" pure market with an accompanying integral equilibrium. Our work on pure markets leads to novel algorithmic results for fair division of indivisible goods. Prior work in discrete fair division has shown that, under additive valuations, there always exist allocations that simultaneously achieve the seemingly incompatible properties of fairness and efficiency (Caragiannis et al. 2016); here fairness refers to envy-freeness up to one good (EF1) and efficiency corresponds to Pareto efficiency. However, polynomial-time algorithms are not known for finding such allocations. Considering relaxations of proportionality and EF1, respectively, as our notions of fairness, we show that fair and Pareto efficient allocations can be computed in strongly polynomial time.