All published works (70)

This work examines the Conditional Approval Framework for elections involving multiple interdependent issues, specifically focusing on the Conditional Minisum Approval Voting Rule. We first conduct a detailed analysis of the computational complexity of this rule, demonstrating that no approach can significantly outperform the brute-force algorithm under common computational complexity assumptions and various natural input restrictions. In response, we propose two practical restrictions (the first in the literature) that make the problem computationally tractable and show that these restrictions are essentially tight. Overall, this work provides a clear picture of the tractability landscape of the problem, contributing to a comprehensive understanding of the complications introduced by conditional ballots and indicating that conditional approval voting can be applied in practice, albeit under specific conditions.
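The rule being analyzed is easy to state even though optimizing it is hard. Below is a minimal brute-force sketch of Conditional Minisum for binary issues, under the simplifying assumption that a voter conditions her approvals for an issue on the outcome of at most one other issue; the paper's ballot format may be more general, and all names here are illustrative.

```python
from itertools import product

def conditional_minisum(num_issues, ballots):
    """Brute-force Conditional Minisum: try every outcome and return one
    minimizing total voter dissatisfaction. A ballot maps each issue to
    (dependency_issue_or_None, {dependency_value_or_None: approved_values})."""
    best_outcome, best_cost = None, float("inf")
    for outcome in product((0, 1), repeat=num_issues):
        cost = 0
        for ballot in ballots:
            for issue, (dep, approvals) in ballot.items():
                key = None if dep is None else outcome[dep]
                if outcome[issue] not in approvals.get(key, set()):
                    cost += 1  # voter disapproves this issue's outcome
        if cost < best_cost:
            best_outcome, best_cost = outcome, cost
    return best_outcome, best_cost

# Voter approves 1 on issue 0, and approves issue 1 only if it matches issue 0.
ballot = {0: (None, {None: {1}}), 1: (0, {0: {0}, 1: {1}})}
print(conditional_minisum(2, [ballot]))  # ((1, 1), 0)
```

The exponential enumeration over outcomes is exactly the brute force that, per the abstract, cannot be significantly outperformed in general.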
The approximate uniform sampling of graphs with a given degree sequence is a well-known, extensively studied problem in theoretical computer science and has significant applications, e.g., in the analysis of social networks. In this work we study an extension of the problem, where degree intervals are specified rather than a single degree sequence. We are interested in sampling and counting graphs whose degree sequences satisfy the degree interval constraints. A natural scenario where this problem arises is in hypothesis testing on social networks that are only partially observed. In this work, we provide the first fully polynomial almost uniform sampler (FPAUS) as well as the first fully polynomial randomized approximation scheme (FPRAS) for sampling and counting, respectively, graphs with near-regular degree intervals, in which every node $i$ has a degree from an interval not too far away from a given $d \in \mathbb{N}$. In order to design our FPAUS, we rely on various state-of-the-art tools from Markov chain theory and combinatorics. In particular, we provide the first non-trivial algorithmic application of a breakthrough result of Liebenau and Wormald (2017) regarding an asymptotic formula for the number of graphs with a given near-regular degree sequence. Furthermore, we also make use of the recent breakthrough of Anari et al. (2019) on sampling a base of a matroid under a strongly log-concave probability distribution. As a more direct approach, we also study a natural Markov chain recently introduced by Rechner, Strowick and Müller-Hannemann (2018), based on three simple local operations: switches, hinge flips, and additions/deletions of a single edge. We obtain the first theoretical results for this Markov chain by showing it is rapidly mixing for the case of near-regular degree intervals of size at most one.
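The three local operations driving this Markov chain are simple edge manipulations. A minimal sketch of one variant of each move, on a graph stored as a set of frozenset edges; the actual chain would additionally enforce the degree-interval constraints and the appropriate proposal and acceptance probabilities.

```python
def switch(edges, e1, e2):
    """Switch: replace edges {a,b},{c,d} with {a,c},{b,d} if the graph
    stays simple. Preserves every vertex degree."""
    (a, b), (c, d) = sorted(e1), sorted(e2)
    new1, new2 = frozenset({a, c}), frozenset({b, d})
    if len(new1) == 2 and len(new2) == 2 and new1 not in edges and new2 not in edges:
        return (edges - {e1, e2}) | {new1, new2}
    return edges  # move rejected; state unchanged

def hinge_flip(edges, e, old, new):
    """Hinge flip: move one endpoint of edge e from `old` to `new`,
    shifting one unit of degree from `old` to `new`."""
    other = next(iter(e - {old}))
    moved = frozenset({other, new})
    if new != other and moved not in edges:
        return (edges - {e}) | {moved}
    return edges

def toggle_edge(edges, u, v):
    """Addition/deletion of the single edge {u, v}."""
    e = frozenset({u, v})
    return edges - {e} if e in edges else edges | {e}
```

Switches preserve the degree sequence exactly, while hinge flips and single-edge toggles let the chain move between the different degree sequences allowed by the intervals.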
Fair allocation of indivisible goods has attracted extensive attention over the last two decades, yielding numerous elegant algorithmic results and producing challenging open questions. The problem becomes much harder in the presence of strategic agents. Ideally, one would want to design truthful mechanisms that produce allocations with fairness guarantees. However, in the standard setting without monetary transfers, it is generally impossible to have truthful mechanisms that provide nontrivial fairness guarantees. Recently, Amanatidis et al. [Amanatidis G, Birmpas G, Fusco F, Lazos P, Leonardi S, Reiffenhäuser R (2023) Allocating indivisible goods to strategic agents: Pure Nash equilibria and fairness. Math. Oper. Res., ePub ahead of print November 30, https://doi.org/10.1287/moor.2022.0058] suggested the study of mechanisms that produce fair allocations in their equilibria. Specifically, when the agents have additive valuation functions, the simple Round-Robin algorithm always has pure Nash equilibria, and the corresponding allocations are envy-free up to one good (EF1) with respect to the agents’ true valuation functions. Following this agenda, we show that this outstanding property of the Round-Robin mechanism extends much beyond the above default assumption of additivity. In particular, we prove that for agents with cancelable valuation functions (a natural class that contains, e.g., additive and budget-additive functions), this simple mechanism always has equilibria, and even its approximate equilibria correspond to approximately EF1 allocations with respect to the agents’ true valuation functions. Furthermore, we show that the approximate EF1 fairness of approximate equilibria surprisingly holds for the important class of submodular valuation functions as well, even though exact equilibria fail to exist. Funding: This work was supported by the Horizon 2020 European Research Council Advanced Grant “AMDROMA: Algorithmic and Mechanism Design Research in Online Markets” [Grant 788893], the Ministero dell’Università e della Ricerca Research project of national interest (PRIN) “ALGADIMAR: Algorithms, Games, and Digital Markets,” the Nederlandse Organisatie voor Wetenschappelijk Onderzoek Veni Project “Algorithmic Fair Division in Dynamic, Socially Constrained Environments” [Grant VI.Veni.192.153], and the National Recovery and Resilience Plan Greece 2.0 funded by the European Union under the NextGenerationEU Program [Grant MIS 5154714].
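The Round-Robin mechanism at the heart of this result is simple to state. A minimal sketch for reported additive values, where `values[i][g]` is agent i's reported value for good g; the paper's analysis concerns the Nash equilibria of the reporting game this induces.

```python
def round_robin(values, num_goods):
    """Agents take turns in a fixed order; on each turn the agent takes
    her highest-reported-value remaining good."""
    n = len(values)
    remaining = set(range(num_goods))
    bundles = [[] for _ in range(n)]
    turn = 0
    while remaining:
        agent = turn % n
        g = max(remaining, key=lambda good: values[agent][good])
        bundles[agent].append(g)
        remaining.remove(g)
        turn += 1
    return bundles

# Two agents, four goods; agent 0 picks first.
print(round_robin([[5, 1, 3, 2], [4, 4, 1, 1]], 4))  # [[0, 2], [1, 3]]
```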
The Pandora's Box problem models the search for the best alternative when evaluation is costly. In its simplest variant, a decision maker is presented with $n$ boxes, each associated with a cost of inspection and a distribution over the reward hidden within. The decision maker inspects a subset of these boxes one after the other, in a possibly adaptive ordering, and obtains as utility the difference between the largest reward uncovered and the sum of the inspection costs. While this classic version of the problem is well understood (Weitzman 1979), recent years have seen a flourishing of the literature on variants of the problem. In this paper, we introduce a general framework -- the Pandora's Box Over Time problem -- that captures a wide range of variants where time plays a role, e.g., as it might constrain the schedules of exploration and influence both costs and rewards. In the Pandora's Box Over Time problem, each box is characterized by time-dependent rewards and costs, and inspecting it might require a box-specific processing time. Moreover, once a box is inspected, its reward may deteriorate over time, possibly differently for each box. Our main result is an efficient $21.3$-approximation to the optimal strategy, which is NP-hard to compute in general. We further obtain improved results for the natural special cases where boxes have no processing time, or when costs and reward distributions do not depend on time (but rewards may deteriorate after inspecting).
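For the classic time-independent version this framework generalizes, Weitzman's optimal policy is an index rule: each box gets a reservation value σ solving E[(X − σ)+] = c, boxes are opened in decreasing σ, and search stops once the best observed reward beats every remaining index. A minimal sketch for discrete reward distributions, illustrating the baseline rule only, not the paper's 21.3-approximation.

```python
import random

def reservation_value(rewards, probs, cost, lo=0.0, hi=1e9):
    """Bisect for sigma with E[max(X - sigma, 0)] = cost (decreasing in sigma)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        surplus = sum(p * max(x - mid, 0.0) for x, p in zip(rewards, probs))
        lo, hi = (mid, hi) if surplus > cost else (lo, mid)
    return (lo + hi) / 2

def weitzman(boxes, draw):
    """boxes: list of (rewards, probs, cost); draw(i) samples box i's reward.
    Returns the realized utility of the index policy."""
    sigmas = [reservation_value(r, p, c) for r, p, c in boxes]
    order = sorted(range(len(boxes)), key=lambda i: -sigmas[i])
    best, paid = 0.0, 0.0
    for i in order:
        if best >= sigmas[i]:  # best found beats all remaining indices: stop
            break
        paid += boxes[i][2]
        best = max(best, draw(i))
    return best - paid

boxes = [([0, 10], [0.5, 0.5], 1.0), ([4, 6], [0.5, 0.5], 0.5)]
draw = lambda i: random.choices(*boxes[i][:2])[0]
print(weitzman(boxes, draw))
```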
We study the problem of allocating a set of indivisible goods to a set of agents with additive valuation functions, aiming to achieve approximate envy-freeness up to any good ($\alpha$-EFX). The state-of-the-art results on the problem include that (exact) EFX allocations exist when (a) there are at most three agents, or (b) the agents' valuation functions can take at most two values, or (c) the agents' valuation functions can be represented via a graph. For $\alpha$-EFX, it is known that a $0.618$-EFX allocation exists for any number of agents with additive valuation functions. In this paper, we show that $2/3$-EFX allocations exist when (a) there are at most \emph{seven agents}, (b) the agents' valuation functions can take at most \emph{three values}, or (c) the agents' valuation functions can be represented via a \emph{multigraph}. Our results can be interpreted in two ways. First, by relaxing the notion of EFX to $2/3$-EFX, we obtain existence results for strict generalizations of the settings for which exact EFX allocations are known to exist. Secondly, by imposing restrictions on the setting, we manage to beat the barrier of $0.618$ and achieve an approximation guarantee of $2/3$. Therefore, our results push the \emph{frontier} of existence and computation of approximate EFX allocations, and provide insights into the challenges of settling the existence of exact EFX allocations.
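As a reference point for the guarantee being approximated, here is a minimal checker for $\alpha$-EFX under additive valuations; names are illustrative, with `values[i][g]` agent i's value for good g.

```python
def is_alpha_efx(values, bundles, alpha):
    """True iff for all agents i, j and every good g in j's bundle,
    v_i(bundle_i) >= alpha * v_i(bundle_j minus {g})."""
    n = len(values)
    for i in range(n):
        own = sum(values[i][g] for g in bundles[i])
        for j in range(n):
            if i == j:
                continue
            other = sum(values[i][g] for g in bundles[j])
            for g in bundles[j]:
                if own < alpha * (other - values[i][g]) - 1e-12:
                    return False
    return True

# An allocation that is 2/3-EFX but not exact EFX for agent 0:
print(is_alpha_efx([[2, 3, 3], [1, 1, 1]], [[0], [1, 2]], 2 / 3))  # True
print(is_alpha_efx([[2, 3, 3], [1, 1, 1]], [[0], [1, 2]], 1.0))    # False
```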
We consider a voting problem in which a set of agents have metric preferences over a set of alternatives, and are also partitioned into disjoint groups. Given information about the preferences of the agents and their groups, our goal is to decide an alternative to approximately minimize an objective function that takes the groups of agents into account. We consider two natural group-fair objectives known as Max-of-Avg and Avg-of-Max which are different combinations of the max and the average cost in and out of the groups. We show tight bounds on the best possible distortion that can be achieved by various classes of mechanisms depending on the amount of information they have access to. In particular, we consider group-oblivious full-information mechanisms that do not know the groups but have access to the exact distances between agents and alternatives in the metric space, group-oblivious ordinal-information mechanisms that again do not know the groups but are given the ordinal preferences of the agents, and group-aware mechanisms that have full knowledge of the structure of the agent groups and also ordinal information about the metric space.
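The two objectives compose max and average in opposite orders. A minimal sketch under the natural reading of the names (Max-of-Avg: worst group by its average agent cost; Avg-of-Max: average over groups of the worst in-group cost), with hypothetical inputs `dist` (agent costs for one fixed alternative) and `groups` (a partition of the agents):

```python
def max_of_avg(dist, groups):
    """Worst group, measured by its average agent cost."""
    return max(sum(dist[i] for i in g) / len(g) for g in groups)

def avg_of_max(dist, groups):
    """Average over groups of the worst agent cost inside each group."""
    return sum(max(dist[i] for i in g) for g in groups) / len(groups)

dist = {0: 1.0, 1: 5.0, 2: 2.0, 3: 2.0}
groups = [[0, 1], [2, 3]]
print(max_of_avg(dist, groups))  # 3.0: group [0, 1] averages (1 + 5) / 2
print(avg_of_max(dist, groups))  # 3.5: (5 + 2) / 2
```

A mechanism would evaluate one of these for each alternative and pick a minimizer, subject to the information it has access to.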
In most social choice settings, the participating agents express their preferences over the different alternatives in the form of linear orderings. While this clearly simplifies preference elicitation, it inevitably leads to poor performance with respect to optimizing a cardinal objective, such as the social welfare, since the values of the agents remain virtually unknown. This loss in performance because of lack of information is measured by the notion of distortion. A recent array of works put forward the agenda of designing mechanisms that learn the values of the agents for a small number of alternatives via queries, and use this limited extra information to make better-informed decisions, thus improving distortion. Following this agenda, in this work we focus on a class of combinatorial problems that includes most well-known matching problems and several of their generalizations. For problems such as One-Sided Matching, Two-Sided Matching, General Graph Matching, and Short Cycle Packing, we design two-query mechanisms that achieve the best-possible worst-case distortion in terms of social welfare, and outperform the best-possible expected distortion achieved by randomized ordinal mechanisms. Our results extend to problems like \(k\)-Constrained Resource Allocation, General Graph \(k\)-Matching, and \(k\)-Clique Packing, when \(k\) is restricted to be any constant. Keywords: distortion, queries, matching, social choice, graph problems. MSC codes: 68Q25, 68W25, 68W40, 91B12, 91B14, 91B68.
Constrained maximization of submodular functions poses a central problem in combinatorial optimization. In many realistic scenarios, a number of agents need to maximize multiple submodular objectives over the same ground set. We study such a setting, where the different solutions must be disjoint, and thus, questions of fairness arise. Inspired by the fair division literature, we suggest a simple round-robin protocol, where agents are allowed to build their solutions one item at a time by taking turns. Unlike what is typical in fair division, however, the prime goal here is to provide a fair algorithmic environment; each agent is allowed to use any algorithm for constructing their respective solutions. We show that just by following simple greedy policies, agents have solid guarantees for both monotone and non-monotone objectives, and for combinatorial constraints as general as $p$-systems (which capture cardinality and matroid intersection constraints). In the monotone case, our results include approximate EF1-type guarantees and their implications in fair division may be of independent interest. Further, although following a greedy policy may not be optimal in general, we show that consistently performing better than that is computationally hard.
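A minimal sketch of the protocol under a cardinality constraint, with every agent following the simple greedy policy; the paper's guarantees extend to general $p$-systems, and the function names here are illustrative.

```python
def round_robin_submodular(objectives, ground_set, k):
    """Agents take turns; each greedily adds the item with the largest
    marginal gain to her own solution, until everyone holds k items."""
    n = len(objectives)
    remaining = set(ground_set)
    solutions = [set() for _ in range(n)]
    for _ in range(k):
        for agent, f in enumerate(objectives):
            if not remaining:
                break
            base = f(solutions[agent])
            item = max(remaining, key=lambda x: f(solutions[agent] | {x}) - base)
            solutions[agent].add(item)
            remaining.remove(item)
    return solutions

# Coverage objectives: each agent values the union of her items' covered sets.
cover = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1}}
f0 = lambda S: len(set().union(*(cover[x] for x in S)) & {1, 2, 3}) if S else 0
f1 = lambda S: len(set().union(*(cover[x] for x in S)) & {3, 4}) if S else 0
print(round_robin_submodular([f0, f1], cover.keys(), 2))
```

The protocol only fixes who moves when; the point of the paper is what each agent can guarantee herself by playing greedily within it.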
Fair allocation of indivisible goods has attracted extensive attention over the last two decades, yielding numerous elegant algorithmic results and producing challenging open questions. The problem becomes much harder in the presence of strategic agents. Ideally, one would want to design truthful mechanisms that produce allocations with fairness guarantees. However, in the standard setting without monetary transfers, it is generally impossible to have truthful mechanisms that provide non-trivial fairness guarantees. Recently, Amanatidis et al. [2021] suggested the study of mechanisms that produce fair allocations in their equilibria. Specifically, when the agents have additive valuation functions, the simple Round-Robin algorithm always has pure Nash equilibria and the corresponding allocations are envy-free up to one good (EF1) with respect to the agents' true valuation functions. Following this agenda, we show that this outstanding property of the Round-Robin mechanism extends much beyond the above default assumption of additivity. In particular, we prove that for agents with cancelable valuation functions (a natural class that contains, e.g., additive and budget-additive functions), this simple mechanism always has equilibria and even its approximate equilibria correspond to approximately EF1 allocations with respect to the agents' true valuation functions. Further, we show that the approximate EF1 fairness of approximate equilibria surprisingly holds for the important class of submodular valuation functions as well, even though exact equilibria fail to exist!
Allocating resources to individuals in a fair manner has been a topic of interest since ancient times, with most of the early mathematical work on the problem focusing on resources that are infinitely divisible. Over the last decade, there has been a surge of papers studying computational questions regarding the indivisible case, for which exact fairness notions such as envy-freeness and proportionality are hard to satisfy. One main theme in the recent research agenda is to investigate the extent to which their relaxations, like maximin share fairness (MMS) and envy-freeness up to any good (EFX), can be achieved. In this survey, we present a comprehensive review of the recent progress made in the related literature by highlighting different ways to relax fairness notions, common algorithm design techniques, and the most interesting questions for future research.
Budget-feasible procurement has been a major paradigm in mechanism design since its introduction by Singer (2010). An auctioneer (buyer) with a strict budget constraint is interested in buying goods or services from a group of strategic agents (sellers). In many scenarios it makes sense to allow the auctioneer to only partially buy what an agent offers, e.g., an agent might have multiple copies of an item to sell, they might offer multiple levels of a service, or they may be available to perform a task for any fraction of a specified time interval. Nevertheless, the focus of the related literature has been on settings where each agent's services are either fully acquired or not at all. The main reason for this is that in settings with partial allocations like the ones mentioned, there are strong inapproximability results. Under the mild assumption of being able to afford each agent entirely, we are able to circumvent such results in this work. We design a polynomial-time, deterministic, truthful, budget-feasible $(2+\sqrt{3})$-approximation mechanism for the setting where each agent offers multiple levels of service and the auctioneer has a discrete separable concave valuation function. We then use this result to design a deterministic, truthful and budget-feasible mechanism for the setting where any fraction of a service can be acquired and the auctioneer's valuation function is separable concave (i.e., the sum of concave functions). The approximation ratio of this mechanism depends on how `nice' the concave functions are, and is $O(1)$ for valuation functions that are sums of $O(1)$-regular functions (e.g., functions like $\log(1+x)$). For the special case of a linear valuation function, we improve the best known approximation ratio for the problem from $1+\phi$ (by Klumper & Schäfer (2022)) to $2$. This establishes a separation between this setting and its indivisible counterpart.
We study elections where voters are faced with the challenge of expressing preferences over an extreme number of issues under consideration. This is largely motivated by emerging blockchain governance systems, which include voters with different weights and a massive number of community-generated proposals. In such scenarios, it is natural to expect that voters will have incomplete preferences, as they may only be able to evaluate or be confident about a very small proportion of the alternatives. As a result, the election outcome may be significantly affected, leading to suboptimal decisions. Our central inquiry revolves around whether delegation of ballots to proxies possessing greater expertise or a more comprehensive understanding of the voters' preferences can lead to outcomes with higher legitimacy and enhanced voters' satisfaction in elections where voters submit incomplete preferences. To explore this question, we introduce the following model: potential proxies advertise their ballots over multiple issues, and each voter either delegates to a seemingly attractive proxy or casts a ballot directly. We identify necessary and sufficient conditions that could lead to a socially better outcome by leveraging the participation of proxies. We accompany our theoretical findings with experiments on instances derived from real datasets. Overall, our results enhance the understanding of the power of delegation towards improving election outcomes.
Allocating resources to individuals in a fair manner has been a topic of interest since ancient times, with most of the early rigorous mathematical work on the problem focusing on infinitely divisible resources. Recently, there has been a surge of papers studying computational questions regarding various notions of fairness for the indivisible case, like maximin share fairness (MMS) and envy-freeness up to any good (EFX). We survey the most important results in the discrete fair division literature, focusing on the case of additive valuation functions and paying particular attention to the progress made in the last 10 years.
Constrained submodular maximization problems encompass a wide variety of applications, including personalized recommendation, team formation, and revenue maximization via viral marketing. The massive instances occurring in modern-day applications can render existing algorithms prohibitively slow. Moreover, frequently those instances are also inherently stochastic. Focusing on these challenges, we revisit the classic problem of maximizing a (possibly non-monotone) submodular function subject to a knapsack constraint. We present a simple randomized greedy algorithm that achieves a 5.83-approximation and runs in O(n log n) time, i.e., at least a factor n faster than other state-of-the-art algorithms. The versatility of our approach allows us to further transfer it to a stochastic version of the problem. There, we obtain a (9 + ε)-approximation to the best adaptive policy, which is the first constant approximation for non-monotone objectives. Experimental evaluation of our algorithms showcases their improved performance on real and synthetic data.
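Not the paper's algorithm, but the skeleton such results refine: a greedy by marginal value per unit cost, returned against the best single affordable item, which is a standard device for knapsack-constrained submodular maximization. A minimal sketch:

```python
def greedy_plus_singleton(f, items, cost, budget):
    """Greedy by marginal value per cost, then compare against the best
    single affordable item; return the better of the two solutions."""
    solution, remaining = set(), set(items)
    while remaining:
        base = f(solution)
        spent = sum(cost[y] for y in solution)
        best = max(
            (x for x in remaining if cost[x] <= budget - spent),
            key=lambda x: (f(solution | {x}) - base) / cost[x],
            default=None,
        )
        if best is None or f(solution | {best}) <= base:
            break  # nothing affordable improves the solution
        solution.add(best)
        remaining.remove(best)
    singles = [{x} for x in items if cost[x] <= budget]
    return max(singles + [solution], key=f)
```

The singleton comparison guards against instances where one expensive, high-value item dominates everything the density-based greedy can afford.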
We consider the One-Sided Matching problem, where n agents have preferences over n items, and these preferences are induced by underlying cardinal valuation functions. The goal is to match every agent to a single item so as to maximize the social welfare. Most of the related literature, however, assumes that the values of the agents are not a priori known, and only access to the ordinal preferences of the agents over the items is provided. Consequently, this incomplete information leads to loss of efficiency, which is measured by the notion of distortion. In this paper, we further assume that the agents can answer a small number of queries, allowing us partial access to their values. We study the interplay between elicited cardinal information (measured by the number of queries per agent) and distortion for One-Sided Matching, as well as a wide range of well-studied related problems. Qualitatively, our results show that with a limited number of queries, it is possible to obtain significant improvements over the classic setting, where only access to ordinal information is given.
The framework of budget-feasible mechanism design studies procurement auctions where the auctioneer (buyer) aims to maximize his valuation function subject to a hard budget constraint. We study the problem of designing truthful mechanisms that have good approximation guarantees and never pay the participating agents (sellers) more than the budget. We focus on the case of general (non-monotone) submodular valuation functions and derive the first truthful, budget-feasible, and O(1)-approximation mechanisms that run in polynomial time in the value query model, for both offline and online auctions. Prior to our work, the only O(1)-approximation mechanism known for non-monotone submodular objectives required an exponential number of value queries. At the heart of our approach lies a novel greedy algorithm for non-monotone submodular maximization under a knapsack constraint. Our algorithm builds two candidate solutions simultaneously (to achieve a good approximation), yet ensures that agents cannot jump from one solution to the other (to implicitly enforce truthfulness). The fact that in our mechanism the agents are not ordered according to their marginal value per cost allows us to appropriately adapt these ideas to the online setting as well. To further illustrate the applicability of our approach, we also consider the case where additional feasibility constraints are present, for example, at most k agents can be selected. We obtain O(p)-approximation mechanisms for both monotone and non-monotone submodular objectives, when the feasible solutions are independent sets of a p-system. With the exception of additive valuation functions, no mechanisms were known for this setting prior to our work. Finally, we provide lower bounds suggesting that, when one cares about nontrivial approximation guarantees in polynomial time, our results are, asymptotically, the best possible.
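The two-candidate-solution idea described above can be sketched in a few lines. This is a simplified illustration of the structure only, not the paper's mechanism; in particular, the payment and truthfulness machinery is omitted.

```python
def two_candidate_greedy(f, items, cost, budget):
    """Maintain two disjoint candidate solutions; each item is considered
    once and committed to the solution where it fits and helps most."""
    sols, spent = [set(), set()], [0.0, 0.0]
    for x in items:
        gains = []
        for s in (0, 1):
            if spent[s] + cost[x] <= budget:
                gains.append((f(sols[s] | {x}) - f(sols[s]), s))
        if gains:
            gain, s = max(gains)
            if gain > 0:  # commit x to one solution; it never moves again
                sols[s].add(x)
                spent[s] += cost[x]
    return max(sols, key=f)
```

The design intuition the abstract points at: because an item is committed once and can never migrate between the two solutions, an agent cannot profitably manipulate which solution it lands in.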
Article data: submitted 14 July 2020; accepted 30 August 2021; published online 6 January 2022. Keywords: graph sampling, switch Markov chain, joint degree matrix. AMS subject headings: 68Q25, 05C80, 68R10. ISSN (print): 0895-4801; ISSN (online): 1095-7146. Publisher: Society for Industrial and Applied Mathematics.
In most social choice settings, the participating agents express their preferences over the different alternatives in the form of linear orderings. While this clearly simplifies preference elicitation, it inevitably leads to poor performance with respect to optimizing a cardinal objective, such as the social welfare, since the values of the agents remain virtually unknown. This loss in performance because of lack of information is measured by the notion of distortion. A recent array of works put forward the agenda of designing mechanisms that learn the values of the agents for a small number of alternatives via queries, and use this limited extra information to make better-informed decisions, thus improving distortion. Following this agenda, in this work we focus on a class of combinatorial problems that includes most well-known matching problems and several of their generalizations, such as One-Sided Matching, Two-Sided Matching, General Graph Matching, and $k$-Constrained Resource Allocation. We design two-query mechanisms that achieve the best-possible worst-case distortion in terms of social welfare, and outperform the best-possible expected distortion achieved by randomized ordinal mechanisms.
Motivated by governance models adopted in blockchain applications, we study the problem of selecting appropriate system updates in a decentralised way. Contrary to most existing voting approaches, we use the input of a set of motivated experts of varying levels of expertise. In particular, we develop an approval voting inspired selection mechanism through which the experts approve or disapprove the different updates according to their perception of the quality of each alternative. Given their opinions, and weighted by their expertise level, a single update is then implemented and evaluated, and the experts receive rewards based on their choices. We show that this mechanism always has approximate pure Nash equilibria and that these achieve a constant factor approximation with respect to the quality benchmark of the optimal alternative. Finally, we study the repeated version of the problem, where the weights of the experts are adjusted after each update, according to their performance. Under mild assumptions about the weights, the extension of our mechanism still has approximate pure Nash equilibria in this setting.
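A minimal sketch of the selection step, assuming the natural rule of implementing the update with the largest total expertise weight among its approvers; the reward and weight-update rules, where the mechanism's subtlety lies, are omitted.

```python
def select_update(approvals, weights):
    """approvals[e] is the set of updates expert e approves; pick the
    update with the largest total expertise weight behind it."""
    totals = {}
    for e, approved in approvals.items():
        for update in approved:
            totals[update] = totals.get(update, 0.0) + weights[e]
    return max(totals, key=totals.get)

approvals = {"alice": {"A", "B"}, "bob": {"B"}, "carol": {"C"}}
weights = {"alice": 1.0, "bob": 2.0, "carol": 2.5}
print(select_update(approvals, weights))  # 'B', with total weight 3.0
```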
The growing need to deal with massive instances motivates the design of algorithms balancing the quality of the solution with applicability. For the latter, an important measure is the \emph{adaptive complexity}, capturing the number of sequential rounds of parallel computation needed. In this work we obtain the first \emph{constant factor} approximation algorithm for non-monotone submodular maximization subject to a knapsack constraint with \emph{near-optimal} $O(\log n)$ adaptive complexity. Low adaptivity by itself, however, is not enough: one needs to account for the total number of function evaluations (or value queries) as well. Our algorithm asks $\tilde{O}(n^2)$ value queries, but can be modified to run with only $\tilde{O}(n)$ instead, while retaining a low adaptive complexity of $O(\log^2 n)$. Besides the above improvement in adaptivity, this is also the first \emph{combinatorial} approach with sublinear adaptive complexity for the problem and yields algorithms comparable to the state-of-the-art even for the special cases of cardinality constraints or monotone objectives. Finally, we showcase our algorithms' applicability on real-world datasets.
We consider the one-sided matching problem, where n agents have preferences over n items, and these preferences are induced by underlying cardinal valuation functions. The goal is to match every agent to a single item so as to maximize the social welfare. Most of the related literature, however, assumes that the values of the agents are not a priori known, and only access to the ordinal preferences of the agents over the items is provided. Consequently, this incomplete information leads to loss of efficiency, which is measured by the notion of distortion. In this paper, we further assume that the agents can answer a small number of queries, allowing us partial access to their values. We study the interplay between elicited cardinal information (measured by the number of queries per agent) and distortion for one-sided matching, as well as a wide range of well-studied related problems. Qualitatively, our results show that with a limited number of queries, it is possible to obtain significant improvements over the classic setting, where only access to ordinal information is given.
We consider the problem of fairly allocating a set of indivisible goods to a set of strategic agents with additive valuation functions. We assume no monetary transfers and, therefore, a mechanism in our setting is an algorithm that takes as input the reported -- rather than the true -- values of the agents. Our main goal is to explore whether there exist mechanisms that have pure Nash equilibria for every instance and, at the same time, provide fairness guarantees for the allocations that correspond to these equilibria. We focus on two relaxations of envy-freeness, namely envy-freeness up to one good (EF1), and envy-freeness up to any good (EFX), and we positively answer the above question. In particular, we study two algorithms that are known to produce such allocations in the non-strategic setting: Round-Robin (EF1 allocations for any number of agents) and a cut-and-choose algorithm of Plaut and Roughgarden [SIAM Journal on Discrete Mathematics, 2020] (EFX allocations for two agents). For Round-Robin we show that all of its pure Nash equilibria induce allocations that are EF1 with respect to the underlying true values, while for the algorithm of Plaut and Roughgarden we show that the corresponding allocations not only are EFX but also satisfy maximin share fairness, something that is not true for this algorithm in the non-strategic setting! Further, we show that a weaker version of the latter result holds for any mechanism for two agents that always has pure Nash equilibria which all induce EFX allocations.
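The two-agent cut-and-choose mechanism studied here is easy to sketch: the divider partitions the goods to maximize her minimum bundle per her reported values, and the chooser takes her preferred bundle. A minimal brute-force sketch for additive reported values (a close simplification; tie-breaking refinements are omitted):

```python
from itertools import combinations

def cut_and_choose(reported, goods):
    """reported[i][g]: agent i's reported value for good g (two agents)."""
    def val(i, bundle):
        return sum(reported[i][g] for g in bundle)

    # Divider (agent 0) maximizes her minimum bundle over all 2-partitions.
    best = max(
        ((set(c), set(goods) - set(c)) for r in range(len(goods) + 1)
         for c in combinations(goods, r)),
        key=lambda p: min(val(0, p[0]), val(0, p[1])),
    )
    # Chooser (agent 1) takes the bundle she reports as more valuable.
    chosen = max(best, key=lambda b: val(1, b))
    other = best[0] if chosen is best[1] else best[1]
    return chosen, other  # (agent 1's bundle, agent 0's bundle)

reported = [{"a": 4, "b": 3, "c": 3}, {"a": 1, "b": 5, "c": 1}]
print(cut_and_choose(reported, ["a", "b", "c"]))  # agent 1 takes {b, c}
```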
Submodular maximization is a classic algorithmic problem with multiple applications in data mining and machine learning; there, the growing need to deal with massive instances motivates the design of algorithms balancing the quality of the solution with applicability. For the latter, an important measure is the adaptive complexity, which captures the number of sequential rounds of parallel computation needed by an algorithm to terminate. In this work we obtain the first constant factor approximation algorithm for non-monotone submodular maximization subject to a knapsack constraint with near-optimal $O(\log n)$ adaptive complexity. Low adaptivity by itself, however, is not enough: a crucial feature to account for is represented by the total number of function evaluations (or value queries). Our algorithm asks $\tilde{O}(n^2)$ value queries, but can be modified to run with only $\tilde{O}(n)$ instead, while retaining a low adaptive complexity of $O(\log^2 n)$. Besides the above improvement in adaptivity, this is also the first combinatorial approach with sublinear adaptive complexity for the problem and yields algorithms comparable to the state-of-the-art even for the special cases of cardinality constraints or monotone objectives.
The switch Markov chain has been extensively studied as the most natural Markov chain Monte Carlo approach for sampling graphs with prescribed degree sequences. We show that the switch chain for sampling simple undirected graphs with a given degree sequence is rapidly mixing when the degree sequence is so-called strongly stable. Strong stability is satisfied by all degree sequences for which the switch chain was known to be rapidly mixing based on Sinclair's multicommodity flow method up until a recent manuscript of Erdős and coworkers in 2019. Our approach relies on an embedding argument, involving a Markov chain defined by Jerrum and Sinclair in 1990. This results in a much shorter proof that unifies (almost) all the rapid mixing results for the switch chain in the literature, and extends them up to sharp characterizations of P-stable degree sequences. In particular, our work resolves an open problem posed by Greenhill and Sfragara in 2017.
Constrained submodular maximization problems encompass a wide variety of applications, including personalized recommendation, team formation, and revenue maximization via viral marketing. The massive instances occurring in modern-day applications can render existing algorithms prohibitively slow, while frequently, those instances are also inherently stochastic. Focusing on these challenges, we revisit the classic problem of maximizing a (possibly non-monotone) submodular function subject to a knapsack constraint. We present a simple randomized greedy algorithm that achieves a $5.83$-approximation and runs in $O(n \log n)$ time, i.e., at least a factor $n$ faster than other state-of-the-art algorithms. The robustness of our approach allows us to further transfer it to a stochastic version of the problem. There, we obtain a $9$-approximation to the best adaptive policy, which is the first constant approximation for non-monotone objectives. Experimental evaluation of our algorithms showcases their improved performance on real and synthetic data.
We consider the classic problem of fairly allocating indivisible goods among agents with additive valuation functions and explore the connection between two prominent fairness notions: maximum Nash welfare (MNW) and envy-freeness up to any good (EFX). We establish that an MNW allocation is always EFX as long as there are at most two possible values for the goods, whereas this implication is no longer true for three or more distinct values. As a notable consequence, this proves the existence of EFX allocations for these restricted valuation functions. While the efficient computation of an MNW allocation for two possible values remains an open problem, we present a novel algorithm for directly constructing EFX allocations in this setting. Finally, we study the question of whether an MNW allocation implies any EFX guarantee for general additive valuation functions under a natural new interpretation of approximate EFX allocations.
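For small instances, the MNW-implies-EFX connection can be checked directly by brute force. A minimal sketch for additive values (exponential enumeration, illustration only; assumes some allocation gives every agent positive utility):

```python
from itertools import product
from math import prod

def mnw_allocation(values):
    """Brute-force max Nash welfare: assign each good to an agent so the
    product of utilities is maximized."""
    n, m = len(values), len(values[0])
    def utils(assign):
        return [sum(values[i][g] for g in range(m) if assign[g] == i)
                for i in range(n)]
    return max(product(range(n), repeat=m), key=lambda a: prod(utils(a)))

def is_efx(values, assign):
    n, m = len(values), len(values[0])
    bundles = [[g for g in range(m) if assign[g] == i] for i in range(n)]
    return all(
        sum(values[i][g] for g in bundles[i])
        >= sum(values[i][g] for g in bundles[j]) - values[i][h]
        for i in range(n) for j in range(n) if i != j for h in bundles[j]
    )

# Bivalued instance (values in {1, 2}): MNW is EFX, as the paper shows.
vals = [[2, 1, 1, 2], [1, 2, 2, 1]]
a = mnw_allocation(vals)
print(a, is_efx(vals, a))  # (0, 1, 1, 0) True
```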
Several relaxations of envy-freeness, tailored to fair division in settings with indivisible goods, have been introduced within the last decade. Due to the lack of general existence results for most of these concepts, great attention has been paid to establishing approximation guarantees. In this work, we propose a simple algorithm that is universally fair in the sense that it returns allocations that have good approximation guarantees with respect to four such fairness notions at once. In particular, this is the first algorithm achieving a (φ−1)-approximation of envy-freeness up to any good (EFX) and a 2/(φ+2)-approximation of groupwise maximin share fairness (GMMS), where φ is the golden ratio. The best known approximation factor, in polynomial time, for either one of these fairness notions prior to this work was 1/2. Moreover, the returned allocation achieves envy-freeness up to one good (EF1) and a 2/3-approximation of pairwise maximin share fairness (PMMS). While EFX is our primary focus, we also exhibit how to fine-tune our algorithm and improve further the guarantees for GMMS or PMMS. Finally, we show that GMMS—and thus PMMS and EFX—allocations always exist when the number of goods does not exceed the number of agents by more than two.
The switch Markov chain has been extensively studied as the most natural Markov Chain Monte Carlo approach for sampling graphs with prescribed degree sequences. We use comparison arguments with other---less natural but simpler to analyze---Markov chains, to show that the switch chain mixes rapidly in two different settings. We first study the classic problem of uniformly sampling simple undirected, as well as bipartite, graphs with a given degree sequence. We apply an embedding argument, involving a Markov chain defined by Jerrum and Sinclair (TCS, 1990) for sampling graphs that almost have a given degree sequence, to show rapid mixing for degree sequences satisfying strong stability, a notion closely related to P-stability. This results in a much shorter proof that unifies the currently known rapid mixing results of the switch chain and extends them up to sharp characterizations of P-stability. In particular, our work resolves an open problem posed by Greenhill (SODA, 2015). Secondly, in order to illustrate the power of our approach, we study the problem of uniformly sampling graphs for which, in addition to the degree sequence, a joint degree distribution is given. Although the problem was formalized over a decade ago, and despite its practical significance in generating synthetic network topologies, little progress has been made on the random sampling of such graphs. The case of a single degree class reduces to sampling of regular graphs, but beyond this almost nothing is known. We fully resolve the case of two degree classes, by showing that the switch Markov chain is always rapidly mixing. Again, we first analyze an auxiliary chain for strongly stable instances on an augmented state space and then use an embedding argument.
Aggregating the preferences of individuals into a collective decision is the core subject of study of social choice theory. In 2006, Procaccia and Rosenschein considered a utilitarian social choice setting, where the agents have explicit numerical values for the alternatives, yet they only report their linear orderings over them. To compare different aggregation mechanisms, Procaccia and Rosenschein introduced the notion of distortion, which quantifies the inefficiency of using only ordinal information when trying to maximize the social welfare, i.e., the sum of the underlying values of the agents for the chosen outcome. Since then, this research area has flourished and bounds on the distortion have been obtained for a wide variety of fundamental scenarios. However, the vast majority of the existing literature is focused on the case where nothing is known beyond the ordinal preferences of the agents over the alternatives. In this paper, we take a more expressive approach, and consider mechanisms that are allowed to further ask a few cardinal queries in order to gain partial access to the underlying values that the agents have for the alternatives. With this extra power, we design new deterministic mechanisms that achieve significantly improved distortion bounds and, in many cases, outperform the best-known randomized ordinal mechanisms. We paint an almost complete picture of the number of queries required by deterministic mechanisms to achieve specific distortion bounds.
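Distortion has a simple operational definition: on a fixed profile, it is the ratio between the optimal social welfare and the welfare of the mechanism's choice, and a mechanism's distortion is the worst case of this ratio over profiles consistent with its input. A minimal sketch of the per-profile computation, here for an ordinal rule such as plurality:

```python
def distortion_on_profile(values, choice):
    """values[i][a]: agent i's value for alternative a; `choice` is the
    alternative selected by a mechanism seeing only ordinal information."""
    welfare = lambda a: sum(v[a] for v in values)
    opt = max(range(len(values[0])), key=welfare)
    return welfare(opt) / welfare(choice)

# Two alternatives; plurality (most first-place votes) picks alternative 0,
# but the underlying cardinal values make alternative 1 the welfare optimum.
values = [[1.0, 0.9], [1.0, 0.9], [0.0, 1.0]]
plurality_choice = 0  # ranked first by two of the three agents
print(distortion_on_profile(values, plurality_choice))  # 2.8 / 2.0, i.e. 1.4
```

A query-augmented mechanism, as in the paper, gets to learn a few of the `values[i][a]` entries exactly before choosing, which is what drives the improved bounds.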
The framework of budget-feasible mechanism design studies procurement auctions where the auctioneer (buyer) aims to maximize his valuation function subject to a hard budget constraint. We study the problem of designing truthful mechanisms that have good approximation guarantees and never pay the participating agents (sellers) more than the budget. We focus on the case of general (non-monotone) submodular valuation functions and derive the first truthful, budget-feasible and $O(1)$-approximate mechanisms that run in polynomial time in the value query model, for both offline and online auctions. Prior to our work, the only $O(1)$-approximation mechanism known for non-monotone submodular objectives required an exponential number of value queries. At the heart of our approach lies a novel greedy algorithm for non-monotone submodular maximization under a knapsack constraint. Our algorithm builds two candidate solutions simultaneously (to achieve a good approximation), yet ensures that agents cannot jump from one solution to the other (to implicitly enforce truthfulness). Ours is the first mechanism for the problem where---crucially---the agents are not ordered with respect to their marginal value per cost. This allows us to appropriately adapt these ideas to the online setting as well. To further illustrate the applicability of our approach, we also consider the case where additional feasibility constraints are present. We obtain $O(p)$-approximation mechanisms for both monotone and non-monotone submodular objectives, when the feasible solutions are independent sets of a $p$-system. With the exception of additive valuation functions, no mechanisms were known for this setting prior to our work. Finally, we provide lower bounds suggesting that, when one cares about non-trivial approximation guarantees in polynomial time, our results are asymptotically best possible.
We consider the classic cake-cutting problem of producing envy-free allocations, restricted to the case of four agents. The problem asks for a partition of the cake to four agents, so that every agent finds her piece at least as valuable as every other agent's piece. The problem has had an interesting history so far. Although the case of three agents is solvable with fewer than 15 queries, for four agents no bounded procedure was known until the recent breakthroughs of Aziz and Mackenzie (STOC 2016, FOCS 2016). The main drawback of these new algorithms, however, is that they are quite complicated and have a very high query complexity. With four agents, the number of queries required is close to 600. In this work we provide an improved algorithm for four agents, which reduces the current complexity by a factor of 3.4. Our algorithm builds on the approach of Aziz and Mackenzie (STOC 2016) by incorporating new insights and simplifying several steps. Overall, this yields an easier to grasp procedure with lower complexity.
In fair division problems with indivisible goods it is well known that one cannot have any guarantees for the classic fairness notions of envy-freeness and proportionality. As a result, several relaxations have been introduced, most of which appeared in quite recent works. We focus on four such notions, namely envy-freeness up to one good (EF1), envy-freeness up to any good (EFX), maximin share fairness (MMS), and pairwise maximin share fairness (PMMS). Since obtaining these relaxations also turns out to be problematic in several scenarios, approximate versions of them have also been considered. In this work, we investigate further the connections between the four notions mentioned above and their approximate versions. We establish several tight or almost tight results concerning the approximation quality that any of these notions guarantees for the others, providing an almost complete picture of this landscape. Some of our findings reveal interesting and surprising consequences regarding the power of these notions, e.g., PMMS and EFX provide the same worst-case guarantee for MMS, despite PMMS being a strictly stronger notion than EFX. We believe such implications provide further insight on the quality of approximately fair solutions.
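The two envy-based relaxations are easy to check directly for additive valuations. A minimal sketch, with hypothetical numbers chosen so that the allocation is EF1 but not EFX:

```python
# A minimal sketch, assuming additive valuations: direct checks of EF1 and
# EFX for a concrete allocation (all numbers are hypothetical).

def value(v, bundle):
    return sum(v[g] for g in bundle)

def is_ef1(valuations, allocation):
    # Envy must be removable by dropping SOME good from the envied bundle.
    for i, vi in enumerate(valuations):
        for j, bundle_j in enumerate(allocation):
            if i == j or not bundle_j:
                continue
            mine = value(vi, allocation[i])
            if mine >= value(vi, bundle_j):
                continue
            if not any(mine >= value(vi, bundle_j) - vi[g] for g in bundle_j):
                return False
    return True

def is_efx(valuations, allocation):
    # Envy must vanish after dropping ANY good from the envied bundle.
    for i, vi in enumerate(valuations):
        for j, bundle_j in enumerate(allocation):
            if i == j or not bundle_j:
                continue
            mine = value(vi, allocation[i])
            if any(mine < value(vi, bundle_j) - vi[g] for g in bundle_j):
                return False
    return True

valuations = [{0: 1, 1: 5, 2: 1}, {0: 0, 1: 5, 2: 1}]
allocation = [[0], [1, 2]]  # agent 0 gets good 0; agent 1 gets goods 1 and 2
print(is_ef1(valuations, allocation), is_efx(valuations, allocation))
```

On this instance removing the large good 1 eliminates agent 0's envy (so EF1 holds), but removing the small good 2 does not (so EFX fails), which is exactly the gap between the two notions.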
We show that the switch Markov chain for sampling undirected and bipartite graphs with a given degree sequence is rapidly mixing when the degree sequence is strongly stable. Strong stability is more restricted than the notion of stability introduced by Jerrum and Sinclair (TCS, 1990). Interestingly enough, strong stability is satisfied by all degree sequences for which the switch chain is known to be rapidly mixing based on Sinclair's multicommodity flow method. Our approach relies on a comparison argument with a Markov chain defined by Jerrum and Sinclair for sampling graphs that almost have a given degree sequence. This comparison provides a much shorter proof for the known rapid mixing results of the switch chain and extends them up to sharp characterizations of stability in terms of certain graph parameters. In particular, our work resolves an open problem posed by Greenhill and Sfragara (TCS, 2017) regarding the switch chain for undirected graphs.
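For concreteness, one transition of the switch chain on a simple undirected graph looks as follows; every accepted move preserves the degree sequence, and infeasible proposals are rejected (a lazy step). A minimal sketch:

```python
# A minimal sketch of one step of the switch (swap) Markov chain on simple
# undirected graphs: pick two edges uniformly at random and try to rewire
# them; accepted moves preserve every vertex degree.
import random

def switch_step(edges):
    """One attempted switch. `edges` is a set of frozensets {u, v}."""
    e1, e2 = random.sample(list(edges), 2)
    u, v = tuple(e1)
    x, y = tuple(e2)
    # Two ways to rewire the pair; pick one uniformly.
    if random.random() < 0.5:
        f1, f2 = frozenset((u, x)), frozenset((v, y))
    else:
        f1, f2 = frozenset((u, y)), frozenset((v, x))
    # Reject if the proposal creates a loop or a parallel edge.
    if len(f1) < 2 or len(f2) < 2 or f1 in edges or f2 in edges:
        return edges  # lazy step: stay at the current graph
    edges -= {e1, e2}
    edges |= {f1, f2}
    return edges

# A 4-cycle; the degrees stay (2, 2, 2, 2) along the whole trajectory.
G = {frozenset(p) for p in [(0, 1), (1, 2), (2, 3), (3, 0)]}
for _ in range(10):
    G = switch_step(G)
print(sorted(tuple(sorted(e)) for e in G))
```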

Commonly Cited References

We consider the problem of allocating indivisible goods fairly among n agents who have additive and submodular valuations for the goods. Our fairness guarantees are in terms of the maximin share, which is defined to be the maximum value that an agent can ensure for herself, if she were to partition the goods into n bundles, and then receive a minimum valued bundle. Since maximin fair allocations (i.e., allocations in which each agent gets at least her maximin share) do not always exist, prior work has focused on approximation results that aim to find allocations in which the value of the bundle allocated to each agent is (multiplicatively) as close to her maximin share as possible. In particular, Procaccia and Wang (2014) along with Amanatidis et al. (2015) have shown that under additive valuations, a 2/3-approximate maximin fair allocation always exists and can be found in polynomial time. We complement these results by developing a simple and efficient algorithm that achieves the same approximation guarantee. Furthermore, we initiate the study of approximate maximin fair division under submodular valuations. Specifically, we show that when the valuations of the agents are nonnegative, monotone, and submodular, then a 0.21-approximate maximin fair allocation is guaranteed to exist. In fact, we show that such an allocation can be efficiently found by using a simple round-robin algorithm. A technical contribution of the article is to analyze the performance of this combinatorial algorithm by employing the concept of multilinear extensions.
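The round-robin procedure mentioned above is only a few lines: agents take turns picking their most valuable remaining good. A minimal sketch with additive valuations and hypothetical numbers:

```python
# A minimal sketch of the round-robin protocol (additive values, toy numbers).
def round_robin(valuations):
    n, m = len(valuations), len(valuations[0])
    remaining = set(range(m))
    bundles = [[] for _ in range(n)]
    turn = 0
    while remaining:
        agent = turn % n
        pick = max(remaining, key=lambda g: valuations[agent][g])
        bundles[agent].append(pick)
        remaining.remove(pick)
        turn += 1
    return bundles

valuations = [[8, 5, 3, 1], [6, 6, 4, 2], [7, 2, 2, 2]]
print(round_robin(valuations))
```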
The goal of fair division is to distribute resources among competing players in a "fair" way. Envy-freeness is the most extensively studied fairness notion in fair division. Envy-free allocations do not always exist with indivisible goods, motivating the study of relaxed versions of envy-freeness. We study the envy-freeness up to any good (EFX) property, which states that no player prefers the bundle of another player following the removal of any single good, and prove the first general results about this property. We use the leximin solution to show existence of EFX allocations in several contexts, sometimes in conjunction with Pareto optimality. For two players with valuations obeying a mild assumption, one of these results provides stronger guarantees than the currently deployed algorithm on Spliddit, a popular fair division website. Unfortunately, finding the leximin solution can require exponential time. We show that this is necessary by proving an exponential lower bound on the number of value queries needed to identify an EFX allocation, even for two players with identical valuations. We consider both additive and more general valuations, and our work suggests that there is a rich landscape of problems to explore in the fair division of indivisible goods with different classes of player valuations.
We study the problem of computing maximin share guarantees, a recently introduced fairness notion. Given a set of $n$ agents and a set of goods, the maximin share of a single agent is the best that she can guarantee to herself, if she would be allowed to partition the goods in any way she prefers, into $n$ bundles, and then receive her least desirable bundle. The objective then in our problem is to find a partition, so that each agent is guaranteed her maximin share. In settings with indivisible goods, such allocations are not guaranteed to exist, so we resort to approximation algorithms. Our main result is a $2/3$-approximation, that runs in polynomial time for any number of agents. This improves upon the algorithm of Procaccia and Wang, which also produces a $2/3$-approximation but runs in polynomial time only for a constant number of agents. To achieve this, we redesign certain parts of their algorithm. Furthermore, motivated by the apparent difficulty, both theoretically and experimentally, in finding lower bounds on the existence of approximate solutions, we undertake a probabilistic analysis. We prove that in randomly generated instances, with high probability there exists a maximin share allocation. This can be seen as a justification of the experimental evidence reported in relevant works. Finally, we provide further positive results for two special cases that arise from previous works. The first one is the intriguing case of $3$ agents, for which it is already known that exact maximin share allocations do not always exist (contrary to the case of $2$ agents). We provide a $7/8$-approximation algorithm, improving the previously known result of $3/4$. The second case is when all item values belong to $\{0, 1, 2\}$, extending the $\{0, 1\}$ setting studied in Bouveret and Lemaître. We obtain an exact algorithm for any number of agents in this case.
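For intuition, the maximin share of a single agent can be computed by brute force on tiny instances, enumerating all partitions into $n$ bundles. A minimal sketch (exponential in the number of items, for illustration only; the values are hypothetical):

```python
# A minimal brute-force sketch of ONE agent's maximin share: maximize, over
# all partitions into n bundles, the value of the worst bundle.
from itertools import product

def maximin_share(values, n):
    """values: the additive item values of a single agent."""
    m = len(values)
    best = 0
    # Assign each item a bundle label in {0, ..., n-1}.
    for labels in product(range(n), repeat=m):
        bundle_values = [0] * n
        for item, label in enumerate(labels):
            bundle_values[label] += values[item]
        best = max(best, min(bundle_values))
    return best

print(maximin_share([7, 5, 4, 3, 2], n=3))  # 7: e.g. {7}, {5,2}, {4,3}
```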
We study the problem of allocating indivisible goods among n agents in a fair manner. For this problem, maximin share (MMS) is a well-studied solution concept which provides a fairness threshold. Specifically, maximin share is defined as the minimum utility that an agent can guarantee for herself when asked to partition the set of goods into n bundles such that the remaining (n-1) agents pick their bundles adversarially. An allocation is deemed to be fair if every agent gets a bundle whose valuation is at least her maximin share. Even though maximin shares provide a natural benchmark for fairness, it has its own drawbacks and, in particular, it is not sufficient to rule out unsatisfactory allocations. Motivated by these considerations, in this work we define a stronger notion of fairness, called groupwise maximin share guarantee (GMMS). In GMMS, we require that the maximin share guarantee is achieved not just with respect to the grand bundle, but also among all the subgroups of agents. Hence, this solution concept strengthens MMS and provides an ex-post fairness guarantee. We show that in specific settings, GMMS allocations always exist. We also establish the existence of approximate GMMS allocations under additive valuations, and develop a polynomial-time algorithm to find such allocations. Moreover, we establish a scale of fairness wherein we show that GMMS implies approximate envy freeness. Finally, we empirically demonstrate the existence of GMMS allocations in a large set of randomly generated instances. For the same set of instances, we additionally show that our algorithm achieves an approximation factor better than the established, worst-case bound.
We consider the problem of maximizing a nonnegative submodular set function $f:2^N \rightarrow {\mathbb R}_+$ over a ground set $N$ subject to a variety of packing-type constraints including (multiple) matroid constraints, knapsack constraints, and their intersections. In this paper we develop a general framework that allows us to derive a number of new results, in particular, when $f$ may be a nonmonotone function. Our algorithms are based on (approximately) maximizing the multilinear extension $F$ of $f$ over a polytope $P$ that represents the constraints, and then effectively rounding the fractional solution. Although this approach has been used quite successfully, it has been limited in some important ways. We overcome these limitations as follows. First, we give constant factor approximation algorithms to maximize $F$ over a downward-closed polytope $P$ described by an efficient separation oracle. Previously this was known only for monotone functions. For nonmonotone functions, a constant factor was known only when the polytope was either the intersection of a fixed number of knapsack constraints or a matroid polytope. Second, we show that contention resolution schemes are an effective way to round a fractional solution, even when $f$ is nonmonotone. In particular, contention resolution schemes for different polytopes can be combined to handle the intersection of different constraints. Via linear programming duality we show that a contention resolution scheme for a constraint is related to the correlation gap of weighted rank functions of the constraint. This leads to an optimal contention resolution scheme for the matroid polytope. Our results provide a broadly applicable framework for maximizing linear and submodular functions subject to independence constraints. We give several illustrative examples. Contention resolution schemes may find other applications.
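The multilinear extension $F(x) = \mathbb{E}[f(R_x)]$, where $R_x$ contains each element $e$ independently with probability $x_e$, is typically evaluated by sampling. A minimal Monte Carlo sketch on a toy cut function (nonnegative and submodular, but nonmonotone); the graph and probabilities are hypothetical:

```python
# A minimal sketch: Monte Carlo estimation of the multilinear extension
# F(x) = E[f(R_x)], with R_x including element e independently w.p. x_e.
import random

def multilinear_estimate(f, x, samples=20000):
    total = 0.0
    for _ in range(samples):
        R = {e for e, p in x.items() if random.random() < p}
        total += f(R)
    return total / samples

# Cut function of a triangle graph.
edges = [(0, 1), (1, 2), (0, 2)]
cut = lambda S: sum(1 for u, v in edges if (u in S) != (v in S))

x = {0: 0.5, 1: 0.5, 2: 0.5}
print(round(multilinear_estimate(cut, x), 2))  # exact value is 1.5 here
```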
We study the problem of allocating a set of indivisible goods among a set of agents in a fair and efficient manner. An allocation is said to be fair if it is envy-free up to one good (EF1), which means that each agent prefers its own bundle over the bundle of any other agent up to the removal of one good. In addition, an allocation is deemed efficient if it satisfies Pareto efficiency. While each of these well-studied properties is easy to achieve separately, achieving them together is far from obvious. Recently, Caragiannis et al. (2016) established the surprising result that when agents have additive valuations for the goods, there always exists an allocation that simultaneously satisfies these two seemingly incompatible properties. Specifically, they showed that an allocation that maximizes the Nash social welfare objective is both EF1 and Pareto efficient. However, the problem of maximizing Nash social welfare is NP-hard. As a result, this approach does not provide an efficient algorithm for finding a fair and efficient allocation. In this paper, we bypass this barrier, and develop a pseudopolynomial time algorithm for finding allocations that are EF1 and Pareto efficient; in particular, when the valuations are bounded, our algorithm finds such an allocation in polynomial time. Furthermore, we establish a stronger existence result compared to Caragiannis et al. (2016): For additive valuations, there always exists an allocation that is EF1 and fractionally Pareto efficient. Another key contribution of our work is to show that our algorithm provides a polynomial-time 1.45-approximation to the Nash social welfare objective. This improves upon the best known approximation ratio for this problem (namely, the 2-approximation algorithm of Cole et al., 2017), and also matches the lower bound on the integrality gap of the convex program of Cole et al. (2017). Unlike many of the existing approaches, our algorithm is completely combinatorial, and relies on constructing integral Fisher markets wherein specific equilibria are not only efficient, but also fair.
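On tiny instances the existence result of Caragiannis et al. (2016) can be checked directly: enumerate all allocations and take one maximizing the Nash social welfare, i.e., the product of utilities. A minimal brute-force sketch with hypothetical values (exponential time, for illustration only):

```python
# A minimal sketch: brute-force Nash-social-welfare maximization over all
# allocations of 4 goods to 2 agents; values are hypothetical.
from itertools import product
from math import prod

valuations = [[4, 3, 2, 1], [1, 2, 3, 4]]
n, m = 2, 4

best_nsw, best_alloc = -1, None
for labels in product(range(n), repeat=m):  # labels[g] = owner of good g
    utils = [sum(valuations[i][g] for g in range(m) if labels[g] == i)
             for i in range(n)]
    nsw = prod(utils)
    if nsw > best_nsw:
        best_nsw, best_alloc = nsw, labels
print(best_nsw, best_alloc)  # goods 0,1 to agent 0 and 2,3 to agent 1
```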
We study a fair division problem with indivisible items, namely the computation of maximin share allocations. Given a set of $n$ players, the maximin share of a single player is the best she can guarantee to herself, if she would partition the items in any way she prefers, into $n$ bundles, and then receive her least desirable bundle. The objective then is to find an allocation, so that each player is guaranteed her maximin share. Previous works have studied this problem mostly algorithmically, providing constant factor approximation algorithms. In this work we embark on a mechanism design approach and investigate the existence of truthful mechanisms. We propose three models regarding the information that the mechanism attempts to elicit from the players, based on the cardinal and ordinal representation of preferences. We establish positive and negative (impossibility) results for each model and highlight the limitations imposed by truthfulness on the approximability of the problem. Finally, we pay particular attention to the case of two players, which already leads to challenging questions.
In fair division problems with indivisible goods it is well known that one cannot have any guarantees for the classic fairness notions of envy-freeness and proportionality. As a result, several relaxations have been introduced, most of which appeared in quite recent works. We focus on four such notions, namely envy-freeness up to one good (EF1), envy-freeness up to any good (EFX), maximin share fairness (MMS), and pairwise maximin share fairness (PMMS). Since obtaining these relaxations also turns out to be problematic in several scenarios, approximate versions of them have also been considered. In this work, we investigate further the connections between the four notions mentioned above and their approximate versions. We establish several tight or almost tight results concerning the approximation quality that any of these notions guarantees for the others, providing an almost complete picture of this landscape. Some of our findings reveal interesting and surprising consequences regarding the power of these notions, e.g., PMMS and EFX provide the same worst-case guarantee for MMS, despite PMMS being a strictly stronger notion than EFX. We believe such implications provide further insight on the quality of approximately fair solutions.
Submodular maximization generalizes many fundamental problems in discrete optimization, including Max-Cut in directed/undirected graphs, maximum coverage, maximum facility location, and marketing over social networks. In this paper we consider the problem of maximizing any submodular function subject to d knapsack constraints, where d is a fixed constant. We establish a strong relation between the discrete problem and its continuous relaxation, obtained through extension by expectation of the submodular function. Formally, we show that, for any nonnegative submodular function, an α-approximation algorithm for the continuous relaxation implies a randomized (α − ε)-approximation algorithm for the discrete problem. We use this relation to obtain an $(e^{-1} - \varepsilon)$-approximation for the problem, and a nearly optimal $(1 - e^{-1} - \varepsilon)$-approximation ratio for the monotone case, for any ε > 0. We further show that the probabilistic domain defined by a continuous solution can be reduced to yield a polynomial-size domain, given an oracle for the extension by expectation. This leads to a deterministic version of our technique.
Budget feasible mechanism design studies procurement combinatorial auctions in which the sellers have private costs to produce items, and the buyer (auctioneer) aims to maximize a social valuation function on subsets of items, under the budget constraint on the total payment. One of the most important questions in the field is which valuation domains admit truthful budget feasible mechanisms with 'small' approximations (compared to the social optimum)? Singer [35] showed that additive and submodular functions have a constant approximation mechanism. Recently, Dobzinski, Papadimitriou, and Singer [20] gave an $O(\log^2 n)$ approximation mechanism for subadditive functions; further, they remarked that a fundamental question is whether, regardless of computational constraints, a constant-factor budget feasible mechanism exists for subadditive functions. In this paper, we address this question from two viewpoints: prior-free worst case analysis and Bayesian analysis, which are two standard approaches from computer science and economics, respectively. For the prior-free framework, we use a linear program (LP) that describes the fractional cover of the valuation function; the LP is also connected to the concept of approximate core in cooperative game theory. We provide a mechanism for subadditive functions whose approximation is $O(I)$, via the worst case integrality gap $I$ of this LP. This implies an $O(\log n)$-approximation for subadditive valuations, $O(1)$-approximation for XOS valuations, as well as for valuations having a constant integrality gap. XOS valuations are an important class of functions and lie between the submodular and the subadditive classes of valuations. We further give another polynomial time $O(\log n/\log\log n)$ sub-logarithmic approximation mechanism for subadditive functions. Both of our mechanisms improve the best known approximation ratio $O(\log^2 n)$. For the Bayesian framework, we provide a constant approximation mechanism for all subadditive functions, using the above prior-free mechanism for XOS valuations as a subroutine. Our mechanism allows correlations in the distribution of private information and is universally truthful.
Budget feasible mechanisms, recently initiated by Singer (FOCS 2010), extend algorithmic mechanism design problems to a realistic setting with a budget constraint. We consider the problem of designing truthful budget feasible mechanisms for monotone submodular functions: We give a randomized mechanism with an approximation ratio of 7.91 (improving on the previous best-known result 233.83), and a deterministic mechanism with an approximation ratio of 8.34. We also study the knapsack problem, which is a special submodular function, and give a (2 + √2)-approximation deterministic mechanism (improving on the previous best-known result 5), and a 3-approximation randomized mechanism. We provide similar results for an extended knapsack problem with heterogeneous items, where items are divided into groups and one can pick at most one item from each group. Finally we show a lower bound of 1 + √2 for the approximation ratio of deterministic mechanisms and 2 for randomized mechanisms for knapsack, as well as for general monotone submodular functions. Our lower bounds are unconditional, and do not rely on any computational or complexity assumptions.
We study the mechanism design problem of allocating a set of indivisible items without monetary transfers. Despite the vast literature on this very standard model, it still remains unclear what truthful mechanisms look like. We focus on the case of two players with additive valuation functions and our purpose is twofold. First, our main result provides a complete characterization of truthful mechanisms that allocate all the items to the players. Our characterization reveals an interesting structure underlying all truthful mechanisms, showing that they can be decomposed into two components: a selection part where players pick their best subset among prespecified choices determined by the mechanism, and an exchange part where players are offered the chance to exchange certain subsets if it is favorable to do so. In the remainder of the paper, we apply our main result and derive several consequences on the design of mechanisms with fairness guarantees. We consider various notions of fairness (indicatively, maximin share guarantees and envy-freeness up to one item) and provide tight bounds for their approximability. Our work settles some of the open problems in this agenda, and we conclude by discussing possible extensions to more players.
We consider the problem of dividing indivisible goods fairly among n agents who have additive and submodular valuations for the goods. Our fairness guarantees are in terms of the maximin share, which is defined to be the maximum value that an agent can ensure for herself, if she were to partition the goods into n bundles, and then receive a minimum valued bundle. Since maximin fair allocations (i.e., allocations in which each agent gets at least her maximin share) do not always exist, prior work has focused on approximation results that aim to find allocations in which the value of the bundle allocated to each agent is (multiplicatively) as close to her maximin share as possible. In particular, Procaccia and Wang (2014) along with Amanatidis et al. (2015) have shown that under additive valuations a 2/3-approximate maximin fair allocation always exists and can be found in polynomial time. We complement these results by developing a simple and efficient algorithm that achieves the same approximation guarantee.
We study a resource allocation setting where $m$ discrete items are to be divided among $n$ agents with additive utilities, and the agents' utilities for individual items are drawn at random from a probability distribution. Since common fairness notions like envy-freeness and proportionality cannot always be satisfied in this setting, an important question is when allocations satisfying these notions exist. In this paper, we close several gaps in the line of work on asymptotic fair division. First, we prove that the classical round-robin algorithm is likely to produce an envy-free allocation provided that $m=\Omega(n\log n/\log\log n)$, matching the lower bound from prior work. We then show that a proportional allocation exists with high probability as long as $m\geq n$, while an allocation satisfying envy-freeness up to any item (EFX) is likely to be present for any relation between $m$ and $n$. Finally, we consider a related setting where each agent is assigned exactly one item and the remaining items are left unassigned, and show that the transition from non-existence to existence with respect to envy-free assignments occurs at $m=en$.
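The asymptotic claim about round-robin is easy to probe empirically. A minimal simulation sketch with i.i.d. uniform values (the parameter choices are arbitrary): for fixed $n$, the fraction of envy-free outcomes should climb toward one as $m$ grows.

```python
# A minimal simulation sketch: how often is round-robin's output fully
# envy-free under i.i.d. uniform values? All parameters are hypothetical.
import random

def run_round_robin(vals):
    n, m = len(vals), len(vals[0])
    remaining, bundles = set(range(m)), [[] for _ in range(n)]
    for turn in range(m):
        i = turn % n
        g = max(remaining, key=lambda g: vals[i][g])
        bundles[i].append(g)
        remaining.remove(g)
    return bundles

def envy_free(vals, bundles):
    totals = [[sum(v[g] for g in b) for b in bundles] for v in vals]
    return all(totals[i][i] >= totals[i][j]
               for i in range(len(vals)) for j in range(len(vals)))

n, trials = 4, 500
for m in (4, 8, 16, 32, 64):
    hits = 0
    for _ in range(trials):
        vals = [[random.random() for _ in range(m)] for _ in range(n)]
        hits += envy_free(vals, run_round_robin(vals))
    print(f"m={m:3d}: envy-free in {hits / trials:.0%} of trials")
```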
We study the problem of fair allocation of m indivisible items among n agents with additive valuations using the popular notion of maximin share (MMS) as our measure of fairness. An MMS allocation provides each agent a bundle worth at least her maximin share. While it is known that such an allocation need not exist [5, 7], a series of remarkable work [1-3, 6, 7] provided 2/3 approximation algorithms in which each agent receives a bundle worth at least 2/3 times her maximin share. More recently, [4] showed the existence of 3/4 MMS allocations and a PTAS to find a 3/4 - ε MMS allocation. Most of the previous works utilize intricate algorithms and require agents' approximate MMS values, which are computationally expensive to obtain.
We study a novel class of mechanism design problems in which the outcomes are constrained by the payments. This basic class of mechanism design problems captures many common economic situations, and yet it has not been studied, to our knowledge, in the past. We focus on the case of procurement auctions in which sellers have private costs, and the auctioneer aims to maximize a utility function on subsets of items, under the constraint that the sum of the payments provided by the mechanism does not exceed a given budget. Standard mechanism design ideas such as the VCG mechanism and its variants are not applicable here. We show that, for general functions, the budget constraint can render mechanisms arbitrarily bad in terms of the utility of the buyer. However, our main result shows that for the important class of submodular functions, a bounded approximation ratio is achievable. Better approximation results are obtained for subclasses of the submodular functions. We explore the space of budget feasible mechanisms in other domains and give a characterization under more restricted conditions.
We consider the problem of allocating a set of indivisible items to players with private preferences in an efficient and fair way. We focus on valuations that have dichotomous marginals, in which the added value of any item to a set is either 0 or 1, and aim to design truthful allocation mechanisms (without money) that maximize welfare and are fair. For the case that players have submodular valuations with dichotomous marginals, we design such a deterministic truthful allocation mechanism. The allocation output by our mechanism is Lorenz dominating, and consequently satisfies many desired fairness properties, such as being envy-free up to any item (EFX), and maximizing the Nash Social Welfare (NSW). We then show that our mechanism with random priorities is envy-free ex-ante, while having all the above properties ex-post. Furthermore, we present several impossibility results precluding similar results for the larger class of XOS valuations. To gauge the robustness of our positive results, we also study $\epsilon$-dichotomous valuations, in which the added value of any item to a set is either non-positive, or in the range $[1, 1 + \epsilon]$. We show several impossibility results in this setting, and also a positive result: for players that have additive $\epsilon$-dichotomous valuations with sufficiently small $\epsilon$, we design a randomized truthful mechanism with strong ex-post guarantees. For $\rho = \frac{1}{1 + \epsilon}$, the allocations that it produces generate at least a $\rho$-fraction of the maximum welfare, and enjoy $\rho$-approximations for various fairness properties, such as being envy-free up to one item (EF1), and giving each player at least her maximin share.
Fair division of indivisible goods is a very well-studied problem. The goal of this problem is to distribute m goods to n agents in a "fair" manner, where every agent has a valuation for each subset of goods. We assume general valuations. Envy-freeness is the most extensively studied notion of fairness. However, envy-free allocations do not always exist when goods are indivisible. The notion of fairness we consider here is "envy-freeness up to any good" (EFX) where no agent envies another agent after the removal of any single good from the other agent's bundle. It is not known if such an allocation always exists even when n = 3. We show there is always a partition of the set of goods into n + 1 subsets $(X_1, \ldots, X_n, P)$ where for $i \in [n]$, $X_i$ is the bundle allocated to agent i and the set P is unallocated (or donated to charity) such that we have: (1) envy-freeness up to any good, (2) no agent values P higher than her own bundle, and (3) fewer than n goods go to charity, i.e., $|P| < n$ (typically $m \gg n$). Our proof is constructive. When agents have additive valuations and |P| is large (i.e., when |P| is close to n), our allocation also has a good maximin share (MMS) guarantee. Moreover, a minor variant of our algorithm also shows the existence of an allocation which is 4/7 groupwise maximin share (GMMS): this is a notion of fairness stronger than MMS. This improves upon the current best bound of 1/2 known for an approximate GMMS allocation.
A randomised approximation scheme for the permanent of a 0–1 matrix is presented. The task of estimating a permanent is reduced to that of almost uniformly generating perfect matchings in a graph; the latter is accomplished by simulating a Markov chain whose states are the matchings in the graph. For a wide class of 0–1 matrices the approximation scheme is fully-polynomial, i.e., runs in time polynomial in the size of the matrix and a parameter that controls the accuracy of the output. This class includes all dense matrices (those that contain sufficiently many 1's) and almost all sparse matrices in some reasonable probabilistic model for 0–1 matrices of given density. For the approach sketched above to be computationally efficient, the Markov chain must be rapidly mixing: informally, it must converge in a short time to its stationary distribution. A major portion of the paper is devoted to demonstrating that the matchings chain is rapidly mixing, apparently the first such result for a Markov chain with genuinely complex structure. The techniques used seem to have general applicability, and are applied again in the paper to validate a fully-polynomial randomised approximation scheme for the partition function of an arbitrary monomer-dimer system.
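A simplified sketch of a chain of this flavor on matchings follows: each step picks a random edge and tries to add it, delete it, or slide it; the exact transition probabilities of the paper's chain differ, so this is only illustrative.

```python
# A minimal sketch of an add/delete/slide Markov chain whose states are the
# matchings of a graph (a simplified variant, not the paper's exact chain).
import random

def matching_step(matching, edges):
    """matching: set of edges (tuples); edges: list of all graph edges."""
    u, v = random.choice(edges)
    covered = {x for e in matching for x in e}
    if (u, v) in matching:
        matching = matching - {(u, v)}                 # delete the edge
    elif u not in covered and v not in covered:
        matching = matching | {(u, v)}                 # add the edge
    elif (u in covered) != (v in covered):
        # "Slide": replace the unique matching edge at the covered endpoint.
        w = u if u in covered else v
        (old,) = [e for e in matching if w in e]
        matching = (matching - {old}) | {(u, v)}
    # Otherwise both endpoints are matched elsewhere: stay put (lazy step).
    return matching

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-cycle
M = set()
for _ in range(1000):
    M = matching_step(M, edges)
print(M)
```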
In this paper we consider a simple Markov chain for bipartite graphs with given degree sequence on $n$ vertices. We show that the mixing time of this Markov chain is bounded above by a polynomial in $n$ in case of half-regular degree sequence. The novelty of our approach lies in the construction of the multicommodity flow in Sinclair's method.
It is known that greedy methods perform well for maximizing monotone submodular functions. At the same time, such methods perform poorly in the face of non-monotonicity. In this paper, we show, arguably surprisingly, that invoking the classical greedy algorithm $O(\sqrt{k})$-times leads to the (currently) fastest deterministic algorithm, called Repeated Greedy, for maximizing a general submodular function subject to $k$-independent system constraints. Repeated Greedy achieves $(1 + O(1/\sqrt{k}))k$ approximation using $O(nr\sqrt{k})$ function evaluations (here, $n$ and $r$ denote the size of the ground set and the maximum size of a feasible solution, respectively). We then show that by a careful sampling procedure, we can run the greedy algorithm only once and obtain the (currently) fastest randomized algorithm, called Sample Greedy, for maximizing a submodular function subject to $k$-extendible system constraints (a subclass of $k$-independent system constraints). Sample Greedy achieves $(k + 3)$-approximation with only $O(nr/k)$ function evaluations. Finally, we derive an almost matching lower bound, and show that no polynomial time algorithm can have an approximation ratio smaller than $k + 1/2 - \varepsilon$. To further support our theoretical results, we compare the performance of Repeated Greedy and Sample Greedy with prior art in a concrete application (movie recommendation). We consistently observe that while Sample Greedy achieves practically the same utility as the best baseline, it performs at least two orders of magnitude faster.
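The Repeated Greedy idea is short enough to sketch: run plain greedy, delete its output from the ground set, repeat, and keep the best run. A minimal version with a toy coverage function and a cardinality constraint standing in for the independence-system oracle (everything here is a hypothetical example, not the paper's tuned implementation):

```python
# A minimal sketch of the Repeated Greedy idea: repeat greedy on a shrinking
# ground set and keep the best of the runs. `independent` is an oracle.
def greedy(ground, f, independent):
    S = []
    candidates = set(ground)
    while True:
        gains = {e: f(S + [e]) - f(S)
                 for e in candidates if independent(S + [e])}
        if not gains or max(gains.values()) <= 0:
            return S
        e = max(gains, key=gains.get)
        S.append(e)
        candidates.remove(e)

def repeated_greedy(ground, f, independent, rounds=3):
    ground = set(ground)
    best = []
    for _ in range(rounds):
        S = greedy(ground, f, independent)
        if f(S) > f(best):
            best = S
        ground -= set(S)  # force the next run to find a disjoint solution
        if not ground:
            break
    return best

cover = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}, "d": {1, 4, 5}}
f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0
indep = lambda S: len(S) <= 2  # a toy independence system (cardinality 2)
print(repeated_greedy(list(cover), f, indep))
```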
During the last decade, the matroid secretary problem (MSP) became one of the most prominent classes of online selection problems. The interest in MSP is twofold: on the one hand, there are many interesting applications of MSP, and on the other hand, there is strong hope that MSP admits $O(1)$-competitive algorithms, which is the claim of the well-known matroid secretary conjecture. Partially linked to its numerous applications in online auctions, substantial interest arose also in the study of nonlinear versions of MSP, with a focus on the submodular MSP (SMSP). The fact that submodularity captures the property of diminishing returns, a very natural property for valuation functions, is a key reason for the interest in SMSP. So far, $O(1)$-competitive algorithms have been obtained for SMSP over some basic matroid classes. This created some hope that, analogously to the matroid secretary conjecture, one may even obtain $O(1)$-competitive algorithms for SMSP over any matroid. However, up to now, most questions related to SMSP remained open, including whether SMSP may be substantially more difficult than MSP and, more generally, to what extent MSP and SMSP are related. Our goal is to address these points by presenting general black-box reductions from SMSP to MSP. In particular, we show that any $O(1)$-competitive algorithm for MSP, even restricted to a particular matroid class, can be transformed in a black-box way to an $O(1)$-competitive algorithm for SMSP over the same matroid class. This implies that the matroid secretary conjecture is equivalent to the same conjecture for SMSP. Hence, in this sense SMSP is not harder than MSP. Also, to find $O(1)$-competitive algorithms for SMSP over a particular matroid class, it suffices to consider MSP over the same matroid class. Using our reductions we obtain many first and improved $O(1)$-competitive algorithms for SMSP over various matroid classes by leveraging known algorithms for MSP. Moreover, our reductions imply an $O(\log\log(\mathrm{rank}))$-competitive algorithm for SMSP, thus matching the currently best asymptotic algorithm for MSP, and substantially improving on the previously best $O(\log(\mathrm{rank}))$-competitive algorithm for SMSP.
A joint degree matrix (JDM) specifies the number of connections between nodes of given degrees in a graph, for all degree pairs and uniquely determines the degree sequence of the graph. We consider the space of all balanced realizations of an arbitrary JDM, realizations in which the links between any two degree groups are placed as uniformly as possible. We prove that a swap Markov Chain Monte Carlo (MCMC) algorithm in the space of all balanced realizations of an {\em arbitrary} graphical JDM mixes rapidly, i.e., the relaxation time of the chain is bounded from above by a polynomial in the number of nodes $n$. To prove fast mixing, we first prove a general factorization theorem similar to the Martin-Randall method for disjoint decompositions (partitions). This theorem can be used to bound from below the spectral gap with the help of fast mixing subchains within every partition and a bound on an auxiliary Markov chain between the partitions. Our proof of the general factorization theorem is direct and uses conductance based methods (Cheeger inequality).
Since 1997 a considerable effort has been spent on the study of the swap (switch) Markov chains on graphic degree sequences. Several results were proved on rapidly mixing Markov chains on regular simple, on regular directed, on half-regular directed and on half-regular bipartite degree sequences. In this paper, the main result is the following: Let $U$ and $V$ be disjoint finite sets, and let $0 < c_1 \le c_2 < |U|$ and $0 < d_1 \le d_2 < |V|$ be integers. Furthermore, assume that the bipartite degree sequence on $U \cup V$ satisfies $c_1 \le d(v) \le c_2\ : \ \forall v\in V$ and $d_1 \le d(u) \le d_2 \ : \ \forall u\in U$. Finally assume that $(c_2-c_1 -1)(d_2 -d_1 -1) < 1 + \max \{c_1(|V| -d_2), d_1(|V|- c_2) \}$. Then the swap Markov chain on this bipartite degree sequence is rapidly mixing. The technique applies to directed degree sequences as well, with very similar parameter values. These results are germane to the recent results of Greenhill and Sfragara about fast mixing MCMC processes on simple and directed degree sequences, where the maximum degrees are $O(\sqrt{\# \mbox{ of edges}})$. The results are somewhat comparable on directed degree sequences: while the GS results are better applicable for degree sequences developed under some scale-free random process, our new results are better fitted to degree sequences developed under the Erdős–Rényi model. For example, our results cover all regular degree sequences, while the GS results are not applicable when the average degree is greater than $n/16$.
Suppose we are given a submodular function f over a set of elements, and we want to maximize its value subject to certain constraints. Good approximation algorithms are known for such problems under both monotone and non-monotone submodular functions. We consider these problems in a stochastic setting, where elements are not all active and we only get value from active elements. Each element e is active independently with some known probability $p_e$, but we don't know the element's status a priori: we find it out only when we probe the element e. Moreover, the sequence of elements we probe must satisfy a given prefix-closed constraint, e.g., matroid, orienteering, deadline, precedence, or any downward-closed constraint. In this paper we study the gap between adaptive and non-adaptive strategies for f being a submodular or a fractionally subadditive (XOS) function. If this gap is small, we can focus on finding good non-adaptive strategies instead, which are easier to find as well as to represent. We show that the adaptivity gap is a constant for monotone and non-monotone submodular functions, and logarithmic for XOS functions of small width. These bounds are nearly tight. Our techniques show new ways of arguing about the optimal adaptive decision tree for stochastic optimization problems.
We study envy-free allocations of indivisible goods to agents in settings where each agent is unaware of the goods allocated to other agents. In particular, we propose the maximin aware (MMA) fairness measure, which guarantees that every agent, given the bundle allocated to her, is aware that she does not envy at least one other agent, even if she does not know how the other goods are distributed among other agents. We also introduce two of its relaxations, and discuss their egalitarian guarantee and existence. Finally, we present a polynomial-time algorithm, which computes an allocation that approximately satisfies MMA or its relaxations. Interestingly, the returned allocation is also 1/2-approximate EFX when all agents have subadditive valuations, which improves the algorithm in [Plaut and Roughgarden, SODA 2018].
The need for real-time analysis of rapidly produced data streams (e.g., video and image streams) motivated the design of streaming algorithms that can efficiently extract and summarize useful information from massive data "on the fly." Such problems can often be reduced to maximizing a submodular set function subject to various constraints. While efficient streaming methods have been recently developed for monotone submodular maximization, in a wide range of applications, such as video summarization, the underlying utility function is non-monotone, and there are often various constraints imposed on the optimization problem to consider privacy or personalization. We develop the first efficient single pass streaming algorithm, Streaming Local Search, that for any streaming monotone submodular maximization algorithm with approximation guarantee α under a collection of independence systems I, provides a constant 1/(1+2/√α+1/α+2d(1+√α)) approximation guarantee for maximizing a non-monotone submodular function under the intersection of I and d knapsack constraints. Our experiments show that for video summarization, our method runs more than 1700 times faster than previous work, while maintaining practically the same performance.
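For contrast with the non-monotone, multi-constraint setting above, here is a minimal single-pass sketch of the threshold idea underlying much of streaming submodular maximization (in the spirit of sieve/threshold algorithms for monotone functions under a cardinality constraint). The threshold and data are toy choices, and this is not the paper's Streaming Local Search algorithm.

```python
# A minimal single-pass threshold sketch for MONOTONE submodular maximization
# under a cardinality constraint k; not the paper's algorithm.
def threshold_stream(stream, f, k, threshold):
    S = []
    for e in stream:
        if len(S) < k and f(S + [e]) - f(S) >= threshold:
            S.append(e)  # keep e only if its marginal gain clears the bar
    return S

cover = {"a": {1, 2}, "b": {2, 3}, "c": {3}, "d": {4, 5, 6}}
f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0
print(threshold_stream("abcd", f, k=2, threshold=2))  # keeps 'a' and 'd'
```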
We consider two problems: randomly generating labeled bipartite graphs with a given degree sequence and randomly generating labeled tournaments with a given score sequence. We analyze simple Markov chains for both problems. For the first problem, we cannot prove that our chain is rapidly mixing in general, but in the near-regular case, i.e., when all the degrees are almost equal, we give a proof of rapid mixing. Our methods also apply to the corresponding problem for general (nonbipartite) regular graphs, which was studied earlier by several researchers. One significant difference in our approach is that our chain has one state for every graph (or bipartite graph) with the given degree sequence; in particular, there are no auxiliary states as in the chain used by Jerrum and Sinclair. For the problem of generating tournaments, we are able to prove that our Markov chain on tournaments is rapidly mixing, if the score sequence is near-regular. The proof techniques we use for the two problems are similar.
We revisit the classic problem of fair division from a mechanism design perspective and provide an elegant truthful mechanism that yields surprisingly good approximation guarantees for the widely used solution of Proportional Fairness. This solution, which is closely related to Nash bargaining and the competitive equilibrium, is known to be not implementable in a truthful fashion, which has been its main drawback. To alleviate this issue, we propose a new mechanism, which we call the Partial Allocation mechanism, that discards a carefully chosen fraction of the allocated resources in order to incentivize the agents to be truthful in reporting their valuations. This mechanism introduces a way to implement interesting truthful outcomes in settings where monetary payments are not an option.
In this paper we consider a mechanism design problem in the context of large-scale crowdsourcing markets such as Amazon's Mechanical Turk, ClickWorker, and CrowdFlower. In these markets, there is a requester who wants to hire workers to accomplish some tasks. Each worker is assumed to give some utility to the requester on getting hired. Moreover each worker has a minimum cost that he wants to get paid for getting hired. This minimum cost is assumed to be private information of the workers. The question then is: if the requester has a limited budget, how to design a direct revelation mechanism that picks the right set of workers to hire in order to maximize the requester's utility? We note that although previous work (Singer (2010), Chen et al. (2011)) has studied this problem, a crucial difference in which we deviate from earlier work is the notion of large-scale markets that we introduce in our model. Without the large market assumption, it is known that no mechanism can achieve a competitive ratio better than 0.414 and 0.5 for deterministic and randomized mechanisms respectively (while the best known deterministic and randomized mechanisms achieve an approximation ratio of 0.292 and 0.33 respectively). In this paper, we design a budget-feasible mechanism for large markets that achieves a competitive ratio of 1 − 1/e ≃ 0.63. Our mechanism can be seen as a generalization of an alternate way to look at the proportional share mechanism, which is used in all the previous works so far on this problem. Interestingly, we can also show that our mechanism is optimal by showing that no truthful mechanism can achieve a factor better than 1 − 1/e, thus fully resolving this setting. Finally we consider the more general case of submodular utility functions and give new and improved mechanisms for the case when the market is large.
Nodes in real-world networks, such as social, information, or technological networks, organize into communities in which edges appear with high concentration among the members of the community. Identifying communities in networks has proven to be a challenging task, mainly due to a plethora of definitions of a community, the intractability of algorithms, issues with evaluation, and the lack of a reliable gold-standard ground truth.
This paper has two parts. In the first part we consider a simple Markov chain for d-regular graphs on n vertices, where d = d(n) may grow with n. We show that the mixing time of this Markov chain is bounded above by a polynomial in n and d. In the second part of the paper, a related Markov chain for d-regular graphs on a varying number of vertices is introduced, for even constant d. This is a model for a certain peer-to-peer network. We prove that this related chain has mixing time bounded above by a polynomial in N, the expected number of vertices, provided certain assumptions are met about the rates of arrival and departure of vertices.
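The standard "switch" move is the usual local step for Markov chains on regular graphs; here is a minimal sketch of one such step under an adjacency-set representation. This is an assumption about the flavor of the chain, not a transcription of the chain analyzed in the paper.

```python
import random

def switch_step(adj):
    """One step of a standard 'switch' chain on simple d-regular graphs:
    pick two distinct edges {a, b} and {c, d} uniformly at random and
    replace them with {a, c} and {b, d} if the result is still a simple
    graph; otherwise stay at the current graph.

    adj: dict mapping each vertex to the set of its neighbours
         (modified in place). Degrees are preserved by construction.
    """
    edges = [(u, w) for u in adj for w in adj[u] if u < w]
    (a, b), (c, d) = random.sample(edges, 2)
    # Reject switches that would create a loop or a multi-edge.
    if len({a, b, c, d}) < 4 or c in adj[a] or d in adj[b]:
        return
    adj[a].remove(b); adj[b].remove(a)
    adj[c].remove(d); adj[d].remove(c)
    adj[a].add(c); adj[c].add(a)
    adj[b].add(d); adj[d].add(b)
```

Because every switch removes and adds exactly one edge at each of the four endpoints, the degree sequence is invariant, so the chain walks within the set of d-regular graphs; the rejection step is what keeps the stationary distribution uniform.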
On the Approximability of Budget Feasible Mechanisms. Ning Chen, Nick Gravin, and Pinyan Lu. Proceedings of the 2011 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 685-699. https://doi.org/10.1137/1.9781611973082.54 Budget feasible mechanisms, recently initiated by Singer (FOCS 2010), extend algorithmic mechanism design problems to a realistic setting with a budget constraint. We consider the problem of designing truthful budget feasible mechanisms for monotone submodular functions: we give a randomized mechanism with an approximation ratio of 7.91 (improving on the previous best-known result of 233.83) and a deterministic mechanism with an approximation ratio of 8.34. We also study the knapsack problem, which is a special submodular function, and give a deterministic (2 + √2)-approximation mechanism (improving on the previous best-known result of 5) and a randomized 3-approximation mechanism. We provide similar results for an extended knapsack problem with heterogeneous items, where items are divided into groups and one can pick at most one item from each group. Finally, we show lower bounds of 1 + √2 on the approximation ratio of deterministic mechanisms and 2 for randomized mechanisms, both for knapsack and for general monotone submodular functions. Our lower bounds are unconditional and do not rely on any computational or complexity assumptions.
We study envy-free allocations of indivisible goods to agents in settings where each agent is unaware of the goods allocated to the other agents. In particular, we propose the maximin aware (MMA) fairness measure, which guarantees that every agent, given the bundle allocated to her, is aware that she does not envy at least one other agent, even if she does not know how the remaining goods are distributed among the other agents. We also introduce two relaxations of MMA and discuss their egalitarian guarantees and existence. Finally, we present a polynomial-time algorithm which computes an allocation that approximately satisfies MMA or its relaxations. Interestingly, the returned allocation is also 1/2-approximate EFX when all agents have subadditive valuations, which improves upon the algorithm of [Plaut and Roughgarden, SODA 2018].
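One natural way to operationalize this definition for additive valuations: an agent is maximin aware exactly when her own bundle is worth at least the maximin share of the goods she does not hold, split among the other n - 1 agents, since the adversarial distribution of the unseen goods is the one maximizing the worst bundle. The sketch below assumes this reading (the paper's formal definition may differ in details) and uses an exponential brute force purely for illustration.

```python
from itertools import product

def mms(value, goods, k):
    """Maximin share: the best, over all partitions of `goods` into k
    bundles, worst bundle value (brute force, exponential time)."""
    goods = list(goods)
    best = 0
    for assignment in product(range(k), repeat=len(goods)):
        parts = [0] * k
        for g, part in zip(goods, assignment):
            parts[part] += value[g]
        best = max(best, min(parts))
    return best

def is_mma(allocation, valuations, all_goods):
    """Brute-force MMA check under additive valuations: agent i passes iff
    her bundle is worth at least the (n-1)-maximin share of the goods she
    does not hold -- then, in any adversarial split of the remaining goods,
    some other agent's bundle cannot beat hers."""
    n = len(allocation)
    for i, v in enumerate(valuations):
        own = sum(v[g] for g in allocation[i])
        remaining = [g for g in all_goods if g not in allocation[i]]
        if own < mms(v, remaining, n - 1):
            return False
    return True
```

The appeal of the notion is visible in the code: the check for agent i touches only her own bundle and the pool of remaining goods, never the actual bundles of the other agents.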