The rapid development of advanced AI agents and the imminent deployment of many instances of these agents will give rise to multi-agent systems of unprecedented complexity. These systems pose novel and under-explored risks. In this report, we provide a structured taxonomy of these risks by identifying three key failure modes (miscoordination, conflict, and collusion) based on agents' incentives, as well as seven key risk factors (information asymmetries, network effects, selection pressures, destabilising dynamics, commitment problems, emergent agency, and multi-agent security) that can underpin them. We highlight several important instances of each risk, as well as promising directions to help mitigate them. By anchoring our analysis in a range of real-world examples and experimental evidence, we illustrate the distinct challenges posed by multi-agent systems and their implications for the safety, governance, and ethics of advanced AI.
Fairness and efficiency have become the pillars of modern fair division research, but prior work on achieving both simultaneously is largely limited to the unconstrained setting. We study fair and efficient allocations of indivisible goods under additive valuations and various types of allocation feasibility constraints, and demonstrate the unreasonable effectiveness of the maximum Nash welfare (MNW) solution in this previously uncharted territory. Our main result is that MNW allocations are 1/2-envy-free up to one good (EF1) and Pareto optimal under the broad family of (arbitrary) matroid constraints. We extend these guarantees to complete MNW allocations for base-orderable matroid constraints, and to a family of non-matroidal constraints (which includes balancedness) using a novel "alternate worlds" technique. We establish tightness of our results by providing counterexamples for the satisfiability of certain stronger desiderata, but show an improved result for the special case of goods with copies (Gafni et al. 2023). Finally, we also establish novel best-of-both-worlds guarantees for goods with copies and balancedness.
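The paper's guarantees rest on matroid structure, but the objective itself is easy to state. Below is a minimal brute-force sketch (illustrative only, not the paper's algorithm): enumerate all complete allocations, keep the feasible ones, and return one maximizing the Nash welfare, i.e., the product of the agents' values. The instance and the cardinality (partition-matroid-style) constraint are made up.

```python
from itertools import product

def mnw_allocation(valuations, feasible):
    """Brute-force max Nash welfare over feasible allocations (tiny instances).

    valuations[i][g]: agent i's additive value for good g.
    feasible(bundles): True if the allocation satisfies the constraint.
    """
    n, m = len(valuations), len(valuations[0])
    best, best_nw = None, -1.0
    for assignment in product(range(n), repeat=m):  # good g -> agent
        bundles = [[g for g in range(m) if assignment[g] == i] for i in range(n)]
        if not feasible(bundles):
            continue
        nw = 1.0
        for i in range(n):
            nw *= sum(valuations[i][g] for g in bundles[i])
        if nw > best_nw:
            best_nw, best = nw, bundles
    return best

# Example constraint: no agent may receive more than 2 goods.
vals = [[5, 1, 3, 2], [2, 4, 1, 5]]
print(mnw_allocation(vals, lambda bs: all(len(b) <= 2 for b in bs)))
```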
We introduce a model of fair division with market values, where indivisible goods must be partitioned among agents with (additive) subjective valuations, and each good additionally has a market value. The market valuation can be viewed as a separate additive valuation that holds identically across all the agents. We seek allocations that are simultaneously fair with respect to the subjective valuations and with respect to the market valuation. We show that an allocation that satisfies stochastically-dominant envy-freeness up to one good (SD-EF1) with respect to both the subjective valuations and the market valuation does not always exist, but the weaker guarantee of EF1 with respect to the subjective valuations along with SD-EF1 with respect to the market valuation can be guaranteed. We also study a number of other guarantees such as Pareto optimality, EFX, and MMS. In addition, we explore non-additive valuations and extend our model to cake-cutting. Along the way, we identify several tantalizing open questions.
We study temporal fair division, whereby a set of agents are allocated a (possibly different) set of goods on each day for a period of days. We consider this setting, as well as a number of its special cases formed by the restrictions to two agents, same goods on each day, identical preferences, or combinations thereof, and chart out the landscape of achieving two types of fairness guarantees simultaneously: fairness on each day (per day) and fairness over time (up to each day, or the weaker version, overall). In the most general setting, we prove that there always exists an allocation that is stochastically-dominant envy-free up to one good (SD-EF1) per day and proportional up to one good (PROP1) overall, and when all the agents have identical preferences, we show that SD-EF1 per day and SD-EF1 overall can be guaranteed. For the case of two agents, we prove that SD-EF1 per day and EF1 up to each day can be guaranteed using an envy balancing technique. We provide counterexamples for other combinations that establish our results as among the best guarantees possible, while also leaving open some tantalizing questions.
Large conferences such as NeurIPS and AAAI serve as crossroads of various AI fields, since they attract submissions from a vast number of communities. However, in some cases, this has resulted in a poor reviewing experience for some communities, whose submissions get assigned to less qualified reviewers outside of their communities. An often-advocated solution is to break up any such large conference into smaller conferences, but this can lead to isolation of communities and harm interdisciplinary research. We tackle this challenge by introducing a notion of group fairness, called the core, which requires that every possible community (subset of researchers) be treated in a way that prevents them from unilaterally benefiting by withdrawing from a large conference. We study a simple peer review model, prove that it always admits a reviewing assignment in the core, and design an efficient algorithm to find one such assignment. We use real data from the CVPR and ICLR conferences to compare our algorithm to existing reviewing assignment algorithms on a number of metrics.
Envy-freeness has become the cornerstone of fair division research. In settings where each individual is allocated a disjoint share of collective resources, it is a compelling fairness axiom which demands that no individual strictly prefer the allocation of another individual to their own. Unfortunately, in many real-life collective decision-making problems, the goal is to choose a (common) public outcome that is equally applicable to all individuals, and the notion of envy becomes vacuous. Consequently, this literature has avoided studying fairness criteria that focus on individuals feeling a sense of jealousy or resentment towards other individuals (rather than towards the system), missing out on a key aspect of fairness. In this work, we propose a novel fairness criterion, individual harm ratio, which is inspired by envy-freeness but applies to a broad range of collective decision-making settings. Theoretically, we identify minimal conditions under which this criterion and its groupwise extensions can be guaranteed, and study the computational complexity of related problems. Empirically, we conduct experiments with real data to show that our fairness criterion is powerful enough to differentiate between prominent decision-making algorithms for a range of tasks from voting and fair division to participatory budgeting and peer review.
A voting rule decides on a probability distribution over a set of m alternatives, based on rankings of those alternatives provided by agents. We assume that agents have cardinal utility functions over the alternatives, but voting rules have access to only the rankings induced by these utilities. We evaluate how well voting rules do on measures of social welfare and of proportional fairness, computed based on the hidden utility functions. In particular, we study the distortion of voting rules, which is a worst-case measure. It is an approximation ratio comparing the utilitarian social welfare of the optimum outcome to the social welfare produced by the outcome selected by the voting rule, in the worst case over possible input profiles and utility functions that are consistent with the input. The previous literature has studied distortion with unit-sum utility functions (which are normalized to sum to 1), and left a small asymptotic gap in the best possible distortion. Using tools from the theory of fair multi-winner elections, we propose the first voting rule which achieves the optimal distortion $\Theta(\sqrt{m})$ for unit-sum utilities. Our voting rule also achieves optimum $\Theta(\sqrt{m})$ distortion for a larger class of utilities, including unit-range and approval (0/1) utilities. We then take a similar worst-case approach to a quantitative measure of the fairness of a voting rule, called proportional fairness. Informally, it measures whether the influence of cohesive groups of agents on the voting outcome is proportional to the group size. We show that there is a voting rule which, without knowledge of the utilities, can achieve a $\Theta(\log m)$-approximation to proportional fairness. As a consequence of its proportional fairness, we show that this voting rule achieves $\Theta(\log m)$ distortion with respect to the Nash welfare, and selects a distribution that provides a $\Theta(\log m)$-approximation to the core, making it interesting for applications in participatory budgeting. For all three approximations, we show that $\Theta(\log m)$ is the best possible approximation.
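Concretely, the worst-case measure described here can be written as a single formula (standard in this literature; notation ours):
\[
\mathrm{dist}(f) \;=\; \sup_{\vec{\sigma}}\; \sup_{\vec{u}\,\triangleright\,\vec{\sigma}}\;
\frac{\max_{a}\sum_{i} u_i(a)}{\mathbb{E}\big[\sum_{i} u_i(f(\vec{\sigma}))\big]},
\]
where $\vec{u} \triangleright \vec{\sigma}$ ranges over utility profiles consistent with the reported rankings (normalized to unit-sum where applicable), and the expectation is over the randomness of the rule $f$.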
Consider the problem of allocating indivisible goods among agents with additive valuations, where monetary payments are not allowed. When randomization is allowed, it is possible to achieve compelling notions of fairness such as envy-freeness, which states that no agent should prefer any other agent's allocation to their own. When allocations must be deterministic, achieving exact fairness is impossible but approximate notions such as envy-freeness up to one good can be guaranteed. In “Best of Both Worlds: Ex Ante and Ex Post Fairness in Resource Allocation,” H. Aziz, R. Freeman, N. Shah, and R. Vaish ask whether it is possible to achieve both types of guarantees simultaneously. More specifically, they ask whether there exists a probability distribution over deterministic allocations such that every deterministic allocation in the support is envy-free up to one good and the distribution is exactly envy-free in expectation. The main result of the paper answers this question in the affirmative, showing that ex ante and ex post fairness need not be in conflict.
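A toy instance may help (ours, not from the paper): with two agents and two goods, the uniform lottery over the two allocations giving each agent one good is envy-free in expectation, and both support allocations are EF1. The script below checks both properties.

```python
import itertools

vals = {1: {"g1": 3, "g2": 1}, 2: {"g1": 2, "g2": 5}}
# Two deterministic allocations, each used with probability 1/2.
lottery = [({1: ["g1"], 2: ["g2"]}, 0.5), ({1: ["g2"], 2: ["g1"]}, 0.5)]

def v(i, bundle):
    return sum(vals[i][g] for g in bundle)

def ef1(alloc):
    # Envy toward j must vanish after removing some good from j's bundle.
    for i, j in itertools.permutations(alloc, 2):
        if v(i, alloc[i]) < v(i, alloc[j]):
            if not any(v(i, alloc[i]) >= v(i, alloc[j]) - vals[i][g]
                       for g in alloc[j]):
                return False
    return True

assert all(ef1(a) for a, _ in lottery)  # ex-post EF1

for i in vals:  # ex-ante EF: expected own value >= expected value of others
    own = sum(p * v(i, a[i]) for a, p in lottery)
    for j in vals:
        assert own >= sum(p * v(i, a[j]) for a, p in lottery) - 1e-9

print("lottery is ex-ante EF and ex-post EF1")
```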
We study the problem of designing voting rules that take as input the ordinal preferences of $n$ agents over a set of $m$ alternatives and output a single alternative, aiming to optimize the overall happiness of the agents. The input to the voting rule is each agent's ranking of the alternatives from most to least preferred, yet the agents have more refined (cardinal) preferences that capture the intensity with which they prefer one alternative over another. To quantify the extent to which voting rules can optimize over the cardinal preferences given access only to the ordinal ones, prior work has used the distortion measure, i.e., the worst-case approximation ratio between a voting rule's performance and the best performance achievable given the cardinal preferences. The work on the distortion of voting rules has been largely divided into two worlds: utilitarian distortion and metric distortion. In the former, the cardinal preferences of the agents correspond to general utilities and the goal is to maximize a normalized social welfare. In the latter, the agents' cardinal preferences correspond to costs given by distances in an underlying metric space and the goal is to minimize the (unnormalized) social cost. Several deterministic and randomized voting rules have been proposed and evaluated for each of these worlds separately, gradually improving the achievable distortion bounds, but none of the known voting rules perform well in both worlds simultaneously. In this work, we prove that one can achieve the best of both worlds by designing new voting rules, that simultaneously achieve near-optimal distortion guarantees in both distortion worlds. We also prove that this positive result does not generalize to the case where the voting rule is provided with the rankings of only the top-$t$ alternatives of each agent, for $t<m$.
We extend the recently introduced framework of metric distortion to multiwinner voting. In this framework, n agents and m alternatives are located in an underlying metric space. The exact distances between agents and alternatives are unknown. Instead, each agent provides a ranking of the alternatives, ordered from the closest to the farthest. Typically, the goal is to select a single alternative that approximately minimizes the total distance from the agents, and the worst-case approximation ratio is termed distortion. In the case of multiwinner voting, the goal is to select a committee of k alternatives that (approximately) minimizes the total cost to all agents. We consider the scenario where the cost of an agent for a committee is her distance from the q-th closest alternative in the committee. We reveal a surprising trichotomy on the distortion of multiwinner voting rules in terms of k and q: The distortion is unbounded when q⩽k/3, asymptotically linear in the number of agents when k/3<q⩽k/2, and constant when q>k/2.
In the classical version of online bipartite matching, there is a given set of offline vertices (aka agents) and another set of vertices (aka items) that arrive online. When each item arrives, its incident edges -- the agents who like the item -- are revealed and the algorithm must irrevocably match the item to such agents. We initiate the study of class fairness in this setting, where agents are partitioned into a set of classes and the matching is required to be fair with respect to the classes. We adopt popular fairness notions from the fair division literature such as envy-freeness (up to one item), proportionality, and maximin share fairness to our setting. Our class versions of these notions demand that all classes, regardless of their sizes, receive a fair treatment. We study deterministic and randomized algorithms for matching indivisible items (leading to integral matchings) and for matching divisible items (leading to fractional matchings). We design and analyze three novel algorithms. For matching indivisible items, we propose an adaptive-priority-based algorithm, MATCH-AND-SHIFT, prove that it achieves 1/2-approximation of both class envy-freeness up to one item and class maximin share fairness, and show that each guarantee is tight. For matching divisible items, we design a water-filling-based algorithm, EQUAL-FILLING, that achieves (1-1/e)-approximation of class envy-freeness and class proportionality; we prove (1-1/e) to be tight for class proportionality and establish a 3/4 upper bound on class envy-freeness. Finally, we build upon EQUAL-FILLING to design a randomized algorithm for matching indivisible items, EQUAL-FILLING-OCS, which achieves 0.593-approximation of class proportionality. The algorithm and its analysis crucially leverage the recently introduced technique of online correlated selection (OCS) [Fahrbach et al., 2020].
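The following sketch conveys the water-filling idea behind EQUAL-FILLING as we read it (simplified; not the paper's exact procedure): each arriving divisible item is poured, in small increments, toward the interested classes that currently have the lowest utility.

```python
def equal_filling_step(utilities, interested, eps=1e-3):
    """Fractionally allocate one unit-sized divisible item among classes.

    utilities: dict class -> current total utility (mutated in place).
    interested: the classes that like the arriving item.
    The item is allocated in eps-sized increments, each going to the
    currently least-served interested class (water-filling).
    """
    shares = {c: 0.0 for c in interested}
    remaining = 1.0
    while remaining > 1e-9:
        step = min(eps, remaining)
        c = min(interested, key=lambda k: utilities[k])
        utilities[c] += step
        shares[c] += step
        remaining -= step
    return shares

utils = {"A": 0.2, "B": 0.0, "C": 1.0}
print(equal_filling_step(utils, interested=["A", "B"]))  # B catches up to A first
```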
Development of perceptual image quality assessment (IQA) metrics has been of significant interest to the computer vision community. The aim of these metrics is to model the quality of an image as perceived by humans. Recent works in full-reference IQA perform a pixelwise comparison between deep features of the query and reference images for quality prediction. However, pixelwise feature comparison may not be meaningful if the distortion present in the query image is severe. In this context, we explore the utilization of no-reference features in the full-reference IQA task. Our model consists of both full-reference and no-reference branches. The full-reference branch uses both the distorted and reference images, whereas the no-reference branch uses only the distorted image. Our experiments show that the use of no-reference features boosts the performance of image quality assessment. Our model achieves higher SRCC and KRCC scores than a number of state-of-the-art algorithms on the KADID-10K and PIPAL datasets.
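The abstract does not spell out the architecture, so the PyTorch sketch below is only schematic (module sizes and names are our assumptions): a full-reference branch encodes the distorted and reference images jointly, a no-reference branch encodes the distorted image alone, and a small head fuses the two feature vectors into a quality score.

```python
import torch
import torch.nn as nn

class FRNRQualityNet(nn.Module):
    """Schematic fusion of full-reference and no-reference features.

    Not the authors' architecture; channel and layer sizes are assumptions.
    """
    def __init__(self, feat_dim=64):
        super().__init__()
        def encoder(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.fr_branch = encoder(6)   # distorted + reference, stacked channelwise
        self.nr_branch = encoder(3)   # distorted image only
        self.head = nn.Sequential(nn.Linear(2 * feat_dim, 64), nn.ReLU(),
                                  nn.Linear(64, 1))

    def forward(self, distorted, reference):
        fr = self.fr_branch(torch.cat([distorted, reference], dim=1))
        nr = self.nr_branch(distorted)
        return self.head(torch.cat([fr, nr], dim=1)).squeeze(-1)

net = FRNRQualityNet()
score = net(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64))
```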
We design online algorithms for the fair allocation of public goods to a set of $N$ agents over a sequence of $T$ rounds and focus on improving their performance using predictions. In the basic model, a public good arrives in each round, the algorithm learns every agent's value for the good, and must irrevocably decide the amount of investment in the good without exceeding a total budget of $B$ across all rounds. The algorithm can utilize (potentially inaccurate) predictions of each agent's total value for all the goods to arrive. We measure the performance of the algorithm using a proportional fairness objective, which informally demands that every group of agents be rewarded in proportion to its size and the cohesiveness of its preferences. In the special case of binary agent preferences and a unit budget, we show that $O(\log N)$ proportional fairness can be achieved without using any predictions, and that this is optimal even if perfectly accurate predictions were available. However, for general preferences and budget no algorithm can achieve better than $\Theta(T/B)$ proportional fairness without predictions. We show that algorithms with (reasonably accurate) predictions can do much better, achieving $\Theta(\log (T/B))$ proportional fairness. We also extend this result to a general model in which a batch of $L$ public goods arrive in each round and achieve $O(\log (\min(N,L) \cdot T/B))$ proportional fairness. Our exact bounds are parametrized as a function of the error in the predictions and the performance degrades gracefully with increasing errors.
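One common way to make the proportional fairness objective precise (our paraphrase; the exact normalization in the paper may differ) is: an allocation $\mathbf{x}$ is $\alpha$-proportionally fair if, for every feasible allocation $\mathbf{y}$,
\[
\frac{1}{N} \sum_{i=1}^{N} \frac{u_i(\mathbf{y})}{u_i(\mathbf{x})} \;\le\; \alpha.
\]
A large group of agents with cohesive preferences that is underserved by $\mathbf{x}$ can point to a $\mathbf{y}$ that makes the left-hand side large, so a small $\alpha$ certifies that no such group exists.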
The notion of distortion in social choice problems has been defined to measure the loss in efficiency---typically measured by the utilitarian social welfare, the sum of utilities of the participating agents---due to having access only to limited information about the preferences of the agents. We survey the most significant results of the literature on distortion from the past 15 years, and highlight important open problems and the most promising avenues of ongoing and future work.
The wisdom of the crowd has long become the de facto approach for eliciting information from individuals or experts in order to predict the ground truth. However, classical democratic approaches for aggregating individual \emph{votes} only work when the opinion of the majority of the crowd is relatively accurate. A clever recent approach, \emph{surprisingly popular voting}, elicits additional information from the individuals, namely their \emph{prediction} of other individuals' votes, and provably recovers the ground truth even when experts are in minority. This approach works well when the goal is to pick the correct option from a small list, but when the goal is to recover a true ranking of the alternatives, a direct application of the approach requires eliciting too much information. We explore practical techniques for extending the surprisingly popular algorithm to ranked voting by partial votes and predictions and designing robust aggregation rules. We experimentally demonstrate that even a little prediction information helps surprisingly popular voting outperform classical approaches.
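The underlying primitive, the surprisingly popular rule of Prelec et al. for picking one option, selects the option whose actual vote share most exceeds its average predicted share. A minimal sketch with made-up data:

```python
def surprisingly_popular(votes, predictions):
    """Surprisingly popular answer for a pick-one question.

    votes: list of chosen options.
    predictions: list of dicts, each respondent's predicted fraction of
        votes for every option, e.g. {"A": 0.7, "B": 0.3}.
    Picks the option whose actual vote share most exceeds its average
    predicted share (Prelec et al., 2017).
    """
    n = len(votes)
    def surprise(o):
        actual = votes.count(o) / n
        predicted = sum(p[o] for p in predictions) / len(predictions)
        return actual - predicted
    return max(set(votes), key=surprise)

# Experts in the minority: most respondents wrongly vote "A", but everyone
# predicts "A" to be even more popular than it actually is.
votes = ["A"] * 6 + ["B"] * 4
preds = [{"A": 0.8, "B": 0.2}] * 10
print(surprisingly_popular(votes, preds))  # -> "B"
```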
We introduce a new model for two-sided matching which allows us to borrow popular fairness notions from the fair division literature such as envy-freeness up to one good and maximin share guarantee. In our model, each agent is matched to multiple agents on the other side over whom she has additive preferences. We demand fairness for each side separately, giving rise to notions such as double envy-freeness up to one match (DEF1) and double maximin share guarantee (DMMS). We show that (a slight strengthening of) DEF1 cannot always be achieved, but in the special case where both sides have identical preferences, the round-robin algorithm with a carefully designed agent ordering achieves it. In contrast, DMMS cannot be achieved even when both sides have identical preferences.
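For reference, here is the plain round-robin procedure in its one-sided form (a sketch; the carefully designed agent ordering for the two-sided setting is the paper's contribution and is not reproduced here). With additive preferences, one-sided round-robin is known to guarantee EF1.

```python
def round_robin(valuations, order):
    """Agents pick their favorite remaining item in a fixed cyclic order.

    valuations[i][g]: agent i's additive value for item g.
    order: sequence of agent indices defining the picking order.
    """
    remaining = set(range(len(valuations[0])))
    bundles = {i: [] for i in order}
    turn = 0
    while remaining:
        i = order[turn % len(order)]
        g = max(remaining, key=lambda g: valuations[i][g])
        bundles[i].append(g)
        remaining.remove(g)
        turn += 1
    return bundles

print(round_robin([[4, 3, 2, 1], [1, 2, 3, 4]], order=[0, 1]))
```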
We study the fundamental problem of allocating indivisible goods to agents with additive preferences. We consider eliciting from each agent only a ranking of her k most preferred goods instead of her full cardinal valuations. We characterize the amount of preference information that must be elicited in order to satisfy envy-freeness up to one good and approximate maximin share guarantee, two widely studied fairness notions. We also analyze the multiplicative loss in social welfare incurred due to the lack of full information with and without fairness requirements.
We study the classical problem of matching n agents to n objects, where the agents have ranked preferences over the objects. We focus on two popular desiderata from the matching literature: Pareto optimality and rank-maximality. Instead of asking the agents to report their complete preferences, our goal is to learn a desirable matching from partial preferences, specifically a matching that is necessarily Pareto optimal (NPO) or necessarily rank-maximal (NRM) under any completion of the partial preferences. We focus on the top-k model in which agents reveal a prefix of their preference rankings. We design efficient algorithms to check if a given matching is NPO or NRM, and to check whether such a matching exists given top-k partial preferences. We also study online algorithms for eliciting partial preferences adaptively, and prove bounds on their competitive ratio.
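For intuition, Pareto optimality of a matching under complete preferences can be checked by brute force on tiny instances (the paper's algorithms are efficient and work from top-k partial preferences; this sketch is only a baseline):

```python
from itertools import permutations

def is_pareto_optimal(matching, prefs):
    """Brute-force PO check for tiny instances.

    matching[i]: object assigned to agent i.
    prefs[i]: agent i's ranking of objects, most preferred first.
    Returns False iff some other matching makes every agent weakly
    better off and at least one agent strictly better off.
    """
    n = len(matching)
    def rank(i, o):
        return prefs[i].index(o)
    for other in permutations(range(n)):  # other[i]: object for agent i
        if all(rank(i, other[i]) <= rank(i, matching[i]) for i in range(n)) and \
           any(rank(i, other[i]) < rank(i, matching[i]) for i in range(n)):
            return False
    return True

prefs = [[0, 1, 2], [0, 2, 1], [1, 0, 2]]
print(is_pareto_optimal([0, 2, 1], prefs))  # -> True
```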
Custom and natural lighting conditions can be emulated in images of a scene during post-editing, and the extraordinary capabilities of deep learning frameworks can be utilized for this purpose. Deep image relighting allows automatic photo enhancement by illumination-specific retouching. Most state-of-the-art methods for relighting are run-time intensive and memory inefficient. In this paper, we propose an efficient, real-time framework, the Deep Stacked Relighting Network (DSRN), for image relighting, which utilizes features aggregated from the input image at different scales. Our model is very lightweight, with a total size of about 42 MB, and has an average inference time of about 0.0116 s for images of resolution $1024 \times 1024$, which is faster than other multi-scale models. Our solution is quite robust at translating the color temperature of the input image to that of the target image, and it also performs moderately well at generating light gradients with respect to the target image. Additionally, we show that if images illuminated from opposite directions are used as input, the qualitative results improve over using a single input image.
A major open question in fair allocation of indivisible items is whether there always exists an allocation of chores that is Pareto optimal (PO) and envy-free up to one item (EF1). We answer this question affirmatively for the natural class of bivalued utilities, where each agent partitions the chores into easy and difficult ones, and has cost $p > 1$ for chores that are difficult for her and cost $1$ for chores that are easy for her. Such an allocation can be found in polynomial time using an algorithm based on the Fisher market. We also show that for a slightly broader class of utilities, where each agent $i$ can have a potentially different $p_i$, an allocation that is maximin share fair (MMS) always exists and can be computed in polynomial time, provided that each $p_i$ is an integer. Our MMS arguments also hold when allocating goods instead of chores, and extend to another natural class of utilities, namely weakly lexicographic utilities.
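A sketch of the EF1 condition for chores used above (for chores, envy is eliminated by removing a chore from the envious agent's own bundle); the bivalued instance is made up:

```python
def ef1_for_chores(bundles, cost):
    """Check EF1 for chores: after removing some chore from her own bundle,
    no agent's cost should exceed her cost for any other bundle.

    bundles[i]: list of chores assigned to agent i.
    cost[i][c]: agent i's (additive) cost for chore c.
    """
    n = len(bundles)
    def c(i, b):
        return sum(cost[i][ch] for ch in b)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            own, other = c(i, bundles[i]), c(i, bundles[j])
            if own <= other:
                continue
            if not any(own - cost[i][ch] <= other for ch in bundles[i]):
                return False
    return True

# Bivalued costs: difficult chores cost p = 3, easy ones cost 1.
cost = [{0: 3, 1: 1, 2: 1, 3: 3}, {0: 1, 1: 3, 2: 3, 3: 1}]
print(ef1_for_chores([[1, 2], [0, 3]], cost))  # -> True
```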
We study the following metric distortion problem: there are two finite sets of points, V and C, that lie in the same metric space, and our goal is to choose a point in C whose total distance from the points in V is as small as possible. However, rather than having access to the underlying distance metric, we only know, for each point in V, a ranking of its distances to the points in C. We propose algorithms that choose a point in C using only these rankings as input and we provide bounds on their distortion (worst-case approximation ratio). A prominent motivation for this problem comes from voting theory, where V represents a set of voters, C represents a set of candidates, and the rankings correspond to ordinal preferences of the voters. A major conjecture in this framework is that the optimal deterministic algorithm has distortion 3. We resolve this conjecture by providing a polynomial-time algorithm that achieves distortion 3, matching a known lower bound. We do so by proving a novel lemma about matching voters to candidates, which we refer to as the ranking-matching lemma. This lemma induces a family of novel algorithms, which may be of independent interest, and we show that a special algorithm in this family achieves distortion 3. We also provide more refined, parameterized bounds using the notion of decisiveness, which quantifies the extent to which a voter may prefer her top choice relative to all others. Finally, we introduce a new randomized algorithm with improved distortion compared to known results, and also provide improved lower bounds on the distortion of all deterministic and randomized algorithms.
We build on an emerging line of work which studies strategic manipulations in training data provided to machine learning algorithms. Specifically, we focus on the ubiquitous task of linear regression. Prior work focused on the design of strategyproof algorithms, which aim to prevent such manipulations altogether by aligning the incentives of data sources. However, algorithms used in practice are often not strategyproof, which induces a strategic game among the agents. We focus on a broad class of non-strategyproof algorithms for linear regression, namely $\ell_p$ norm minimization ($p > 1$) with convex regularization. We show that when manipulations are bounded, every algorithm in this class admits a unique pure Nash equilibrium outcome. We also shed light on the structure of this equilibrium by uncovering a surprising connection between strategyproof algorithms and pure Nash equilibria of non-strategyproof algorithms in a broader setting, which may be of independent interest. Finally, we analyze the quality of equilibria under these algorithms in terms of the price of anarchy.
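The algorithm class itself is easy to write down. A minimal sketch (illustrative data; scipy's general-purpose optimizer, not a production solver) fits linear regression by $\ell_p$ norm minimization with $\ell_2$ regularization; the strategic game studied above arises when the data sources choose their reported $y_i$ values in response to such a rule.

```python
import numpy as np
from scipy.optimize import minimize

def lp_regression(X, y, p=1.5, lam=0.1):
    """Linear regression by l_p norm minimization with l2 regularization.

    Minimizes sum_i |x_i . w - y_i|^p + lam * ||w||^2, a member of the
    (non-strategyproof) class of algorithms discussed above.
    """
    def loss(w):
        return np.sum(np.abs(X @ w - y) ** p) + lam * np.dot(w, w)
    w0 = np.zeros(X.shape[1])
    return minimize(loss, w0).x

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = X @ np.array([1.0, -2.0]) + rng.normal(scale=0.1, size=50)
print(lp_regression(X, y))  # close to [1, -2] on honest data
```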
In the allocation of resources to a set of agents, how do fairness guarantees impact the social welfare? A quantitative measure of this impact is the price of fairness, which measures the worst-case loss of social welfare due to fairness constraints. While initially studied for divisible goods, recent work on the price of fairness also studies the setting of indivisible goods.
In this paper, we resolve the price of two well-studied fairness notions for the allocation of indivisible goods: envy-freeness up to one good (EF1), and approximate maximin share (MMS). For both EF1 and 1/2-MMS guarantees, we show, via different techniques, that the price of fairness is $O(\sqrt{n})$, where $n$ is the number of agents. From previous work, it follows that our bounds are tight. Our bounds are obtained via efficient algorithms. For 1/2-MMS, our bound holds for additive valuations, whereas for EF1, our bound holds for the more general class of subadditive valuations. This resolves an open problem posed by Bei et al. (2019).
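In symbols (a standard formulation; notation ours), for a fairness notion $F$ the price of fairness is
\[
\mathrm{PoF}(F) \;=\; \sup_{I}\; \frac{\max_{A}\, \mathrm{SW}_I(A)}{\max_{A \text{ satisfying } F}\, \mathrm{SW}_I(A)},
\]
where the suprema range over instances $I$ and allocations $A$, and $\mathrm{SW}_I$ denotes utilitarian social welfare. The results above pin this quantity down to $\Theta(\sqrt{n})$ for both EF1 and 1/2-MMS.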
We propose a multi-agent variant of the classical multi-armed bandit problem, in which there are $N$ agents and $K$ arms, and pulling an arm generates a (possibly different) stochastic reward for each agent. Unlike the classical multi-armed bandit problem, the goal is not to learn the "best arm"; indeed, each agent may perceive a different arm to be the best for her personally. Instead, we seek to learn a fair distribution over the arms. Drawing on a long line of research in economics and computer science, we use the Nash social welfare as our notion of fairness. We design multi-agent variants of three classic multi-armed bandit algorithms and show that they achieve sublinear regret, which is now measured in terms of the lost Nash social welfare.
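For concreteness, here is an explore-then-commit baseline in the same spirit (ours, not one of the paper's three algorithms): estimate each agent's mean reward per arm by uniform exploration, then commit to the arm distribution maximizing the empirical Nash social welfare, found by exponentiated gradient ascent on the sum of log-utilities.

```python
import numpy as np

def nsw_bandit(mu, explore_per_arm=50, seed=0):
    """Explore-then-commit baseline for the fair multi-agent bandit.

    mu[k][i]: true mean reward of arm k for agent i (Bernoulli rewards).
    Returns the distribution over arms maximizing the empirical NSW,
    i.e. sum_i log(p . mu_hat[:, i]) over the probability simplex.
    """
    rng = np.random.default_rng(seed)
    K, N = mu.shape
    counts, sums = np.zeros(K), np.zeros((K, N))
    for t in range(explore_per_arm * K):          # exploration phase
        k = t % K
        sums[k] += rng.random(N) < mu[k]
        counts[k] += 1
    mu_hat = sums / counts[:, None] + 1e-6

    p = np.ones(K) / K                            # commit phase
    for _ in range(2000):
        grad = (mu_hat / (p @ mu_hat)).sum(axis=1)  # d/dp sum_i log(p.mu_i)
        p *= np.exp(0.01 * grad)                    # exponentiated gradient
        p /= p.sum()
    return p

mu = np.array([[0.9, 0.1], [0.1, 0.9], [0.6, 0.6]])
print(nsw_bandit(mu).round(3))  # concentrates on the balanced arm
```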
We study fair allocation of indivisible goods among agents. Prior research focuses on additive agent preferences, which leads to an impossibility when seeking truthfulness, fairness, and efficiency. We show that when agents have binary additive preferences, a compelling rule -- maximum Nash welfare (MNW) -- provides all three guarantees. Specifically, we show that deterministic MNW with lexicographic tie-breaking is group strategyproof in addition to being envy-free up to one good and Pareto optimal. We also prove that fractional MNW -- known to be group strategyproof, envy-free, and Pareto optimal -- can be implemented as a distribution over deterministic MNW allocations, which are envy-free up to one good. Our work establishes maximum Nash welfare as the ultimate allocation rule in the realm of binary additive preferences.
We study the problem of allocating indivisible goods among agents with additive valuations. When randomization is allowed, it is possible to achieve compelling notions of fairness such as envy-freeness, which states that no agent should prefer any other agent's allocation to her own. When allocations must be deterministic, achieving exact fairness is impossible but approximate notions such as envy-freeness up to one good can be guaranteed. Our goal in this work is to achieve both simultaneously, by constructing a randomized allocation that is exactly fair ex-ante and approximately fair ex-post. The key question we address is whether ex-ante envy-freeness can be achieved in combination with ex-post envy-freeness up to one good. We settle this positively by designing an efficient algorithm that achieves both properties simultaneously. If we additionally require economic efficiency, we obtain an impossibility result. However, we show that economic efficiency and ex-ante envy-freeness can be simultaneously achieved if we slightly relax our ex-post fairness guarantee. Along the way, we characterize the well-known Maximum Nash Welfare allocation rule in terms of a recently introduced fairness guarantee that applies to groups of agents, not just individuals.
We study the following metric distortion problem: there are two finite sets of points, $V$ and $C$, that lie in the same metric space, and our goal is to choose …
We study the following metric distortion problem: there are two finite sets of points, $V$ and $C$, that lie in the same metric space, and our goal is to choose a point in $C$ whose total distance from the points in $V$ is as small as possible. However, rather than having access to the underlying distance metric, we only know, for each point in $V$, a ranking of its distances to the points in $C$. We propose algorithms that choose a point in $C$ using only these rankings as input and we provide bounds on their \emph{distortion} (worst-case approximation ratio). A prominent motivation for this problem comes from voting theory, where $V$ represents a set of voters, $C$ represents a set of candidates, and the rankings correspond to ordinal preferences of the voters. A major conjecture in this framework is that the optimal deterministic algorithm has distortion $3$. We resolve this conjecture by providing a polynomial-time algorithm that achieves distortion $3$, matching a known lower bound. We do so by proving a novel lemma about matching voters to candidates, which we refer to as the \emph{ranking-matching lemma}. This lemma induces a family of novel algorithms, which may be of independent interest, and we show that a special algorithm in this family achieves distortion $3$. We also provide more refined, parameterized, bounds using the notion of $\alpha$-decisiveness, which quantifies the extent to which a voter may prefer her top choice relative to all others. Finally, we introduce a new randomized algorithm with improved distortion compared to known results, and also provide improved lower bounds on the distortion of all deterministic and randomized algorithms.
In distortion-based analysis of social choice rules over metric spaces, voters and candidates are jointly embedded in a metric space. Voters rank candidates by non-decreasing distance. The mechanism, receiving only this ordinal (comparison) information, must select a candidate approximately minimizing the sum of distances from all voters to the chosen candidate. It is known that while the Copeland rule and related rules guarantee distortion at most 5, the distortion of many other standard voting rules, such as Plurality, Veto, or k-approval, grows unboundedly in the number n of candidates. An advantage of Plurality, Veto, or k-approval with small k is that they require less communication from the voters; all deterministic social choice rules known to achieve constant distortion require voters to transmit their complete rankings of all candidates. This motivates our study of the tradeoff between the distortion and the amount of communication in deterministic social choice rules. We show that any one-round deterministic voting mechanism in which each voter communicates only the candidates she ranks in a given set of k positions must have distortion at least (2n-k)/k; we give a mechanism achieving an upper bound of O(n/k), which matches the lower bound up to a constant. For more general communication-bounded voting mechanisms, in which each voter communicates b bits of information about her ranking, we show a slightly weaker lower bound of Ω(n/b) on the distortion. For randomized mechanisms, Random Dictatorship achieves expected distortion strictly smaller than 3, almost matching a lower bound of 3 − 2/n for any randomized mechanism that only receives each voter's top choice. We close this gap by giving a simple randomized social choice rule which only uses each voter's first choice, and achieves expected distortion 3 − 2/n.
We determine the quality of randomized social choice algorithms in a setting in which the agents have metric preferences: every agent has a cost for each alternative, and these costs form a metric. We assume that these costs are unknown to the algorithms (and possibly even to the agents themselves), which means we cannot simply select the optimal alternative, i.e. the alternative that minimizes the total agent cost (or median agent cost). However, we do assume that the agents know their ordinal preferences that are induced by the metric space. We examine randomized social choice functions that require only this ordinal information and select an alternative that is good in expectation with respect to the costs from the metric. To quantify how good a randomized social choice function is, we bound the distortion, which is the worst-case ratio between the expected cost of the alternative selected and the cost of the optimal alternative. We provide new distortion bounds for a variety of randomized algorithms, for both general metrics and for important special cases. Our results show a sizable improvement in distortion over deterministic algorithms.
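A hedged sketch of perhaps the simplest randomized rule in this space, Random Dictatorship (elect a uniformly random voter's favorite), on an assumed line-metric instance; its expected distortion here is well below the deterministic worst case:

```python
voters = [0.0, 0.0, 1.0]               # hidden voter positions
cands = {"a": 0.0, "b": 1.0}           # hidden candidate positions

def cost(c):
    return sum(abs(v - cands[c]) for v in voters)

def favorite(v):
    return min(cands, key=lambda c: abs(v - cands[c]))

expected = sum(cost(favorite(v)) for v in voters) / len(voters)
print(expected / min(cost(c) for c in cands))   # 4/3, well under 3 - 2/n
```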
We consider the problem of fairly allocating indivisible public goods. We model the public goods as elements with feasibility constraints on what subsets of elements can be chosen, and assume that agents have additive utilities across elements. Our model generalizes existing frameworks such as fair public decision making and participatory budgeting. We study a groupwise fairness notion called the core, which generalizes well-studied notions of proportionality and Pareto efficiency, and requires that each subset of agents must receive an outcome that is fair relative to its size. In contrast to the case of divisible public goods (where fractional allocations are permitted), the core is not guaranteed to exist when allocating indivisible public goods. Our primary contributions are the notion of an additive approximation to the core (with a tiny multiplicative loss), and polynomial time algorithms that achieve a small additive approximation, where the additive factor is relative to the largest utility of an agent for an element. If the feasibility constraints define a matroid, we show an additive approximation of 2. A similar approach yields a constant additive bound when the feasibility constraints define a matching. For feasibility constraints defining an arbitrary packing polytope with mild restrictions, we show an additive guarantee that is logarithmic in the width of the polytope. Our algorithms are based on the convex program for maximizing the Nash social welfare, but differ significantly from previous work in how it is used. As far as we are aware, our work is the first to approximate the core in indivisible settings.
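A brute-force sketch of the (exact) core condition on a tiny cardinality-constrained instance; everything here is an illustrative assumption, the coalition budget is floored rather than treated exactly, and the paper's polynomial-time additive approximations are not reproduced:

```python
from itertools import combinations

n, budget = 4, 2
elements = range(4)
u = [[3, 0, 1, 0], [3, 0, 0, 1], [0, 3, 1, 0], [0, 3, 0, 1]]  # additive utilities

def util(i, outcome):
    return sum(u[i][e] for e in outcome)

def in_core(outcome):
    for size in range(1, n + 1):
        scaled = int(budget * size / n)          # coalition's proportional budget
        for S in combinations(range(n), size):
            for dev in combinations(elements, scaled):
                if all(util(i, dev) > util(i, outcome) for i in S):
                    return False                 # S blocks via deviation dev
    return True

print([w for w in combinations(elements, budget) if in_core(w)])  # [(0, 1)]
```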
We generalize the classic problem of fairly allocating indivisible goods to the problem of fair public decision making, in which a decision must be made on several social issues simultaneously, and, unlike the classic setting, a decision can provide positive utility to multiple players. We extend the popular fairness notion of proportionality (which cannot always be guaranteed) to our more general setting, and introduce three novel relaxations --- proportionality up to one issue, round robin share, and pessimistic proportional share --- that are also interesting in the classic goods allocation setting. We show that the Maximum Nash Welfare solution, which is known to satisfy appealing fairness properties in the classic setting, satisfies or approximates all three relaxations in our framework. We also provide polynomial time algorithms and hardness results for finding allocations satisfying these axioms, with or without insisting on Pareto optimality.
In this paper, we study the metric distortion of deterministic social choice rules that choose a winning candidate from a set of candidates based on voter preferences. Voters and candidates are located in an underlying metric space. A voter has cost equal to her distance to the winning candidate. Ordinal social choice rules only have access to the ordinal preferences of the voters that are assumed to be consistent with the metric distances. Our goal is to design an ordinal social choice rule with minimum distortion, which is the worst-case ratio, over all consistent metrics, between the social cost of the rule and that of the optimal omniscient rule with knowledge of the underlying metric space. The distortion of the best deterministic social choice rule was known to be between 3 and 5. It had been conjectured that any rule that only looks at the weighted tournament graph on the candidates cannot have distortion better than 5. In our paper, we disprove it by presenting a weighted tournament rule with distortion of 4.236. We design this rule by generalizing the classic notion of uncovered sets, and further show that this class of rules cannot have distortion better than 4.236. We then propose a new voting rule, via an alternative generalization of uncovered sets. We show that if a candidate satisfying the criterion of this voting rule exists, then choosing such a candidate yields a distortion bound of 3, matching the lower bound. We present a combinatorial conjecture that implies distortion of $3$, and verify it for small numbers of candidates and voters by computer experiments. Using our framework, we also show that selecting any candidate guarantees distortion of at most 3 when the weighted tournament graph is cyclically symmetric.
Distortion-based analysis has established itself as a fruitful framework for comparing voting mechanisms. m voters and n candidates are jointly embedded in an (unknown) metric space, and the voters submit rankings of candidates by non-decreasing distance from themselves. Based on the submitted rankings, the social choice rule chooses a winning candidate; the quality of the winner is the sum of the (unknown) distances to the voters. The rule's choice will in general be suboptimal, and the worst-case ratio between the cost of its chosen candidate and the optimal candidate is called the rule's distortion. It was shown in prior work that every deterministic rule has distortion at least 3, while the Copeland rule and related rules guarantee distortion at most 5; a very recent result gave a rule with distortion 2 + √5 ≈ 4.236. We provide a framework based on LP-duality and flow interpretations of the dual which provides a simpler and more unified way for proving upper bounds on the distortion of social choice rules. We illustrate the utility of this approach with three examples. First, we show that the Ranked Pairs and Schulze rules have distortion Θ(√n). Second, we give a fairly simple proof of a strong generalization of the upper bound of 5 on the distortion of Copeland, to social choice rules with short paths from the winning candidate to the optimal candidate in generalized weak preference graphs. A special case of this result recovers the recent 2 + √5 guarantee. Finally, our framework naturally suggests a combinatorial rule that is a strong candidate for achieving distortion 3, which had also been proposed in recent work. We prove that the distortion bound of 3 would follow from any of three combinatorial conjectures we formulate.
We study the following metric distortion problem: there are two finite sets of points, V and C, that lie in the same metric space, and our goal is to choose a point in C whose total distance from the points in V is as small as possible. However, rather than having access to the underlying distance metric, we only know, for each point in V, a ranking of its distances to the points in C. We propose algorithms that choose a point in C using only these rankings as input and we provide bounds on their distortion (worst-case approximation ratio). A prominent motivation for this problem comes from voting theory, where V represents a set of voters, C represents a set of candidates, and the rankings correspond to ordinal preferences of the voters. A major conjecture in this framework is that the optimal deterministic algorithm has distortion 3. We resolve this conjecture by providing a polynomial-time algorithm that achieves distortion 3, matching a known lower bound. We do so by proving a novel lemma about matching voters to candidates, which we refer to as the ranking-matching lemma. This lemma induces a family of novel algorithms, which may be of independent interest, and we show that a special algorithm in this family achieves distortion 3. We also provide more refined, parameterized, bounds using the notion of decisiveness, which quantifies the extent to which a voter may prefer her top choice relative to all others. Finally, we introduce a new randomized algorithm with improved distortion compared to known results, and also provide improved lower bounds on the distortion of all deterministic and randomized algorithms.
Machine learning relies on the assumption that unseen test instances of a classification problem follow the same distribution as observed training data. However, this principle can break down when machine learning is used to make important decisions about the welfare (employment, education, health) of strategic individuals. Knowing information about the classifier, such individuals may manipulate their attributes in order to obtain a better classification outcome. As a result of this behavior -- often referred to as gaming -- the performance of the classifier may deteriorate sharply. Indeed, gaming is a well-known obstacle for using machine learning methods in practice; in financial policy-making, the problem is widely known as Goodhart's law. In this paper, we formalize the problem, and pursue algorithms for learning classifiers that are robust to gaming.
We study an online linear classification problem in which the data is generated by strategic agents who manipulate their features in an effort to change the classification outcome. In rounds, the learner deploys a classifier, then an adversarially chosen agent arrives and possibly manipulates her features to optimally respond to the learner's choice of classifier. The learner has no knowledge of the agents' utility functions or "real" features, which may vary widely across agents. Instead, the learner is only able to observe their "revealed preferences", i.e., the manipulated feature vectors they provide. For a broad family of agent cost functions, we give a computationally efficient learning algorithm that is able to obtain diminishing "Stackelberg regret" --- a form of policy regret that guarantees that the learner is realizing loss nearly as small as that of the best classifier in hindsight, even allowing for the fact that agents would have best-responded differently to the optimal classifier.
We consider high-dimensional sparse regression with arbitrary, possibly severe or coordinated, errors in the covariates matrix. We are interested in understanding how many corruptions we can tolerate while identifying the correct support. To the best of our knowledge, neither standard outlier rejection techniques, nor recently developed robust regression algorithms (which focus only on corrupted response variables), nor recent algorithms for dealing with stochastic noise or erasures can provide guarantees on support recovery. As we show, neither can the natural brute-force algorithm that takes exponential time to search over subsets of data points and support columns for the combination yielding the smallest regression error.
We explore the power of a simple idea: replace the essential linear algebraic calculation - the inner product - with a robust counterpart that cannot be greatly affected by a controlled number of arbitrarily corrupted points: the trimmed inner product. We consider three popular algorithms in the uncorrupted setting: Thresholding Regression, Lasso, and the Dantzig selector, and show that the counterparts obtained using the trimmed inner product are provably robust.
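A minimal sketch of the idea (one natural trimming variant, assumed here: discard the k coordinate-wise products of largest magnitude before summing). A handful of corrupted coordinates ruins the ordinary inner product but barely moves the trimmed one:

```python
import numpy as np

def trimmed_inner_product(x, y, k):
    prods = x * y
    keep = np.argsort(np.abs(prods))[: len(prods) - k]   # drop k largest products
    return prods[keep].sum()

rng = np.random.default_rng(0)
x, y = rng.normal(size=1000), rng.normal(size=1000)
y_corrupt = y.copy()
y_corrupt[:5] += 1e6                         # 5 adversarially corrupted entries
print(np.dot(x, y), np.dot(x, y_corrupt))    # clean value vs. ruined value
print(trimmed_inner_product(x, y_corrupt, k=5))   # close to the clean value
```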
We study the problem of fair allocation of m indivisible items among n agents with additive valuations, using the popular notion of maximin share (MMS) as our measure of fairness. An MMS allocation provides each agent a bundle worth at least her maximin share. While it is known that such an allocation need not exist [5, 7], a series of remarkable works [1-3, 6, 7] provided 2/3-approximation algorithms in which each agent receives a bundle worth at least 2/3 times her maximin share. More recently, [4] showed the existence of 3/4-MMS allocations and a PTAS to find a (3/4 - ε)-MMS allocation. Most of the previous works utilize intricate algorithms and require agents' approximate MMS values, which are computationally expensive to obtain.
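For concreteness, a brute-force computation of a single agent's maximin share (exponential in the number of items; illustrative only, not one of the approximation algorithms cited above):

```python
from itertools import product

def maximin_share(values, n):
    m, best = len(values), 0
    for assignment in product(range(n), repeat=m):
        bundles = [sum(values[g] for g in range(m) if assignment[g] == b)
                   for b in range(n)]
        best = max(best, min(bundles))   # the agent keeps her worst bundle
    return best

print(maximin_share([7, 5, 4, 3, 2], n=2))   # 10, e.g. {7, 3} vs {5, 4, 2}
```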
We study the problem of allocating indivisible goods among agents. When randomization is allowed, it is possible to achieve compelling fairness guarantees such as envy-freeness (Foley, 1967), which states that no agent should (in expectation) prefer any other agent's allocation to her own. For instance, we can simply allocate each good, independently of the other goods, to an agent chosen uniformly at random. However, while this scheme is fair ex-ante, it may produce outcomes that are very unfair ex-post, such as by chance assigning all the goods to a single agent. On the other hand, in the absence of randomization, some amount of ex-post unfairness is unavoidable. Nevertheless, it is possible to guarantee relaxations of envy-freeness that bound the maximum level of envy. One popular relaxation is envy-freeness up to one good (Lipton et al., 2004; Budish, 2011), which requires that the envy of any agent toward another agent can be removed by the elimination of at most one good from the envied agent's bundle.
We study the problem of computing maximin share guarantees, a recently introduced fairness notion. Given a set of $n$ agents and a set of goods, the maximin share of a single agent is the best that she can guarantee to herself, if she would be allowed to partition the goods in any way she prefers, into $n$ bundles, and then receive her least desirable bundle. The objective then in our problem is to find a partition, so that each agent is guaranteed her maximin share. In settings with indivisible goods, such allocations are not guaranteed to exist, so we resort to approximation algorithms. Our main result is a $2/3$-approximation that runs in polynomial time for any number of agents. This improves upon the algorithm of Procaccia and Wang, which also produces a $2/3$-approximation but runs in polynomial time only for a constant number of agents. To achieve this, we redesign certain parts of their algorithm. Furthermore, motivated by the apparent difficulty, both theoretically and experimentally, in finding lower bounds on the existence of approximate solutions, we undertake a probabilistic analysis. We prove that in randomly generated instances, with high probability there exists a maximin share allocation. This can be seen as a justification of the experimental evidence reported in relevant works. Finally, we provide further positive results for two special cases that arise from previous works. The first one is the intriguing case of $3$ agents, for which it is already known that exact maximin share allocations do not always exist (contrary to the case of $2$ agents). We provide a $7/8$-approximation algorithm, improving the previously known result of $3/4$. The second case is when all item values belong to $\{0, 1, 2\}$, extending the $\{0, 1\}$ setting studied in Bouveret and Lemaître. We obtain an exact algorithm for any number of agents in this case.
We consider the problem of fitting a linear model to data held by individuals who are concerned about their privacy. Incentivizing most players to truthfully report their data to the analyst constrains our design to mechanisms that provide a privacy guarantee to the participants; we use differential privacy to model individuals' privacy losses. This immediately poses a problem, as differentially private computation of a linear model necessarily produces a biased estimation, and existing approaches to design mechanisms to elicit data from privacy-sensitive individuals do not generalize well to biased estimators. We overcome this challenge through an appropriate design of the computation and payment scheme.
In distortion-based analysis of social choice rules over metric spaces, one assumes that all voters and candidates are jointly embedded in a common metric space. Voters rank candidates by non-decreasing distance. The mechanism, receiving only this ordinal (comparison) information, should select a candidate approximately minimizing the sum of distances from all voters. It is known that while the Copeland rule and related rules guarantee distortion at most 5, many other standard voting rules, such as Plurality, Veto, or $k$-approval, have distortion growing unboundedly in the number $n$ of candidates.
Plurality, Veto, or $k$-approval with small $k$ require less communication from the voters than all deterministic social choice rules known to achieve constant distortion. This motivates our study of the tradeoff between the distortion and the amount of communication in deterministic social choice rules.
We show that any one-round deterministic voting mechanism in which each voter communicates only the candidates she ranks in a given set of $k$ positions must have distortion at least $\frac{2n-k}{k}$; we give a mechanism achieving an upper bound of $O(n/k)$, which matches the lower bound up to a constant. For more general communication-bounded voting mechanisms, in which each voter communicates $b$ bits of information about her ranking, we show a slightly weaker lower bound of $\Omega(n/b)$ on the distortion.
For randomized mechanisms, it is known that Random Dictatorship achieves expected distortion strictly smaller than 3, almost matching a lower bound of $3-\frac{2}{n}$ for any randomized mechanism that only receives each voter's top choice. We close this gap, by giving a simple randomized social choice rule which only uses each voter's first choice, and achieves expected distortion $3-\frac{2}{n}$.
In the committee selection problem, we are given m candidates, and n voters. Candidates can have different weights. A committee is a subset of candidates, and its weight is the sum of weights of its candidates. Each voter expresses an ordinal ranking over all possible committees. The only assumption we make on preferences is monotonicity: If S ⊆ S′ are two committees, then any voter weakly prefers S′ to S.
Approximate Core for Committee Selection via Multilinear Extension and Market Clearing. Kamesh Munagala, Yiheng Shen, Kangning Wang, and Zhiyi Wang. In Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 2229-2252. https://doi.org/10.1137/1.9781611977073.89
Motivated by civic problems such as participatory budgeting and multiwinner elections, we consider the problem of public good allocation: Given a set of indivisible projects (or candidates) of different sizes, and voters with different monotone utility functions over subsets of these candidates, the goal is to choose a budget-constrained subset of these candidates (or a committee) that provides fair utility to the voters. The notion of fairness we adopt is that of core stability from cooperative game theory: No subset of voters should be able to choose another blocking committee of proportionally smaller size that provides strictly larger utility to all voters that deviate. The core provides a strong notion of fairness, subsuming other notions that have been widely studied in computational social choice. It is well known that an exact core need not exist even when utility functions of the voters are additive across candidates. We therefore relax the problem to allow approximation: Voters can only deviate to the blocking committee if, after they choose any extra candidate (called an additament), their utility still increases by an α factor. If no blocking committee exists under this definition, we call this an α-core. Our main result is that an α-core, for α < 67.37, always exists when utilities of the voters are arbitrary monotone submodular functions, and this can be computed in polynomial time. This result improves to α < 9.27 for additive utilities, albeit without the polynomial time guarantee. Our results are a significant improvement over prior work that only shows logarithmic approximations for the case of additive utilities. We complement our results with a lower bound of α > 1.015 for submodular utilities, and a lower bound showing that for general monotone utilities no approximation factor bounded by any function of the number of voters and candidates is possible.
We study social choice mechanisms in an implicit utilitarian framework with a metric constraint, where the goal is to minimize Distortion, the worst case social cost of an ordinal mechanism relative to underlying cardinal utilities. We consider two additional desiderata: Constant sample complexity and Squared Distortion. Constant sample complexity means that the mechanism (potentially randomized) only uses a constant number of ordinal queries regardless of the number of voters and alternatives. Squared Distortion is a measure of variance of the Distortion of a randomized mechanism. Our primary contribution is the first social choice mechanism with constant sample complexity and constant Squared Distortion (which also implies constant Distortion). We call the mechanism Random Referee, because it uses a random agent to compare two alternatives that are the favorites of two other random agents. We prove that the use of a comparison query is necessary: no mechanism that only elicits the top-k preferred alternatives of voters (for constant k) can have Squared Distortion that is sublinear in the number of alternatives. We also prove that unlike any top-k only mechanism, the Distortion of Random Referee meaningfully improves on benign metric spaces, using the Euclidean plane as a canonical example. Finally, among top-1 only mechanisms, we introduce Random Oligarchy. The mechanism asks just 3 queries and is essentially optimal among the class of such mechanisms with respect to Distortion. In summary, we demonstrate the surprising power of constant sample complexity mechanisms generally, and just three random voters in particular, to provide some of the best known results in the implicit utilitarian framework.
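A hedged sketch of Random Referee as described above; the line metric and positions are assumed only to simulate answers, and the mechanism itself sees nothing beyond its three ordinal queries:

```python
import random

voters = [0.0, 0.2, 0.4, 0.9, 1.0]      # hidden cardinal information
cands = {"a": 0.1, "b": 0.95, "c": 0.5}

favorites = [min(cands, key=lambda c: abs(v - cands[c])) for v in voters]

def prefers(referee, x, y):             # one ordinal comparison query
    return abs(voters[referee] - cands[x]) <= abs(voters[referee] - cands[y])

def random_referee():
    i, j, referee = random.sample(range(len(voters)), 3)
    x, y = favorites[i], favorites[j]   # favorites of two random voters
    return x if prefers(referee, x, y) else y   # a third voter adjudicates

print(random_referee())
```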
It is standard in computational social choice to analyse welfare considerations under the assumption of normalized utilities. In this note, we summarize some common reasons for this approach. We then mention another justification that is typically ignored but has solid normative appeal. The central concept used in the 'new' justification can also be used more widely as a social objective.
We study two influential voting rules proposed in the 1890s by Phragmén and Thiele, which elect a committee of k candidates that proportionally represents the voters. Voters provide their preferences by approving an arbitrary number of candidates. Previous work has proposed proportionality axioms satisfied by Thiele's rule (now known as Proportional Approval Voting, PAV) but not by Phragmén's rule. By proposing two new proportionality axioms (laminar proportionality and priceability) satisfied by Phragmén but not Thiele, we show that the two rules achieve two distinct forms of proportional representation. Phragmén's rule ensures that all voters have a similar amount of influence on the committee, and Thiele's rule ensures a fair utility distribution. Thiele's rule is a welfarist voting rule (one that maximizes a function of voter utilities). We show that no welfarist rule can satisfy our new axioms, and we prove that no such rule can satisfy the core. Conversely, some welfarist fairness properties cannot be guaranteed by Phragmén-type rules. This formalizes the difference between the two types of proportionality. We then introduce an attractive committee rule which satisfies a property intermediate between the core and extended justified representation (EJR). It satisfies laminar proportionality, priceability, and is computable in polynomial time. We show that our new rule provides a logarithmic approximation to the core. On the other hand, PAV provides a factor-2 approximation to the core, and this factor is optimal for rules that are fair in the sense of the Pigou-Dalton principle. The full version of the paper is available at http://arxiv.org/pdf/1911.11747.pdf.
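For reference, a brute-force sketch of Thiele's rule (PAV): each voter contributes the harmonic sum 1 + 1/2 + ... + 1/r when r of her approved candidates sit in the committee, and the rule elects a size-k committee maximizing the total (the exhaustive search and tiny instance are assumed for illustration):

```python
from itertools import combinations

def pav_score(committee, approvals):
    return sum(sum(1 / r for r in range(1, len(set(committee) & A) + 1))
               for A in approvals)

def pav(candidates, approvals, k):
    return max(combinations(candidates, k),
               key=lambda W: pav_score(W, approvals))

approvals = [{"a", "b"}, {"a", "b"}, {"a", "b"}, {"c"}]
print(pav(["a", "b", "c"], approvals, k=2))   # ('a', 'b')
```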
In a bivariate (x, y) scatterplot the three-group resistant line is that line for which the median residual in each outer third of the data (ordered on x) is zero. It was proposed by Tukey as an exploratory method resistant to outliers in x and y and suited to hand calculation. A dual plot representation of the procedure yields a fast, convergent algorithm, nonparametric confidence intervals for the slope, consistency, influence function, and asymptotic normality results. Monte Carlo results show that small sample efficiencies exceed their asymptotic values in important cases. Both breakdown and efficiency are adequate for exploratory work. Replacing medians by other M-estimators of location increases efficiency substantially without affecting breakdown or computational complexity. The method thus combines bounded influence and acceptable efficiency with conceptual and computational simplicity.
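A minimal sketch of the construction under assumed simplifications (a single pass, with the slope taken from the medians of the outer thirds; the iteration that drives the outer median residuals to zero is omitted):

```python
import statistics as st

def resistant_line(xs, ys):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    third = len(xs) // 3
    left, right = order[:third], order[-third:]
    mx = lambda idx: st.median(xs[i] for i in idx)
    my = lambda idx: st.median(ys[i] for i in idx)
    slope = (my(right) - my(left)) / (mx(right) - mx(left))
    intercept = st.median(ys[i] - slope * xs[i] for i in order)
    return slope, intercept

xs = list(range(1, 10))
ys = [2.1, 3.9, 6.2, 8.0, 9.8, 12.1, 14.2, 15.9, 300.0]  # one wild outlier
print(resistant_line(xs, ys))   # slope stays 2.0 despite the outlier
```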
Several relaxations of envy-freeness, tailored to fair division in settings with indivisible goods, have been introduced within the last decade. Due to the lack of general existence results for most of these concepts, great attention has been paid to establishing approximation guarantees. In this work, we propose a simple algorithm that is universally fair in the sense that it returns allocations that have good approximation guarantees with respect to four such fairness notions at once. In particular, this is the first algorithm achieving a (φ−1)-approximation of envy-freeness up to any good (EFX) and a 2/(φ+2)-approximation of groupwise maximin share fairness (GMMS), where φ is the golden ratio. The best known approximation factor, in polynomial time, for either one of these fairness notions prior to this work was 1/2. Moreover, the returned allocation achieves envy-freeness up to one good (EF1) and a 2/3-approximation of pairwise maximin share fairness (PMMS). While EFX is our primary focus, we also exhibit how to fine-tune our algorithm and improve further the guarantees for GMMS or PMMS. Finally, we show that GMMS (and thus PMMS and EFX) allocations always exist when the number of goods does not exceed the number of agents by more than two.
Usually a voting rule requires agents to give their preferences as linear orders. However, in some cases it is impractical for an agent to give a linear order over all the alternatives. It has been suggested to let agents submit partial orders instead. Then, given a voting rule, a profile of partial orders, and an alternative (candidate) c, two important questions arise: first, is it still possible for c to win, and second, is c guaranteed to win? These are the possible winner and necessary winner problems, respectively. Each of these two problems is further divided into two sub-problems: determining whether c is a unique winner (that is, c is the only winner), or determining whether c is a co-winner (that is, c is in the set of winners). We consider the setting where the number of alternatives is unbounded and the votes are unweighted. We completely characterize the complexity of possible/necessary winner problems for the following common voting rules: a class of positional scoring rules (including Borda), Copeland, maximin, Bucklin, ranked pairs, voting trees, and plurality with runoff.
We study Fisher markets that admit equilibria wherein each good is integrally assigned to some agent. While strong existence and computational guarantees are known for equilibria of Fisher markets with additive valuations (Eisenberg and Gale 1959; Orlin 2010), such equilibria, in general, assign goods fractionally to agents. Hence, Fisher markets are not directly applicable in the context of indivisible goods. In this work we show that one can always bypass this hurdle and, up to a bounded change in agents' budgets, obtain markets that admit an integral equilibrium. We refer to such markets as pure markets and show that, for any given Fisher market (with additive valuations), one can efficiently compute a "nearby" pure market with an accompanying integral equilibrium. Our work on pure markets leads to novel algorithmic results for fair division of indivisible goods. Prior work in discrete fair division has shown that, under additive valuations, there always exist allocations that simultaneously achieve the seemingly incompatible properties of fairness and efficiency (Caragiannis et al. 2016); here fairness refers to envy-freeness up to one good (EF1) and efficiency corresponds to Pareto efficiency. However, polynomial-time algorithms are not known for finding such allocations. Considering relaxations of proportionality and EF1, respectively, as our notions of fairness, we show that fair and Pareto efficient allocations can be computed in strongly polynomial time.
The goal of fair division is to distribute resources among competing players in a fair way. Envy-freeness is the most extensively studied fairness notion in fair division. Envy-free allocations do not always exist with indivisible goods, motivating the study of relaxed versions of envy-freeness. We study the envy-freeness up to any good (EFX) property, which states that no player prefers the bundle of another player following the removal of any single good, and prove the first general results about this property. We use the leximin solution to show existence of EFX allocations in several contexts, sometimes in conjunction with Pareto optimality. For two players with valuations obeying a mild assumption, one of these results provides stronger guarantees than the currently deployed algorithm on Spliddit, a popular fair division website. Unfortunately, finding the leximin solution can require exponential time. We show that this is necessary by proving an exponential lower bound on the number of value queries needed to identify an EFX allocation, even for two players with identical valuations. We consider both additive and more general valuations, and our work suggests that there is a rich landscape of problems to explore in the fair division of indivisible goods with different classes of player valuations.
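A small checker matching the EFX definition above (the instance is assumed; this is not the leximin computation, which the abstract notes can require exponential time):

```python
def is_efx(vals, bundles):
    # EFX: no agent prefers another's bundle even after removing
    # any single good from it.
    n = len(bundles)
    u = lambda i, B: sum(vals[i][g] for g in B)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if any(u(i, bundles[j]) - vals[i][g] > u(i, bundles[i])
                   for g in bundles[j]):
                return False
    return True

vals = [[5, 3, 2], [4, 4, 1]]
print(is_efx(vals, [[0], [1, 2]]))   # True: no envy survives any single removal
```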
We study mechanisms for candidate selection that seek to minimize the social cost, where voters and candidates are associated with points in some underlying metric space. The social cost of a candidate is the sum of its distances to each voter. Some of our work assumes that these points can be modeled on the real line, but other results of ours are more general.
This paper is part of an emerging line of work at the intersection of machine learning and mechanism design, which aims to avoid noise in training data by correctly aligning the incentives of data sources. Specifically, we focus on the ubiquitous problem of linear regression, where strategyproof mechanisms have previously been identified in two dimensions. In our setting, agents have single-peaked preferences and can manipulate only their response variables. Our main contribution is the discovery of a family of group strategyproof linear regression mechanisms in any number of dimensions, which we call generalized resistant hyperplane mechanisms. The game-theoretic properties of these mechanisms --- and, in fact, their very existence --- are established through a connection to a discrete version of the Ham Sandwich Theorem.
We study positional voting rules when candidates and voters are embedded in a common metric space, and cardinal preferences are naturally given by distances in the metric space. In a positional voting rule, each candidate receives a score from each ballot based on the ballot's rank order; the candidate with the highest total score wins the election. The cost of a candidate is his sum of distances to all voters, and the distortion of an election is the ratio between the cost of the elected candidate and the cost of the optimum candidate. We consider the case when candidates are representative of the population, in the sense that they are drawn i.i.d. from the population of the voters, and analyze the expected distortion of positional voting rules. Our main result is a clean and tight characterization of positional voting rules that have constant expected distortion (independent of the number of candidates and the metric space). Our characterization result immediately implies constant expected distortion for Borda Count and elections in which each voter approves a constant fraction of all candidates. On the other hand, we obtain super-constant expected distortion for Plurality, Veto, and approving a constant number of candidates. These results contrast with previous results on voting with metric preferences: When the candidates are chosen adversarially, all of the preceding voting rules have distortion linear in the number of candidates or voters. Thus, the model of representative candidates allows us to distinguish voting rules which seem equally bad in the worst case.
In this article we study the effect of each observation in a data set on the minimum sum of absolute errors (MSAE) regression. First, we study the extent to which the value of a response variable for an observation can be changed without affecting the MSAE regression fit, and then show that MSAE regression is less sensitive to certain types of discrepant observations. Second, we study the effect of deleting an observation on parameter estimation. We illustrate the analysis with examples. Key words: case analysis; diagnostic technique; influential observation; L1-norm; least absolute value; linear programming; sensitivity analysis.
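A hedged sketch of MSAE (L1) regression cast as a linear program via scipy.optimize.linprog; the instance is assumed, and the point is only to exhibit the insensitivity to a discrepant response value discussed above:

```python
import numpy as np
from scipy.optimize import linprog

def msae_fit(X, y):
    n, d = X.shape
    # Variables: [beta (free sign), r_plus, r_minus]; minimize the sum of
    # residual parts subject to X @ beta + r_plus - r_minus = y.
    c = np.concatenate([np.zeros(d), np.ones(2 * n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * d + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds)
    return res.x[:d]

X = np.column_stack([np.ones(6), np.arange(6.0)])
y = np.array([0.1, 1.0, 2.1, 2.9, 4.0, 50.0])   # last response is discrepant
print(msae_fit(X, y))   # slope stays near 1; least squares would be dragged up
```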
Recent work has shown deep neural networks (DNNs) to be highly susceptible to well-designed, small perturbations at the input layer, or so-called adversarial examples. Taking images as an example, such distortions are often imperceptible, but can result in 100% misclassification for a state-of-the-art DNN. We study the structure of adversarial examples and explore network topology, pre-processing and training strategies to improve the robustness of DNNs. We perform various experiments to assess the removability of adversarial examples by corrupting with additional noise and pre-processing with denoising autoencoders (DAEs). We find that DAEs can remove substantial amounts of the adversarial noise. However, when stacking the DAE with the original DNN, the resulting network can again be attacked by new adversarial examples with even smaller distortion. As a solution, we propose the Deep Contractive Network, a model with a new end-to-end training procedure that includes a smoothness penalty inspired by the contractive autoencoder (CAE). This increases the network's robustness to adversarial examples, without a significant performance penalty.
We study the problem of allocating indivisible goods among n agents in a fair manner. For this problem, maximin share (MMS) is a well-studied solution concept which provides a fairness threshold. Specifically, maximin share is defined as the minimum utility that an agent can guarantee for herself when asked to partition the set of goods into n bundles such that the remaining (n-1) agents pick their bundles adversarially. An allocation is deemed to be fair if every agent gets a bundle whose valuation is at least her maximin share. Even though maximin shares provide a natural benchmark for fairness, it has its own drawbacks and, in particular, it is not sufficient to rule out unsatisfactory allocations. Motivated by these considerations, in this work we define a stronger notion of fairness, called groupwise maximin share guarantee (GMMS). In GMMS, we require that the maximin share guarantee is achieved not just with respect to the grand bundle, but also among all the subgroups of agents. Hence, this solution concept strengthens MMS and provides an ex-post fairness guarantee. We show that in specific settings, GMMS allocations always exist. We also establish the existence of approximate GMMS allocations under additive valuations, and develop a polynomial-time algorithm to find such allocations. Moreover, we establish a scale of fairness wherein we show that GMMS implies approximate envy freeness. Finally, we empirically demonstrate the existence of GMMS allocations in a large set of randomly generated instances. For the same set of instances, we additionally show that our algorithm achieves an approximation factor better than the established, worst-case bound.
C. L. Mallows. Non-null Ranking Models. I. Biometrika, Volume 44, Issue 1-2, June 1957, Pages 114–130. https://doi.org/10.1093/biomet/44.1-2.114
In this article, we will formalize the method of dual fitting and the idea of factor-revealing LP. This combination is used to design and analyze two greedy algorithms for the metric uncapacitated facility location problem. Their approximation factors are 1.861 and 1.61, with running times of O(m log m) and O(n^3), respectively, where n is the total number of vertices and m is the number of edges in the underlying complete bipartite graph between cities and facilities. The algorithms are used to improve recent results for several variants of the problem.
In participatory budgeting, communities collectively decide on the allocation of public tax dollars for local public projects. In this work, we consider the question of fairly aggregating the preferences of community members to determine an allocation of funds to projects. This problem is different from standard fair resource allocation because of public goods: The allocated goods benefit all users simultaneously. Fairness is crucial in participatory decision making, since generating equitable outcomes is an important goal of these processes. We argue that the classic game-theoretic notion of the core captures fairness in this setting. To compute the core, we first develop a novel characterization of a public goods market equilibrium called the Lindahl equilibrium, which is always a core solution. We then provide the first (to our knowledge) polynomial time algorithm for computing such an equilibrium for a broad set of utility functions; our algorithm also generalizes (in a non-trivial way) the well-known concept of proportional fairness. We use our theoretical insights to perform experiments on real participatory budgeting voting data. We empirically show that the core can be efficiently computed for utility functions that naturally model our practical setting, and examine the relation of the core with the familiar welfare objective. Finally, we address concerns of incentives and mechanism design by developing a randomized approximately dominant-strategy truthful mechanism building on the exponential mechanism from differential privacy.
We study a participatory budgeting setting in which a single divisible resource (such as money or time) must be divided among a set of projects. For example, participatory budgeting could be used to decide how to divide a city's tax surplus between its departments of health, education, infrastructure, and parks. A voter might propose a division of the tax surplus among the four departments into the fractions (30%, 40%, 20%, 10%). The city could invite each citizen to submit such a budget proposal, and they could then be aggregated by a suitable mechanism. In this paper, we seek mechanisms of this form that are resistant to manipulation by the voters. In particular, we require that no voter can, by lying, move the aggregate division toward her preference on one alternative without moving it away from her preference by at least as much on other alternatives.
We study the problem of allocating divisible bads (chores) among multiple agents with additive utilities, when money transfers are not allowed. The competitive rule is known to be the best mechanism for goods with additive utilities and was recently extended to chores by Bogomolnaia et al. (2017). For both goods and chores, the rule produces Pareto optimal and envy-free allocations. In the case of goods, the outcome of the competitive rule can be easily computed. Competitive allocations solve the Eisenberg-Gale convex program; hence the outcome is unique and can be approximately found by standard gradient methods. An exact algorithm that runs in polynomial time in the number of agents and goods was given by Orlin.
In the case of chores, the competitive rule does not solve any convex optimization problem; instead, competitive allocations correspond to local minima, local maxima, and saddle points of the Nash Social Welfare on the Pareto frontier of the set of feasible utilities. The rule becomes multivalued and none of the standard methods can be applied to compute its outcome.
In this paper, we show that all the outcomes of the competitive rule for chores can be computed in strongly polynomial time if either the number of agents or the number of chores is fixed. The approach is based on a combination of three ideas: all consumption graphs of Pareto optimal allocations can be listed in polynomial time; for a given consumption graph, a candidate for a competitive allocation can be constructed via an explicit formula; and a given allocation can be checked for being competitive using a maximum flow computation, as in Devanur et al. (2002).
Our algorithm immediately gives an approximately fair allocation of indivisible chores by the rounding technique of Barman and Krishnamurthy (2018).