All published works (56)

We study finite normal-form games under a narrow bracketing assumption: when players play several games simultaneously, they consider each one separately. We show that under mild additional assumptions, players must play either Nash equilibria, logit quantal response equilibria, or their generalizations, which capture players with various risk attitudes.
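For readers unfamiliar with the logit quantal response benchmark mentioned above, here is a minimal fixed-point computation for a 2x2 game. The payoff matrices and rationality parameter `lam` are illustrative, not from the paper:

```python
import math

def logit_qre_2x2(A, B, lam, iters=2000):
    """Fixed-point iteration for a logit quantal response equilibrium of a
    2x2 game. A[i][j], B[i][j] are payoffs to players 1 and 2 when player 1
    plays i and player 2 plays j; lam is the rationality parameter
    (lam -> infinity approaches best-response play)."""
    p, q = 0.5, 0.5  # probability of action 0 for players 1 and 2
    for _ in range(iters):
        # expected payoff of each action against the opponent's current mix
        u0 = q * A[0][0] + (1 - q) * A[0][1]
        u1 = q * A[1][0] + (1 - q) * A[1][1]
        v0 = p * B[0][0] + (1 - p) * B[1][0]
        v1 = p * B[0][1] + (1 - p) * B[1][1]
        # logit responses: choice probabilities proportional to exp(lam * u)
        p = math.exp(lam * u0) / (math.exp(lam * u0) + math.exp(lam * u1))
        q = math.exp(lam * v0) / (math.exp(lam * v0) + math.exp(lam * v1))
    return p, q

# Matching pennies: the logit QRE is the uniform mix for any lam.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
p, q = logit_qre_2x2(A, B, lam=2.0)
```

In matching pennies the iteration stays at the uniform mix, which is both the Nash equilibrium and the logit QRE for every value of `lam`.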
We revisit a classical question of how individual consumer preferences and incomes shape aggregate behavior. We develop a method that applies to populations with homothetic preferences and reduces the hard problem of aggregation to simply computing a weighted average in the space of logarithmic expenditure functions. We apply the method to identify aggregation-invariant preference domains, characterize aggregate preferences from common domains like linear or Leontief, and describe indecomposable preferences that do not correspond to the aggregate behavior of any non-trivial population. Applications include robust welfare analysis, information design, discrete choice models, pseudo-market mechanisms, and preference identification.
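The flavor of the averaging result can be seen in the simplest homothetic family, Cobb-Douglas, where averaging in log-expenditure space amounts to income-weighted averaging of budget shares. The numbers below are hypothetical, chosen only to make the check concrete:

```python
def cobb_douglas_demand(alpha, income, prices):
    # A Cobb-Douglas consumer spends the share alpha[i] of income on good i.
    return [a * income / p for a, p in zip(alpha, prices)]

# two consumers: (budget shares, income); fixed income shares are assumed
consumers = [([0.8, 0.2], 30.0), ([0.2, 0.8], 70.0)]
prices = [2.0, 5.0]

# aggregate demand is the sum of individual demands
agg = [sum(cobb_douglas_demand(a, m, prices)[i] for a, m in consumers)
       for i in range(2)]

# representative consumer: income-weighted average of the budget shares
M = sum(m for _, m in consumers)
alpha_bar = [sum(a[i] * m for a, m in consumers) / M for i in range(2)]
rep = cobb_douglas_demand(alpha_bar, M, prices)
```

At every price vector the aggregate demand coincides with the demand of a single Cobb-Douglas consumer whose shares are the income-weighted average, which is the one-line special case of the paper's aggregation method.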
We study matching markets with aligned preferences and establish a connection between common design objectives -- stability, efficiency, and fairness -- and the theory of optimal transport. Optimal transport gives new insights into the structural properties of matchings obtained from pursuing these objectives, and into the trade-offs between different objectives. Matching markets with aligned preferences provide a tractable stylized model capturing supply-demand imbalances in a range of settings such as partnership formation, school choice, organ donor exchange, and markets with transferable utility where bargaining over transfers happens after a match is formed.
We study the problem of allocating divisible bads (chores) among multiple agents with additive utilities when monetary transfers are not allowed. The competitive rule is known for its remarkable fairness and efficiency properties in the case of goods. This rule was extended to chores by Bogomolnaia, Moulin, Sandomirskiy, and Yanovskaya. For both goods and chores, the rule produces Pareto optimal and envy-free allocations. In the case of goods, the outcome of the competitive rule can be easily computed. Competitive allocations solve the Eisenberg-Gale convex program; hence the outcome is unique and can be approximately found by standard gradient methods. An exact algorithm that runs in polynomial time in the number of agents and goods was given by Orlin. In the case of chores, the competitive rule does not solve any convex optimization problem; instead, competitive allocations correspond to local minima, local maxima, and saddle points of the Nash social welfare on the Pareto frontier of the set of feasible utilities. The Pareto frontier may contain many such points and, consequently, the outcome of the competitive rule is no longer unique. In this paper, we show that all the outcomes of the competitive rule for chores can be computed in strongly polynomial time if either the number of agents or the number of chores is fixed. The approach is based on a combination of three ideas: all consumption graphs of Pareto optimal allocations can be listed in polynomial time; for a given consumption graph, a candidate for a competitive utility profile can be constructed via an explicit formula; each candidate can be checked for competitiveness and the allocation can be reconstructed using a maximum flow computation.
Our algorithm immediately gives an approximately-fair allocation of indivisible chores by the rounding technique of Barman and Krishnamurthy. Funding: This work was supported by National Science Foundation (CNS 1518941); Lady Davis Fellowship Trust, Hebrew University of Jerusalem; H2020 European Research Council (740435); Linde Institute at Caltech.
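For intuition about the verification step, here is a sketch of the analogous check in the easier goods setting: at candidate prices, a competitive agent spends exactly her budget and consumes only maximum value-per-dollar items. The instance is made up; the paper's actual check for chores works through a maximum-flow computation rather than this direct test:

```python
def is_competitive(valuations, prices, allocation, budgets, tol=1e-9):
    """Necessary conditions for a competitive equilibrium with divisible
    goods: each agent spends exactly her budget and holds positive shares
    only of items maximizing her value-per-dollar ratio."""
    n_items = len(prices)
    for i, shares in enumerate(allocation):
        spent = sum(shares[o] * prices[o] for o in range(n_items))
        if abs(spent - budgets[i]) > tol:
            return False
        bpb = [valuations[i][o] / prices[o] for o in range(n_items)]
        best = max(bpb)
        if any(shares[o] > tol and bpb[o] < best - tol
               for o in range(n_items)):
            return False
    return True

vals = [[2.0, 1.0], [1.0, 2.0]]
prices = [1.0, 1.0]
budgets = [1.0, 1.0]
good = [[1.0, 0.0], [0.0, 1.0]]   # each agent takes her favorite good
bad = [[0.0, 1.0], [1.0, 0.0]]    # swapped: violates value-per-dollar
```

The first allocation passes the check; the swapped one fails because each agent holds an item with a strictly lower value-per-dollar ratio than her best option.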
We consider a model of Bayesian persuasion with one informed sender and several uninformed receivers. The sender can affect receivers' beliefs via private signals, and the sender's objective depends on the combination of induced beliefs. We reduce the persuasion problem to the Monge-Kantorovich problem of optimal transportation. Using insights from optimal transportation theory, we identify several classes of multi-receiver problems that admit explicit solutions, get general structural results, derive a dual representation for the value, and generalize the celebrated concavification formula for the value to multi-receiver problems.
The Boltzmann distribution is used in statistical mechanics to describe the distribution of states in systems with a given temperature. We give a novel characterization of this distribution as the unique one satisfying independence for uncoupled systems. The theorem boils down to a statement about symmetries of the convolution semigroup of finitely supported probability measures on the natural numbers, or, alternatively, about symmetries of the multiplicative semigroup of polynomials with non-negative coefficients.
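The independence property at the heart of this characterization is easy to verify numerically: for two uncoupled systems, whose energies simply add, the Boltzmann distribution of the combined system factorizes into the product of the marginals. The energy levels and temperature below are arbitrary:

```python
import math
from itertools import product

def boltzmann(energies, T):
    """Boltzmann distribution: probabilities proportional to exp(-E/T)."""
    weights = [math.exp(-e / T) for e in energies]
    Z = sum(weights)  # partition function
    return [w / Z for w in weights]

E1 = [0.0, 1.0, 3.0]   # energy levels of system 1 (arbitrary numbers)
E2 = [0.5, 2.0]        # energy levels of system 2
T = 1.3

# Combined uncoupled system: states are pairs, energies add.
joint = boltzmann([e1 + e2 for e1, e2 in product(E1, E2)], T)
factored = [p * q for p, q in product(boltzmann(E1, T), boltzmann(E2, T))]
```

The two lists agree because exp(-(E1+E2)/T) splits into exp(-E1/T)exp(-E2/T) and the combined partition function is the product of the two; the paper shows the Boltzmann family is the only one with this property.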
We investigate inherent stochasticity in individual choice behavior across diverse decisions. Each decision is modeled as a menu of actions with outcomes, and a stochastic choice rule assigns probabilities to … We investigate inherent stochasticity in individual choice behavior across diverse decisions. Each decision is modeled as a menu of actions with outcomes, and a stochastic choice rule assigns probabilities to actions based on the outcome profile. Outcomes can be monetary values, lotteries, or elements of an abstract outcome space. We characterize decomposable rules: those that predict independent choices across decisions not affecting each other. For monetary outcomes, such rules form the one-parametric family of multinomial logit rules. For general outcomes, there exists a universal utility function on the set of outcomes, such that choice follows multinomial logit with respect to this utility. The conclusions are robust to replacing strict decomposability with an approximate version or allowing minor dependencies on the actions' labels. Applications include choice over time, under risk, and with ambiguity.
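The decomposability that singles out multinomial logit can be checked directly: when two unrelated decisions are bundled into one menu of action pairs with additive utilities, the joint logit choice is the product of the two separate logit choices. The utility numbers are arbitrary:

```python
import math
from itertools import product

def logit(utilities, beta=1.0):
    """Multinomial logit: choice probabilities proportional to exp(beta*u)."""
    weights = [math.exp(beta * u) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

u = [1.0, 0.2, -0.5]   # utilities of the actions in decision 1
v = [0.0, 2.0]         # utilities of the actions in decision 2
# Bundled decision: choose a pair of actions; utilities add across decisions.
joint = logit([a + b for a, b in product(u, v)])
factored = [p * q for p, q in product(logit(u), logit(v))]
```

The agreement of `joint` and `factored` is the independence the paper's characterization pins down: within this framework, only logit rules (with a common parameter across decisions) decompose this way.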
In a private private information structure, agents' signals contain no information about the signals of their peers. We study how informative such structures can be, and characterize those that are on the Pareto frontier, in the sense that it is impossible to give more information to any agent without violating privacy. In our main application, we show how to optimally disclose information about an unknown state under the constraint of not revealing anything about a correlated variable that contains sensitive information.
We consider a model of Bayesian persuasion with one informed sender and several uninformed receivers. The sender can affect receivers' beliefs via private signals and the sender's objective depends on the combination of induced beliefs.
When assets are to be divided among several partners, for example, a partnership split, fair division theory can be used to determine a fair allocation. The applicability of existing approaches is limited as they either treat assets as divisible resources that end up being shared among participants or deal with indivisible objects providing only approximate fairness. In practice, sharing is often possible but undesirable, and approximate fairness is not adequate, particularly for highly valuable assets. In “Efficient Fair Division with Minimal Sharing,” Sandomirskiy and Segal-Halevi introduce a novel approach offering a middle ground: the number of shared objects is minimized while maintaining exact fairness and economic efficiency. This minimization can be conducted in polynomial time for generic instances if the number of agents or objects is fixed. Experiments on real data demonstrate a substantial improvement over current methods.
We consider the problem of revenue-maximizing Bayesian auction design with several bidders having independent private values over several items. We show that it can be reduced to the problem of continuous optimal transportation introduced by Beckmann (1952) where the optimal transportation flow generalizes the concept of ironed virtual valuations to the multi-item setting. We establish the strong duality between the two problems and the existence of solutions. The results rely on insights from majorization and optimal transportation theories and on the characterization of feasible interim mechanisms by Hart and Reny (2015).
We study efficiency in general collective choice problems where agents have ordinal preferences and randomization is allowed. We explore the structure of preference profiles where ex-ante and ex-post efficiency coincide, offer a unifying perspective on the known results, and give several new characterizations. The results have implications for well-studied mechanisms, including random serial dictatorship, and for a number of specific environments, such as the dichotomous, single-peaked, and social choice domains.
An informed sender communicates with an uninformed receiver through a sequence of uninformed mediators; agents' utilities depend on the receiver's action and the state. For any number of mediators, the sender's optimal value is characterized. For one mediator, the characterization has the geometric meaning of a constrained concavification of the sender's utility; optimal persuasion requires the same number of signals as without mediators, and the presence of the mediator is never profitable for the sender. Surprisingly, a second mediator may improve the value, but optimal persuasion may then require more signals.
Bayes-rational agents reside on a social network. They take binary actions sequentially and irrevocably, and the right action depends on an unobservable state. Each agent receives a bounded private signal … Bayes-rational agents reside on a social network. They take binary actions sequentially and irrevocably, and the right action depends on an unobservable state. Each agent receives a bounded private signal about the realized state and observes the actions taken by the neighbors who acted before. How does the network topology affect the ability of agents to aggregate the information dispersed over the population by means of the private signals?
A population of voters must elect representatives among themselves to decide on a sequence of possibly unforeseen binary issues. Voters care only about the final decision, not the elected representatives. The disutility of a voter is proportional to the fraction of issues on which his preferences disagree with the decision. While an issue-by-issue vote by all voters would maximize social welfare, we are interested in how well the preferences of the population can be approximated by a small committee.
We show that a k-sortition (a random committee of k voters with the majority vote within the committee) leads to an outcome within the factor 1 + O(1/√k) of the optimal social cost for any number of voters n, any number of issues m, and any preference profile. For a small number of issues m, the social cost can be made even closer to optimal by delegation procedures that weigh committee members according to their number of followers. However, for large m, we demonstrate that the k-sortition is the worst-case optimal rule within a broad family of committee-based rules that take into account metric information about the preference profile of the whole population.
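A quick simulation, with a random preference profile and made-up sizes, compares the social cost of a k-sortition to the issue-by-issue full-population vote that the abstract uses as the benchmark:

```python
import random

def majority(voters, j):
    """Majority decision on issue j among the given voters (0/1 preferences)."""
    return 1 if 2 * sum(v[j] for v in voters) > len(voters) else 0

def social_cost(decisions, profile):
    """Average over voters of the fraction of issues where the decision
    disagrees with the voter's preference."""
    m = len(decisions)
    return sum(sum(d != v[j] for j, d in enumerate(decisions)) / m
               for v in profile) / len(profile)

random.seed(1)
n, m, k = 101, 50, 15          # odd sizes avoid ties
profile = [[random.randint(0, 1) for _ in range(m)] for _ in range(n)]

full_vote = [majority(profile, j) for j in range(m)]  # issue-by-issue optimum
committee = random.sample(profile, k)                 # a k-sortition draw
sortition = [majority(committee, j) for j in range(m)]

opt_cost = social_cost(full_vote, profile)
sor_cost = social_cost(sortition, profile)
```

Issue-by-issue majority minimizes disagreements on each issue separately, so `opt_cost` is a lower bound for any committee rule; the paper's result bounds how far `sor_cost` can exceed it in expectation.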
The recent literature on fair Machine Learning makes clear that the choice of fairness constraints must be driven by the utilities of the population. However, virtually all previous work makes the unrealistic assumption that the exact underlying utilities of the population (representing the private tastes of individuals) are known to the regulator that imposes the fairness constraint. In this paper, we initiate the discussion of the mismatch: the unavoidable difference between the underlying utilities of the population and the utilities assumed by the regulator. We demonstrate that the mismatch can make the disadvantaged protected group worse off after imposing the fairness constraint, and we provide tools to design fairness constraints that help the disadvantaged group despite the mismatch.
We study the set of possible joint posterior belief distributions of a group of agents who share a common prior regarding a binary state and who observe some information structure. For two agents, we introduce a quantitative version of Aumann's agreement theorem and show that it is equivalent to a characterization of feasible distributions from a 1995 work by Dawid and colleagues. For any number of agents, we characterize feasible distributions in terms of a "no-trade" condition. We use these characterizations to study information structures with independent posteriors. We also study persuasion problems with multiple receivers, exploring the extreme feasible distributions.
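One sanity check that any feasible joint distribution must pass is the martingale property: each agent's posterior must average back to the common prior. The sketch below verifies it for a hypothetical two-agent distribution over posterior pairs; note this condition is only necessary, while the paper's no-trade condition gives the full characterization:

```python
# Joint distribution over (agent 1 posterior, agent 2 posterior) pairs
# for a binary state; the prior and the numbers are hypothetical.
prior = 0.5
dist = {(0.8, 0.6): 0.25, (0.8, 0.4): 0.25,
        (0.2, 0.6): 0.25, (0.2, 0.4): 0.25}

assert abs(sum(dist.values()) - 1.0) < 1e-12  # probabilities sum to one
mean1 = sum(p * q1 for (q1, _), p in dist.items())  # agent 1's avg posterior
mean2 = sum(p * q2 for (_, q2), p in dist.items())  # agent 2's avg posterior
```

In this example the two agents' posteriors are independent of each other, which is exactly the kind of structure the paper studies: each agent learns about the state yet learns nothing about the other's belief.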
Ann likes oranges much more than apples; Bob likes apples much more than oranges. Tomorrow they will receive one fruit that will be an orange or an apple with equal probability. Giving one half to each agent is fair for each realization of the fruit. However, agreeing that whatever fruit appears will go to the agent who likes it more gives a higher expected utility to each agent and is fair in the average sense: in expectation, each agent prefers the allocation to the equal division of the fruit; that is, the agent gets a fair share. We turn this familiar observation into an economic design problem: upon drawing a random object (the fruit), we learn the realized utility of each agent and can compare it to the mean of the agent’s distribution of utilities; no other statistical information about the distribution is available. We fully characterize the division rules using only this sparse information in the most efficient possible way while giving everyone a fair share. Although the probability distribution of individual utilities is arbitrary and mostly unknown to the manager, these rules perform in the same range as the best rule when the manager has full access to this distribution. This paper was accepted by Ilia Tsetlin, behavioral economics and decision analysis.
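The opening example can be checked with a few lines of arithmetic; the utility numbers are illustrative, not from the paper:

```python
# Illustrative utilities for a whole fruit of each kind.
ann = {"orange": 1.0, "apple": 0.2}
bob = {"orange": 0.2, "apple": 1.0}

# Equal split: each agent gets half of whichever fruit arrives (prob 1/2 each).
equal_ann = 0.5 * (0.5 * ann["orange"] + 0.5 * ann["apple"])
equal_bob = 0.5 * (0.5 * bob["orange"] + 0.5 * bob["apple"])

# "Favorite takes all": Ann gets the orange, Bob gets the apple.
rule_ann = 0.5 * ann["orange"]
rule_bob = 0.5 * bob["apple"]
```

Both agents' expected utilities under the favorite-takes-all agreement exceed their expected utilities from the equal split, which is the average-sense fair share the abstract describes.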
A private private information structure delivers information about an unknown state while preserving privacy: An agent's signal contains information about the state but remains independent of others' sensitive or private information. We study how informative such structures can be, and characterize those that are optimal in the sense that they cannot be made more informative without violating privacy. We connect our results to fairness in recommendation systems and explore a number of further applications.
We study the set of possible joint posterior belief distributions of a group of agents who share a common prior regarding a binary state and who observe some information structure. Our main result is that, for the two-agent case, a quantitative version of Aumann's Agreement Theorem provides a necessary and sufficient condition for feasibility. For any number of agents, a related "no-trade" condition likewise provides a characterization of feasibility. We use our characterization to construct joint belief distributions in which agents are informed regarding the state, and yet receive no information regarding the other's posterior. We study a related class of Bayesian persuasion problems with a single sender and multiple receivers, and explore the extreme points of the set of feasible distributions.
Ann likes oranges much more than apples; Bob likes apples much more than oranges. Tomorrow they will receive one fruit that will be an orange or an apple with equal probability. Giving one half to each agent is fair for each realization of the fruit. However, agreeing that whatever fruit appears will go to the agent who likes it more gives a higher expected utility to each agent and is fair in the average sense: in expectation, each agent prefers his allocation to the equal division of the fruit, i.e., he gets a fair share. We turn this familiar observation into an economic design problem: upon drawing a random object (the fruit), we learn the realized utility of each agent and can compare it to the mean of his distribution of utilities; no other statistical information about the distribution is available. We fully characterize the division rules using only this sparse information in the most efficient possible way, while giving everyone a fair share. Although the probability distribution of individual utilities is arbitrary and mostly unknown to the manager, these rules perform in the same range as the best rule when the manager has full access to this distribution.
Ann likes oranges and dislikes apples; Bob likes apples and dislikes oranges. Tomorrow they will receive one fruit that will be an orange or an apple with equal probability 0.5. Giving each agent half of that fruit is fair for each realisation of the fruit; but agreeing that whatever fruit appears will go to the agent who likes it more gives a higher expected utility to each agent and is fair in the average sense: in expectation, each agent prefers his allocation to the equal division of the object; he gets a Fair Share. We turn this familiar observation into an economic design problem: upon drawing a random object (the fruit), we learn the realised utility of each agent and can compare it to the mean of his distribution of utilities; no other statistical information about the distribution is available. We fully characterize the division rules that use only this sparse information in the most efficient possible way, while giving everyone a Fair Share. Although the probability distribution of individual utilities is arbitrary and mostly unknown to the designer, these rules perform in the same range as the best fair rule having full knowledge of this distribution.
A population of voters must elect representatives among themselves to decide on a sequence of possibly unforeseen binary issues. Voters care only about the final decision, not the elected representatives. The disutility of a voter is proportional to the fraction of issues on which his preferences disagree with the decision. While an issue-by-issue vote by all voters would maximize social welfare, we are interested in how well the preferences of the population can be approximated by a small committee. We show that a k-sortition (a random committee of k voters with the majority vote within the committee) leads to an outcome within the factor 1 + O(1/√k) of the optimal social cost for any number of voters n, any number of issues m, and any preference profile. For a small number of issues m, the social cost can be made even closer to optimal by delegation procedures that weigh committee members according to their number of followers. However, for large m, we demonstrate that the k-sortition is the worst-case optimal rule within a broad family of committee-based rules that take into account metric information about the preference profile of the whole population.
We study the set of possible joint posterior belief distributions of a group of agents who share a common prior regarding a binary state, and who observe some information structure. For two agents we introduce a quantitative version of Aumann's Agreement Theorem, and show that it is equivalent to a characterization of feasible distributions due to Dawid et al. (1995). For any number of agents, we characterize feasible distributions in terms of a "no-trade" condition. We use these characterizations to study information structures with independent posteriors. We also study persuasion problems with multiple receivers, exploring the extreme feasible distributions.
It is well understood that the structure of a social network is critical to whether or not agents can aggregate information correctly. In this paper, we study social networks that support information aggregation when rational agents act sequentially and irrevocably. Whether or not information is aggregated depends, inter alia, on the order in which agents decide. Thus, to decouple the order and the topology, our model studies a random arrival order. Unlike the case of a fixed arrival order, in our model, the decision of an agent is unlikely to be affected by those who are far from him in the network. This observation allows us to identify a local learning requirement, a natural condition on the agent's neighborhood that guarantees that this agent makes the correct decision (with high probability) no matter how well other agents perform. Roughly speaking, the agent should belong to a multitude of mutually exclusive social circles. We illustrate the power of the local learning requirement by constructing a family of social networks that guarantee information aggregation even though no agent is a social hub (in other words, there are no opinion leaders). Although the common wisdom of the social learning literature suggests that information aggregation is very fragile, another application of the local learning requirement demonstrates the existence of networks where learning prevails even if a substantial fraction of the agents are not involved in the learning process. On a technical level, the networks we construct rely on the theory of expander graphs, i.e., highly connected sparse graphs with a wide range of applications from pure mathematics to error-correcting codes.
We consider fair allocation of indivisible items under additive utilities. When the utilities can be negative, the existence and complexity of an allocation that satisfies Pareto optimality and proportionality up to one item (PROP1) is an open problem. We show that there exists a strongly polynomial-time algorithm that always computes an allocation satisfying Pareto optimality and proportionality up to one item, even if the utilities are mixed and the agents have asymmetric weights. We point out that the result does not hold if either Pareto optimality or PROP1 is replaced with a slightly stronger concept.
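To make the PROP1 notion concrete, here is a checker for the symmetric-weights case with mixed (possibly negative) utilities; the instance is made up for illustration:

```python
def is_prop1(valuations, allocation):
    """Check proportionality up to one item (PROP1) for additive, possibly
    negative utilities. valuations[i][o] is agent i's value for item o;
    allocation[i] is the list of items held by agent i (symmetric weights)."""
    n = len(valuations)
    items = range(len(valuations[0]))
    for i, bundle in enumerate(allocation):
        fair_share = sum(valuations[i]) / n   # proportional share
        utility = sum(valuations[i][o] for o in bundle)
        if utility >= fair_share:
            continue
        # PROP1: adding one unheld item, or removing one held item,
        # must bring the agent up to the proportional share.
        if any(utility + valuations[i][o] >= fair_share
               for o in items if o not in bundle):
            continue
        if any(utility - valuations[i][o] >= fair_share for o in bundle):
            continue
        return False
    return True

vals = [[4, -2, 1], [1, 3, -1]]   # mixed goods and chores
alloc = [[0], [1, 2]]
```

Here both agents already receive at least their proportional shares, so the allocation is PROP1 outright; the algorithm in the paper additionally guarantees Pareto optimality, which this checker does not test.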
A set of objects, some goods and some bads, is to be divided fairly among agents with different tastes, modeled by additive utility functions. If the objects cannot be shared, so that each of them must be entirely allocated to a single agent, then a fair division may not exist. What is the smallest number of objects that must be shared between two or more agents in order to attain a fair division? We focus on Pareto-optimal, envy-free, and/or proportional allocations. We show that, for a generic instance of the problem (all instances except for a zero-measure set of degenerate problems), a fair and Pareto-optimal division with the smallest possible number of shared objects can be found in polynomial time, assuming that the number of agents is fixed. The problem becomes computationally hard for degenerate instances, where the agents' valuations are aligned for many objects.
We study the problem of allocating divisible bads (chores) among multiple agents with additive utilities, when money transfers are not allowed. The competitive rule is known to be the best mechanism for goods with additive utilities and was recently extended to chores by Bogomolnaia et al. (2017). For both goods and chores, the rule produces Pareto optimal and envy-free allocations. In the case of goods, the outcome of the competitive rule can be easily computed. Competitive allocations solve the Eisenberg-Gale convex program; hence the outcome is unique and can be approximately found by standard gradient methods. An exact algorithm that runs in polynomial time in the number of agents and goods was given by Orlin. In the case of chores, the competitive rule does not solve any convex optimization problem; instead, competitive allocations correspond to local minima, local maxima, and saddle points of the Nash social welfare on the Pareto frontier of the set of feasible utilities. The rule becomes multivalued, and none of the standard methods can be applied to compute its outcome. In this paper, we show that all the outcomes of the competitive rule for chores can be computed in strongly polynomial time if either the number of agents or the number of chores is fixed. The approach is based on a combination of three ideas: all consumption graphs of Pareto optimal allocations can be listed in polynomial time; for a given consumption graph, a candidate for a competitive allocation can be constructed via an explicit formula; and a given allocation can be checked for being competitive using a maximum flow computation as in Devanur et al. (2002). Our algorithm immediately gives an approximately-fair allocation of indivisible chores by the rounding technique of Barman and Krishnamurthy (2018).
We consider fair allocation of indivisible items under additive utilities. When the utilities can be negative, the existence and complexity of an allocation that satisfies Pareto optimality and proportionality up to one item (PROP1) is an open problem. We show that there exists a strongly polynomial-time algorithm that always computes an allocation satisfying Pareto optimality and proportionality up to one item, even if the utilities are mixed and the agents have asymmetric weights. We point out that the result does not hold if either Pareto optimality or PROP1 is replaced with slightly stronger concepts.
A collection of objects, some of which are good and some are bad, is to be divided fairly among agents with different tastes, modeled by additive utility functions. If the objects cannot be shared, so that each of them must be entirely allocated to a single agent, then a fair division may not exist. What is the smallest number of objects that must be shared between two or more agents in order to attain a fair and efficient division? In this paper, fairness is understood as proportionality or envy-freeness, and efficiency, as fractional Pareto-optimality. We show that, for a generic instance of the problem (all instances except a zero-measure set of degenerate problems), a fair fractionally Pareto-optimal division with the smallest possible number of shared objects can be found in polynomial time, assuming that the number of agents is fixed. The problem becomes computationally hard for degenerate instances, where agents' valuations are aligned for many objects.
Ann likes oranges much more than apples; Bob likes apples much more than oranges. Tomorrow they will receive one fruit that will be an orange or an apple with equal probability. Giving one half to each agent is fair for each realization of the fruit. However, agreeing that whatever fruit appears will go to the agent who likes it more gives a higher expected utility to each agent and is fair in the average sense: in expectation, each agent prefers his allocation to the equal division of the fruit, i.e., he gets a fair share. We turn this familiar observation into an economic design problem: upon drawing a random object (the fruit), we learn the realized utility of each agent and can compare it to the mean of his distribution of utilities; no other statistical information about the distribution is available. We fully characterize the division rules using only this sparse information in the most efficient possible way, while giving everyone a fair share. Although the probability distribution of individual utilities is arbitrary and mostly unknown to the manager, these rules perform in the same range as the best rule when the manager has full access to this distribution.
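The arithmetic behind the fruit example is easy to verify with hypothetical cardinal utilities (the numbers below are illustrative, not from the paper):

```python
# Hypothetical cardinal utilities for the Ann/Bob fruit example.
ann = {"orange": 3.0, "apple": 1.0}
bob = {"orange": 1.0, "apple": 3.0}
prob = {"orange": 0.5, "apple": 0.5}  # tomorrow's fruit is equally likely to be either

# Equal division: each agent always receives half of the realized fruit.
ann_equal = sum(prob[f] * ann[f] / 2 for f in prob)
bob_equal = sum(prob[f] * bob[f] / 2 for f in prob)

# "Whoever likes it more takes it all": Ann gets the orange, Bob the apple.
ann_take = prob["orange"] * ann["orange"]
bob_take = prob["apple"] * bob["apple"]

print(ann_equal, ann_take)  # 1.0 1.5 -> each agent beats his fair share in expectation
```

With these numbers each agent's fair share (the expected utility of equal division) is 1.0, while the realization-dependent rule yields 1.5 in expectation, illustrating fairness "in the average sense".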
Machine Learning (ML) algorithms shape our lives. Banks use them to determine if we are good borrowers; IT companies delegate recruitment decisions to them; police apply ML for crime prediction; and judges base their verdicts on ML. However, real-world examples show that such automated decisions tend to discriminate against protected groups. This potential discrimination has generated considerable hype both in the media and in the research community. Quite a few formal notions of fairness have been proposed, which take the form of constraints a "fair" algorithm must satisfy. We focus on scenarios where fairness is imposed on a self-interested party (e.g., a bank that maximizes its revenue). We find that the disadvantaged protected group can be worse off after imposing a fairness constraint. We introduce a family of Welfare-Equalizing fairness constraints that equalize the per-capita welfare of protected groups, and include Demographic Parity and Equal Opportunity as particular cases. In this family, we characterize conditions under which the fairness constraint helps the disadvantaged group. We also characterize the structure of the optimal Welfare-Equalizing classifier for the self-interested party, and provide an algorithm to compute it. Overall, our Welfare-Equalizing fairness approach provides a unified framework for discussing fairness in classification in the presence of a self-interested party.
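A minimal sketch of the self-interested-party setting: a bank ranks applicants by score within each group and, under a Demographic Parity constraint, must accept the same fraction of each group; it then searches for the common acceptance rate maximizing its revenue. The data, names, and grid-search approach here are illustrative assumptions, not the paper's algorithm:

```python
def best_parity_rate(groups, steps=100):
    """Grid search over a common acceptance rate alpha (Demographic Parity):
    each group accepts its top alpha-fraction of applicants by score, and the
    bank maximizes total profit. `groups` maps a group name to a list of
    (score, profit-if-accepted) pairs; all values are illustrative."""
    best_profit, best_alpha = float("-inf"), 0.0
    for k in range(steps + 1):
        alpha = k / steps
        profit = 0.0
        for applicants in groups.values():
            ranked = sorted(applicants, key=lambda a: -a[0])  # best scores first
            accepted = ranked[: round(alpha * len(ranked))]
            profit += sum(p for _, p in accepted)
        if profit > best_profit:
            best_profit, best_alpha = profit, alpha
    return best_profit, best_alpha

groups = {
    "A": [(0.9, 1.0), (0.1, -1.0)],  # (score, bank's profit if accepted)
    "B": [(0.8, 1.0), (0.2, -1.0)],
}
```

Imposing the shared rate alpha is what couples the groups: the bank can no longer pick its profit-maximizing cutoff separately for each group, which is the source of the welfare effects the paper studies.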
We study the problem of allocating divisible bads (chores) among multiple agents with additive utilities when monetary transfers are not allowed. The competitive rule is known for its remarkable fairness and efficiency properties in the case of goods; this rule was extended to chores in prior work by Bogomolnaia, Moulin, Sandomirskiy, and Yanovskaya (2017). The rule produces Pareto optimal and envy-free allocations for both goods and chores. In the case of goods, the outcome of the competitive rule can be easily computed: competitive allocations solve the Eisenberg-Gale convex program, hence the outcome is unique and can be approximately found by standard gradient methods; an exact algorithm that runs in polynomial time in the number of agents and goods was given by Orlin (2010). In the case of chores, the competitive rule does not solve any convex optimization problem; instead, competitive allocations correspond to local minima, local maxima, and saddle points of the Nash social welfare on the Pareto frontier of the set of feasible utilities. The Pareto frontier may contain many such points; consequently, the competitive rule's outcome is no longer unique. In this paper, we show that all the outcomes of the competitive rule for chores can be computed in strongly polynomial time if either the number of agents or the number of chores is fixed. The approach is based on a combination of three ideas: all consumption graphs of Pareto optimal allocations can be listed in polynomial time; for a given consumption graph, a candidate for a competitive utility profile can be constructed via an explicit formula; and each candidate can be checked for competitiveness, and the allocation reconstructed, using a maximum flow computation. Our algorithm gives an approximately fair allocation of indivisible chores by the rounding technique of Barman and Krishnamurthy (2018).
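For the goods case described above, the competitive (Eisenberg-Gale) division can be approximated with simple iterative dynamics. The sketch below uses proportional-response dynamics, a standard method for Fisher markets with linear utilities (not the paper's chore algorithm); valuations are assumed strictly positive:

```python
def competitive_goods(v, iters=3000):
    """Approximate the competitive equilibrium with equal incomes for divisible
    goods under linear (additive) utilities, via proportional-response dynamics:
    each agent repeatedly re-bids its unit budget on goods in proportion to the
    utility currently derived from them. v[i][j] > 0 is agent i's value for all
    of good j; returns (fractional allocation, utility profile)."""
    n, m = len(v), len(v[0])
    bids = [[1.0 / m] * m for _ in range(n)]  # start by spreading budget 1 evenly
    for _ in range(iters):
        prices = [sum(bids[i][j] for i in range(n)) for j in range(m)]
        x = [[bids[i][j] / prices[j] for j in range(m)] for i in range(n)]
        u = [sum(v[i][j] * x[i][j] for j in range(m)) for i in range(n)]
        # re-bid budget proportionally to the utility earned from each good
        bids = [[v[i][j] * x[i][j] / u[i] for j in range(m)] for i in range(n)]
    return x, u
```

On the two-agent instance v = [[2, 1], [1, 2]], the dynamics converge to the intuitive equilibrium in which each agent receives the good they value more, with utility 2 each. For chores no such convex-programming shortcut exists, which is exactly the difficulty the paper addresses.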
A mixed manna contains goods (that everyone likes), bads (that everyone dislikes), as well as items that are goods to some agents, but bads or satiated to others. If all items are goods and utility functions are homothetic, concave (and monotone), the Competitive Equilibrium with Equal Incomes maximizes the Nash product of utilities: hence it is welfarist (determined utility-wise by the feasible set of profiles), single-valued and easy to compute. We generalize the Gale-Eisenberg Theorem to a mixed manna. The Competitive division is still welfarist and related to the product of utilities or disutilities. If the zero utility profile (before any manna) is Pareto dominated, the competitive profile is unique and still maximizes the product of utilities. If the zero profile is unfeasible, the competitive profiles are the critical points of the product of disutilities on the efficiency frontier, and multiplicity is pervasive. In particular the task of dividing a mixed manna is either good news for everyone, or bad news for everyone. We refine our results in the practically important case of linear preferences, where the axiomatic comparison between the division of goods and that of bads is especially sharp. When we divide goods and the manna improves, everyone weakly benefits under the competitive rule; but no reasonable rule to divide bads can be similarly Resource Monotonic. Also, the much larger set of Non Envious and Efficient divisions of bads can be disconnected so that it will admit no continuous selection.
A mixed manna contains goods (that everyone likes) and bads (that everyone dislikes), as well as items that are goods to some agents, but bads or satiated to others. If all items are goods and utility functions are homogeneous of degree 1 and concave (and monotone), the competitive division maximizes the Nash product of utilities (Gale–Eisenberg): hence it is welfarist (determined by the set of feasible utility profiles), unique, continuous, and easy to compute. We show that the competitive division of a mixed manna is still welfarist. If the zero utility profile is Pareto dominated, the competitive profile is strictly positive and still uniquely maximizes the product of utilities. If the zero profile is unfeasible (for instance, if all items are bads), the competitive profiles are strictly negative and are the critical points of the product of disutilities on the efficiency frontier. The latter allows for multiple competitive utility profiles, from which no single-valued selection can be continuous or resource monotonic. Thus the implementation of competitive fairness under linear preferences in interactive platforms like SPLIDDIT will be more difficult when the manna contains bads that overwhelm the goods.
We compare the Egalitarian Equivalent and the Competitive Equilibrium with Equal Incomes rules to divide a bundle of goods (heirlooms) or a bundle of bads (chores). For goods the Competitive division fares better, as it is Resource Monotonic, and makes it harder to strategically misreport preferences. But for bads, the Competitive rule, unlike the Egalitarian one, is multivalued, harder to compute, and admits no continuous selection. We also provide an axiomatic characterization of the Competitive rule based on the simple formulation of Maskin Monotonicity under additive utilities.
We consider repeated zero-sum games with incomplete information on the side of Player 2 with the total payoff given by the non-normalized sum of stage gains. In the classical examples the value of such an N-stage game is of the order of N or of square root of N, as N tends to infinity.
The Competitive Equilibrium with Equal Incomes is an especially appealing efficient and envy-free division of private goods when utilities are additive: it maximizes the Nash product of utilities and is single-valued and continuous in the marginal rates of substitution. The CEEI to divide bads similarly captures the critical points of the Nash product on the efficient frontier. But it is far from resolute, routinely allowing many divisions with sharply different welfare consequences.
When utilities are additive, we uncovered in our previous paper (Bogomolnaia et al. "Dividing Goods or Bads under Additive Utilities") many similarities but also surprising differences in the behavior of the familiar Competitive rule (with equal incomes), when we divide (private) goods or bads. The rule picks in both cases the critical points of the product of utilities (or disutilities) on the efficiency frontier, but there is only one such point if we share goods, while there can be exponentially many in the case of bads.
When utilities are additive, we uncovered in our previous paper (Bogomolnaia et al. "Dividing Goods or Bads under Additive Utilities") many similarities but also surprising differences in the behavior of the familiar Competitive rule (with equal incomes), when we divide (private) goods or bads. The rule picks in both cases the critical points of the product of utilities (or disutilities) on the efficiency frontier, but there is only one such point if we share goods, while there can be exponentially many in the case of bads. We extend this analysis to the fair division of mixed items: each item can be viewed by some participants as a good and by others as a bad, with corresponding positive or negative marginal utilities. We find that the division of mixed items boils down, normatively as well as computationally, to a variant of an all-goods problem or of an all-bads problem: in particular, the task of dividing the non-disposable items must be either good news for everyone, or bad news for everyone. If at least one feasible utility profile is positive, the Competitive rule picks the unique maximum of the product of (positive) utilities. If no feasible utility profile is positive, this rule picks all critical points of the product of disutilities on the efficient frontier.

Commonly Cited References

Under the pari-mutuel system of betting on horse races, the final track odds are in some sense a consensus of the 'subjective odds' of the individual bettors, weighted by the amounts of their bets. The properties which this consensus must possess are formulated, and it is proved that there always exists a unique set of odds having the required properties.
This work extends several asymptotic results concerning repeated games with incomplete information on one side. The model we consider is a generalization of the classical model of Aumann and Maschler [Aumann RJ, Maschler M, Stearns RE (1995) Repeated Games with Incomplete Information (MIT Press, Cambridge, MA)] to infinite action spaces and partial information. We prove an extension of the classical "Cav(u)" Theorem in this model for both the lower and upper value functions, using two different methods: a probabilistic method based on martingales, and a functional one based on approximation schemes for viscosity solutions of Hamilton-Jacobi equations, similar to the dual differential approach of Laraki [Laraki R (2002) Repeated games with lack of information on one side: The dual differential approach. Math. Oper. Res. 27(2):419-440]. Moreover, we show that solutions of these two asymptotic problems provide asymptotically optimal strategies for both players in any game of length n. All these results are based on a compact approach, which consists in identifying a continuous-time problem defined on the time interval [0,1] representing the "limit" of a sequence of finitely repeated games as the number of repetitions goes to infinity. Finally, our results imply the existence of the uniform value of the infinitely repeated game whenever the value of the non-revealing game exists.
Mertens and Zamir's paper [3] is concerned with the asymptotic behavior of the maximal L1-variation ξ_n^1(p) of a [0,1]-valued martingale of length n starting at p. They prove the convergence of ξ_n^1(p)/√n to the normal density evaluated at its p-quantile. This paper generalizes this result to the conditional Lq-variation for q ∈ [1, 2). The appearance of the normal density remained unexplained in Mertens and Zamir's proof: it appeared as the solution of a differential equation. Our proof, however, justifies this normal density as a consequence of a generalization of the central limit theorem discussed in the second part of this paper.
We consider two-person zero-sum games with lack of information on one side given by m matrices of dimension m×m. We suppose the matrices to have the following "symmetric" structure: a^s_{ij} = a_{ij} + c·δ^s_i, c > 0, where δ^s_i = 1 if i = s and δ^s_i = 0 otherwise. Under certain additional assumptions we give the explicit solution for finite repetitions of these games. These solutions are expressed in terms of multinomial distributions. We give the probabilistic arguments which explain the obtained form of the solutions. Applying the Central Limit Theorem, we get a description of the limiting behavior of the value, closely connected with the recent results of De Meyer [1989], [1993].
The goal of fair division is to distribute resources among competing players in a fair way. Envy-freeness is the most extensively studied fairness notion in fair division. Envy-free allocations do not always exist with indivisible goods, motivating the study of relaxed versions of envy-freeness. We study the envy-freeness up to any good (EFX) property, which states that no player prefers the bundle of another player following the removal of any single good, and prove the first general results about this property. We use the leximin solution to show the existence of EFX allocations in several contexts, sometimes in conjunction with Pareto optimality. For two players with valuations obeying a mild assumption, one of these results provides stronger guarantees than the currently deployed algorithm on Spliddit, a popular fair division website. Unfortunately, finding the leximin solution can require exponential time. We show that this is necessary by proving an exponential lower bound on the number of value queries needed to identify an EFX allocation, even for two players with identical valuations. We consider both additive and more general valuations, and our work suggests that there is a rich landscape of problems to explore in the fair division of indivisible goods with different classes of player valuations.
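The EFX condition itself is easy to state in code. Below is a small brute-force checker for additive valuations (an illustrative helper, not the paper's leximin algorithm): agent i is EFX-satisfied if removing any single good from any other agent's bundle leaves nothing i strictly prefers to its own bundle:

```python
def is_efx(valuations, bundles):
    """Check envy-freeness up to any good (EFX) under additive valuations.
    valuations[i][g]: agent i's value for good g; bundles[i]: goods of agent i."""
    n = len(bundles)
    for i in range(n):
        own = sum(valuations[i][g] for g in bundles[i])
        for j in range(n):
            if i == j:
                continue
            for g in bundles[j]:
                # i must not envy j's bundle after the removal of ANY single good g
                rest = sum(valuations[i][h] for h in bundles[j]) - valuations[i][g]
                if own < rest:
                    return False
    return True
```

For example, with valuations [[1, 2, 3], [3, 2, 1]], the allocation ([goods 1, 2], [good 0]) is EFX, while ([good 0], [goods 1, 2]) is not: the first agent would still envy even after good 1 is removed from the other bundle.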
We generalize the classic problem of fairly allocating indivisible goods to the problem of fair public decision making, in which a decision must be made on several social issues simultaneously and, unlike the classic setting, a decision can provide positive utility to multiple players. We extend the popular fairness notion of proportionality (which is not guaranteeable) to our more general setting, and introduce three novel relaxations (proportionality up to one issue, round robin share, and pessimistic proportional share) that are also interesting in the classic goods allocation setting. We show that the Maximum Nash Welfare solution, which is known to satisfy appealing fairness properties in the classic setting, satisfies or approximates all three relaxations in our framework. We also provide polynomial-time algorithms and hardness results for finding allocations satisfying these axioms, with or without insisting on Pareto optimality.
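The round robin share notion above is named after the classic procedure in which agents take turns picking their most-valued remaining item; for additive goods this procedure is known to yield an EF1 allocation. A minimal sketch of the procedure (illustrative, not the paper's exact construction):

```python
def round_robin(valuations, order=None):
    """Round-robin allocation of indivisible goods under additive valuations:
    agents take turns (in the given picking order) choosing their favorite
    remaining good. valuations[i][g]: agent i's value for good g."""
    n, m = len(valuations), len(valuations[0])
    order = order if order is not None else list(range(n))
    remaining = set(range(m))
    bundles = [[] for _ in range(n)]
    turn = 0
    while remaining:
        i = order[turn % n]
        g = max(remaining, key=lambda g: valuations[i][g])  # favorite remaining good
        bundles[i].append(g)
        remaining.remove(g)
        turn += 1
    return bundles
```

With valuations [[3, 1, 2], [1, 3, 2]], agent 0 first takes good 0, agent 1 takes good 1, and agent 0 takes the leftover good 2, producing the bundles [[0, 2], [1]].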
We study Fisher markets that admit equilibria wherein each good is integrally assigned to some agent. While strong existence and computational guarantees are known for equilibria of Fisher markets with additive valuations (Eisenberg and Gale 1959; Orlin 2010), such equilibria, in general, assign goods fractionally to agents. Hence, Fisher markets are not directly applicable in the context of indivisible goods. In this work we show that one can always bypass this hurdle and, up to a bounded change in agents' budgets, obtain markets that admit an integral equilibrium. We refer to such markets as pure markets and show that, for any given Fisher market (with additive valuations), one can efficiently compute a "nearby" pure market with an accompanying integral equilibrium. Our work on pure markets leads to novel algorithmic results for fair division of indivisible goods. Prior work in discrete fair division has shown that, under additive valuations, there always exist allocations that simultaneously achieve the seemingly incompatible properties of fairness and efficiency (Caragiannis et al. 2016); here fairness refers to envy-freeness up to one good (EF1) and efficiency corresponds to Pareto efficiency. However, polynomial-time algorithms are not known for finding such allocations. Considering relaxations of proportionality and EF1, respectively, as our notions of fairness, we show that fair and Pareto efficient allocations can be computed in strongly polynomial time.
Let v_n(p) denote the value of the n-times repeated zero-sum game with incomplete information on one side and full monitoring, and let u(p) be the value of the average game G(p). The error term ε_n(p) = v_n(p) − cav(u)(p) then converges to zero at least as rapidly as 1/√n. In this paper, we analyze the convergence of ψ_n(p) = √n·ε_n(p) in games with square payoff matrices such that the optimal strategy of the informed player in the average game G(p) is unique, is completely mixed, and does not depend on p. Our main result is that the existence of a solution ψ* to a partial differential equation with appropriate boundary conditions and regularity properties implies the uniform convergence of ψ_n to the Fenchel conjugate of ψ*. In particular cases, the P.D.E. problem is linear and its solution ψ* is then related to the multidimensional normal distribution.
We initiate the study of indivisible chore allocation for agents with asymmetric shares. The fairness concepts we focus on are natural weighted generalizations of the maxmin share: WMMS fairness and OWMMS fairness. We first highlight the fact that commonly-used algorithms that work well for allocating goods to asymmetric agents, and even chores to symmetric agents, do not provide good approximations for allocating chores to asymmetric agents under WMMS. As a consequence, we present a novel polynomial-time constant-approximation algorithm, via a linear program, for OWMMS. For two special cases, the binary valuation case and the 2-agent case, we provide exact or better constant-approximation algorithms.
First an integral representation of a continuous linear functional dominated by a support function in integral form is given (Theorem 1). From this the theorem of Blackwell-Stein-Sherman-Cartier [2], [20], [4] is deduced, as well as a result on capacities alternating of order 2 in the sense of Choquet [5], which includes Satz 4.3 of [23] and a result of Kellerer [10], [12], under somewhat stronger assumptions. Next (Theorem 7), the existence of probability distributions with given marginals is studied under typically weaker assumptions than those required by the use of Theorem 1. As applications we derive necessary and sufficient conditions for a sequence of probability measures to be the sequence of distributions of a martingale (Theorem 8), an upper semi-martingale (Theorem 9) or of partial sums of independent random variables (Theorem 10). Moreover an alternative definition of the Lévy-Prokhorov distance between probability measures in a complete separable metric space is obtained (corollary of Theorem 11). Section 6 can be read independently of the former sections.
We study the problem of allocating a set of indivisible items to agents with additive utilities to maximize the Nash social welfare. Cole and Gkatzelis recently proved that this problem admits a constant factor approximation. We complement their result by showing that this problem is APX-hard.
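For intuition about the objective, here is a brute-force Nash social welfare maximizer over a tiny invented instance; the APX-hardness result above rules out efficient exact algorithms in general, so exhaustive search is only viable at toy scale.

```python
from itertools import product
from math import prod

def max_nash_welfare(vals):
    """Exhaustively maximize the Nash social welfare (product of agents'
    additive utilities) over all allocations of indivisible items.
    Exponential in the number of items; for tiny instances only."""
    n, m = len(vals), len(vals[0])
    best, best_alloc = -1.0, None
    for assign in product(range(n), repeat=m):
        utils = [sum(vals[i][g] for g in range(m) if assign[g] == i)
                 for i in range(n)]
        nsw = prod(utils)
        if nsw > best:
            best, best_alloc = nsw, assign
    return best, best_alloc

vals = [[3, 1, 2], [1, 4, 1]]  # invented 2-agent, 3-item instance
print(max_nash_welfare(vals))  # → (20, (0, 1, 0))
```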
We consider a multi-agent resource allocation setting that models the assignment of papers to reviewers. A recurring issue in allocation problems is the compatibility of welfare/efficiency and fairness. Given an oracle to find a welfare-achieving allocation, we embed such an oracle into a flexible algorithm called the Constrained Round Robin (CRR) algorithm, which achieves the required welfare level. Our algorithm also allows the system designer to lower the welfare requirements in order to achieve a higher degree of fairness. If the welfare requirement is lowered enough, a strengthening of envy-freeness up to one item is guaranteed. Hence, our algorithm can be viewed as a computationally efficient way to interpolate between welfare and approximate envy-freeness in allocation problems.
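The paper's CRR algorithm is not reproduced here; as a minimal illustration of the round-robin family it constrains, the sketch below implements plain round robin, which for additive utilities is known to yield envy-freeness up to one item. The valuation matrix is invented.

```python
def round_robin(vals, order=None):
    """Plain round robin: agents take turns (in a fixed order) picking their
    most-valued remaining item. For additive utilities the result is
    envy-free up to one item (EF1). Illustrative sketch, not the paper's
    Constrained Round Robin algorithm."""
    n, m = len(vals), len(vals[0])
    order = order or list(range(n))
    remaining = set(range(m))
    bundles = [[] for _ in range(n)]
    turn = 0
    while remaining:
        agent = order[turn % n]
        pick = max(remaining, key=lambda g: vals[agent][g])
        bundles[agent].append(pick)
        remaining.remove(pick)
        turn += 1
    return bundles

vals = [[5, 2, 4, 1], [3, 6, 1, 2]]  # invented 2-agent, 4-item instance
print(round_robin(vals))  # → [[0, 2], [1, 3]]
```

CRR can be read as this picking loop with an added feasibility check against the welfare oracle before each pick.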
We show that if $f$ is a probability density on $\mathbb{R}^n$ with respect to Lebesgue measure (or any absolutely continuous measure) and $0 \leq f \leq 1$, then there is another density $g$ taking only the values 0 and 1 and with the same $(n - 1)$-dimensional marginals in any finite number of directions. This sharpens, unifies and extends the results of Lorentz and of Kellerer. Given a pair of independent random variables $0 \leq X,Y \leq 1$, we further study functions $0 \leq \phi \leq 1$ such that $Z = \phi(X,Y)$ satisfies $E(Z\mid X) = X$ and $E(Z\mid Y) = Y$. If there is a solution then there is also a nondecreasing solution $\phi(x,y)$. These results are applied to tomography and baseball.
We study fair allocation of indivisible goods to agents with unequal entitlements. Fair allocation has been the subject of many studies in both divisible and indivisible settings. Our emphasis is on the case where the goods are indivisible and agents have unequal entitlements. This problem is a generalization of the work by Procaccia and Wang (2014) wherein the agents are assumed to be symmetric with respect to their entitlements. Although Procaccia and Wang show an almost fair (constant approximation) allocation exists in their setting, our main result is in sharp contrast to their observation. We show that, in some cases with n agents, no allocation can guarantee better than 1/n approximation of a fair allocation when the entitlements are not necessarily equal. Furthermore, we devise a simple algorithm that ensures a 1/n approximation guarantee.
Our second result is for a restricted version of the problem where the valuation of every agent for each good is bounded by the total value he wishes to receive in a fair allocation. Although this assumption might seem to be without loss of generality, we show that it enables us to find a 1/2-approximate fair allocation via a greedy algorithm. Finally, we run some experiments on real-world data and show that, in practice, a fair allocation is likely to exist. We also support our experiments by showing positive results for two stochastic variants of the problem, namely stochastic agents and stochastic items.
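The fairness benchmark above generalizes the maxmin share of Procaccia and Wang to unequal entitlements. As an illustration of the unweighted notion (the equal-entitlement special case), the following brute-force sketch computes one agent's maxmin share on an invented instance.

```python
from itertools import product

def maxmin_share(values, n):
    """Brute-force maxmin share: the largest value an agent can guarantee by
    partitioning all goods into n bundles and keeping the least-valued one.
    Exponential search, for small illustrative instances only."""
    m = len(values)
    best = 0
    for assign in product(range(n), repeat=m):
        bundle_vals = [sum(values[g] for g in range(m) if assign[g] == b)
                       for b in range(n)]
        best = max(best, min(bundle_vals))
    return best

# One agent's additive values over 5 goods (invented), split among n = 2 agents.
print(maxmin_share([7, 4, 3, 2, 1], 2))  # → 8
```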
We define and investigate a property of mechanisms that we call “strategic simplicity,” and that is meant to capture the idea that, in strategically simple mechanisms, strategic choices require limited strategic sophistication. We define a mechanism to be strategically simple if choices can be based on first‐order beliefs about the other agents' preferences and first‐order certainty about the other agents' rationality alone, and there is no need for agents to form higher‐order beliefs, because such beliefs are irrelevant to the optimal strategies. All dominant strategy mechanisms are strategically simple. But many more mechanisms are strategically simple. In particular, strategically simple mechanisms may be more flexible than dominant strategy mechanisms in the bilateral trade problem and the voting problem.
We study an online model of fair division designed to capture features of a real world charity problem. We consider two simple mechanisms for this model in which agents simply declare what items they like. We analyse several axiomatic properties of these mechanisms like strategy-proofness and envy-freeness. Finally, we perform a competitive analysis and compute the price of anarchy.
Consider $n$ players having preferences over the connected pieces of a cake, identified with the interval $[0,1]$. A classical theorem due to Stromquist ensures under mild conditions that it is possible to divide the cake into $n$ connected pieces and assign these pieces to the players in an envy-free manner, i.e., no player strictly prefers a piece that has not been assigned to her. One of these conditions, considered as crucial, is that no player is happy with an empty piece. We prove that, even if this condition is not satisfied, it is still possible to get such a division when $n$ is a prime number or is equal to $4$. When $n$ is at most $3$, this has been previously proved by Erel Segal-Halevi, who conjectured that the result holds for any $n$. The main step in our proof is a new combinatorial lemma in topology, close to a conjecture by Segal-Halevi and reminiscent of the celebrated Sperner lemma: instead of restricting the labels that can appear on each face of the simplex, the lemma considers labelings that enjoy a certain symmetry on the boundary.
We discuss some open problems concerning the maximal spread of coherent distributions. We prove a sharp bound on $\mathbb{E}|X-Y|^α$ for $(X,Y)$ coherent and $α\le 2$, and establish a novel connection between coherent distributions and such combinatorial objects as bipartite graphs, conjugate partitions and Ferrers diagrams. Our results may turn out to be helpful not only for probabilists, but also for graph theorists, especially those interested in mathematical chemistry and the study of topological indices.
We consider the problem of fairly dividing a set of items. Much of the fair division literature assumes that the items are `goods', i.e., they yield positive utility for the agents. There is also some work where the items are `chores' that yield negative utility for the agents. In this paper, we consider a more general scenario where an agent may have negative or positive utility for each item. This framework captures, e.g., fair task assignment, where agents can have both positive and negative utilities for each task. We show that whereas some of the positive axiomatic and computational results extend to this more general setting, others do not. We present several new and efficient algorithms for finding fair allocations in this general setting. We also point out several gaps in the literature regarding the existence of allocations satisfying certain fairness and efficiency properties and further study the complexity of computing such allocations.
A mixed manna contains goods (that everyone likes), bads (that everyone dislikes), as well as items that are goods to some agents, but bads or satiated to others. If all items are goods and utility functions are homothetic, concave (and monotone), the Competitive Equilibrium with Equal Incomes maximizes the Nash product of utilities: hence it is welfarist (determined utility-wise by the feasible set of profiles), single-valued and easy to compute. We generalize the Gale-Eisenberg Theorem to a mixed manna. The Competitive division is still welfarist and related to the product of utilities or disutilities. If the zero utility profile (before any manna) is Pareto dominated, the competitive profile is unique and still maximizes the product of utilities. If the zero profile is unfeasible, the competitive profiles are the critical points of the product of disutilities on the efficiency frontier, and multiplicity is pervasive. In particular, the task of dividing a mixed manna is either good news for everyone, or bad news for everyone. We refine our results in the practically important case of linear preferences, where the axiomatic comparison between the division of goods and that of bads is especially sharp. When we divide goods and the manna improves, everyone weakly benefits under the competitive rule; but no reasonable rule to divide bads can be similarly Resource Monotonic. Also, the much larger set of Non Envious and Efficient divisions of bads can be disconnected, so that it admits no continuous selection.
Competitive equilibrium from equal incomes (CEEI) is a classic solution to the problem of fair and efficient allocation of goods [Foley'67, Varian'74]. Every agent receives an equal budget of artificial currency with which to purchase goods, and prices match demand and supply. However, a CEEI is not guaranteed to exist when the goods are indivisible, even in the simple two-agent, single-item market. Yet, it is easy to see that once the two budgets are slightly perturbed (made generic), a competitive equilibrium does exist. In this paper we aim to extend this approach beyond the single-item case, and study the existence of equilibria in markets with two agents and additive preferences over multiple items. We show that for agents with equal budgets, making the budgets generic -- by adding vanishingly small random perturbations -- ensures the existence of an equilibrium. We further consider agents with arbitrary non-equal budgets, representing non-equal entitlements for goods. We show that competitive equilibrium guarantees a new notion of fairness among non-equal agents, and that it exists in cases of interest (like when the agents have identical preferences) if budgets are perturbed. Our results open opportunities for future research on generic equilibrium existence and fair treatment of non-equals.
(2002). Four-Person Envy-Free Chore Division. Mathematics Magazine: Vol. 75, No. 2, pp. 117-122.
We provide the first polynomial-time exact algorithm for computing an Arrow–Debreu market equilibrium for the case of linear utilities. Our algorithm is based on solving a convex program using the ellipsoid algorithm and simultaneous diophantine approximation. As a side result, we prove that the set of assignments at equilibrium is convex and the equilibrium prices themselves are log‐convex. Our convex program is explicit and intuitive, which allows maximizing a concave function over the set of equilibria. On the practical side, Ye developed an interior point algorithm [Lecture Notes in Comput. Sci. 3521, Springer, New York, 2005, pp. 3–5] to find an equilibrium based on our convex program. We also derive separate combinatorial characterizations of equilibrium for the Arrow–Debreu and Fisher cases. Our convex program can be extended for many nonlinear utilities and production models. Our paper also makes a powerful theorem (Theorem 6.4.1 in [M. Grötschel, L. Lovász, and A. Schrijver, Geometric Algorithms and Combinatorial Optimization, 2nd ed., Springer‐Verlag, Berlin, Heidelberg, 1993]) even more powerful (in Theorems 12 and 13) in the area of geometric algorithms and combinatorial optimization. The main idea in this generalization is to allow ellipsoids to contain not the whole convex region but only a part of it. This theorem is of independent interest.
We determine the quality of randomized social choice algorithms in a setting in which the agents have metric preferences: every agent has a cost for each alternative, and these costs form a metric. We assume that these costs are unknown to the algorithms (and possibly even to the agents themselves), which means we cannot simply select the optimal alternative, i.e., the alternative that minimizes the total agent cost (or median agent cost). However, we do assume that the agents know their ordinal preferences that are induced by the metric space. We examine randomized social choice functions that require only this ordinal information and select an alternative that is good in expectation with respect to the costs from the metric. To quantify how good a randomized social choice function is, we bound the distortion, which is the worst-case ratio between the expected cost of the alternative selected and the cost of the optimal alternative. We provide new distortion bounds for a variety of randomized algorithms, for both general metrics and for important special cases. Our results show a sizable improvement in distortion over deterministic algorithms.
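The distortion bound above takes a worst case over all metrics consistent with the reported ordinal preferences. As a minimal illustration of the quantity being bounded, the sketch below evaluates only the inner ratio — expected cost of a randomized rule versus the optimal social cost — on a single invented cost profile.

```python
def distortion_on_profile(costs, rule_probs):
    """Ratio between the expected cost of a randomized rule and the cost of
    the optimal alternative, on one fixed cost profile. True distortion is
    the supremum of this ratio over all metrics consistent with the ordinal
    preferences; this sketch evaluates a single profile only.

    costs[i][a]   -- cost of alternative a for agent i (invented values)
    rule_probs[a] -- probability the rule selects alternative a
    """
    n_alts = len(costs[0])
    social = [sum(c[a] for c in costs) for a in range(n_alts)]
    expected = sum(p * s for p, s in zip(rule_probs, social))
    return expected / min(social)

# Hypothetical instance: 3 agents, 2 alternatives, uniformly random choice.
costs = [[0, 2], [0, 2], [2, 0]]
print(distortion_on_profile(costs, [0.5, 0.5]))  # → 1.5
```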
We compare the Egalitarian Equivalent and the Competitive Equilibrium with Equal Incomes rules to divide a bundle of goods (heirlooms) or a bundle of bads (chores). For goods the Competitive division fares better, as it is Resource Monotonic, and makes it harder to strategically misreport preferences. But for bads, the Competitive rule, unlike the Egalitarian one, is multivalued, harder to compute, and admits no continuous selection. We also provide an axiomatic characterization of the Competitive rule based on the simple formulation of Maskin Monotonicity under additive utilities.
We present a new model that describes the process of electing a group of representatives (e.g., a parliament) for a group of voters. In this model, called the voting committee model, the elected group of representatives runs a number of ballots to make final decisions regarding various issues. The satisfaction of voters comes from the final decisions made by the elected committee. Our results suggest that, depending on the decision system used by the committee to make these final decisions, different multi-winner election rules are most suitable for electing the committee. Furthermore, we show that if we allow not only the committee, but also the election rule used to make final decisions, to depend on the voters' preferences, we can obtain an even better representation of the voters.
By a modification of the method that was applied in (Korolev and Shevtsova, 2010), here the inequalities $Δ_n\leq0.3328(β_3+0.429)/\sqrt{n}$ and $Δ_n\leq0.33554(β_3+0.415)/\sqrt{n}$ are proved for the uniform distance $Δ_n$ between the standard normal distribution function and the distribution function of the normalized sum of an arbitrary number $n\geq1$ of independent identically distributed random variables with zero mean, unit variance and finite third absolute moment $β_3$. The first of these two inequalities improves one that was proved in (Korolev and Shevtsova, 2010), and also sharpens the best known upper estimate for the absolute constant $C_0$ in the classical Berry--Esseen inequality to $C_0<0.4756$, since $0.3328(β_3+0.429)\leq0.3328\cdot1.429β_3<0.4756β_3$ by virtue of the condition $β_3\geq1$. The second inequality is also a structural improvement of the classical Berry--Esseen inequality, and sharpens the upper estimate for $C_0$ further, to $C_0<0.4748$.
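The constant-comparison step quoted in the abstract ($0.3328\cdot1.429<0.4756$ and $β_3+0.429\leq1.429β_3$ for $β_3\geq1$) is easy to verify numerically; the following snippet is added purely as an arithmetic check.

```python
# Verify the bound-comparison step: for beta_3 >= 1 we have
# beta_3 + 0.429 <= 1.429 * beta_3, and 0.3328 * 1.429 falls below 0.4756.
const = 0.3328 * 1.429
print(round(const, 7))  # ≈ 0.4755712, indeed < 0.4756
for beta3 in [1.0, 1.5, 2.0, 10.0]:
    assert 0.3328 * (beta3 + 0.429) <= 0.3328 * 1.429 * beta3 + 1e-12
    assert 0.3328 * 1.429 * beta3 < 0.4756 * beta3
```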
We present prior robust algorithms for a large class of resource allocation problems where requests arrive one-by-one (online), drawn independently from an unknown distribution at every step. We design a single algorithm that, for every possible underlying distribution, obtains a 1−ϵ fraction of the profit obtained by an algorithm that knows the entire request sequence ahead of time. The factor ϵ approaches 0 when no single request consumes/contributes a significant fraction of the global consumption/contribution by all requests together. We show that the tradeoff we obtain here, which determines how fast ϵ approaches 0, is near optimal: we give a nearly matching lower bound showing that the tradeoff cannot be improved much beyond what we obtain. Going beyond the model of a static underlying distribution, we introduce the adversarial stochastic input model, where an adversary, possibly in an adaptive manner, controls the distributions from which the requests are drawn at each step. Placing no restriction on the adversary, we design an algorithm that obtains a 1−ϵ fraction of the optimal profit obtainable w.r.t. the worst distribution in the adversarial sequence. Further, if the algorithm is given one number per distribution, namely the optimal profit possible for each of the adversary's distributions, then we design an algorithm that achieves a 1−ϵ fraction of the weighted average of the optimal profits of the distributions the adversary picks. In the offline setting we give a fast algorithm to solve very large linear programs (LPs) with both packing and covering constraints.
We give algorithms to approximately solve (within a factor of 1+ϵ) the mixed packing-covering problem with O(γm log(n/δ)/ϵ²) oracle calls, where the constraint matrix of this LP has dimension n × m, the success probability of the algorithm is 1−δ, and γ quantifies how significant a single request is when compared to the sum total of all requests. We discuss implications of our results for several special cases including online combinatorial auctions, network routing, and the adwords problem.
We study the online stochastic bipartite matching problem, in a form motivated by display ad allocation on the Internet. In the online but adversarial case, the celebrated result of Karp, Vazirani and Vazirani gives an approximation ratio of 1 − 1/e ≈ 0.632, a very familiar bound that holds for many online problems; further, the bound is tight in this case. In the online, stochastic case when nodes are drawn repeatedly from a known distribution, the greedy algorithm matches this approximation ratio, but still, no algorithm is known that beats the 1 − 1/e bound. Our main result is a 0.67-approximation online algorithm for stochastic bipartite matching, breaking this 1 − 1/e barrier. Furthermore, we show that no online algorithm can produce a 1 − ϵ approximation for an arbitrarily small ϵ for this problem. Our algorithms are based on computing an optimal offline solution to the expected instance, and using this solution as a guideline in the process of online allocation. We employ a novel application of the idea of the power of two choices from load balancing: we compute two disjoint solutions to the expected instance, and use both of them in the online algorithm in a prescribed preference order. To identify these two disjoint solutions, we solve a max flow problem in a boosted flow graph, and then carefully decompose this maximum flow into two edge-disjoint (near-)matchings. In addition to guiding the online decision making, these two offline solutions are used to characterize an upper bound for the optimum in any scenario. This is done by identifying a cut whose value we can bound under the arrival distribution. At the end, we discuss extensions of our results to more general bipartite allocations that are important in a display ad application.
There is a heterogeneous resource that contains both good parts and bad parts, for example, a cake with some parts burnt, a land-estate with some parts heavily taxed, or a chore with some parts fun to do. The resource has to be divided fairly among $n$ agents with different preferences, each of whom has a personal value-density function on the resource. The value-density functions can take any real value --- positive, negative or zero. Each agent should receive a connected piece and no agent should envy another agent. We prove that such a division exists for 3 agents and present preliminary positive results for larger numbers of agents.
Expander graphs are highly connected sparse finite graphs. They play an important role in computer science as basic building blocks for network constructions, error correcting codes, algorithms, and more. In recent years they have started to play an increasing role also in pure mathematics: number theory, group theory, geometry, and more. This expository article describes their constructions and various applications in pure and applied mathematics.