All published works (31)

Most reinforcement learning algorithms with regret guarantees rely on a critical assumption: that all errors are recoverable. Recent work by Plaut et al. discarded this assumption and presented algorithms that avoid "catastrophe" (i.e., irreparable errors) by asking for help. However, they provided only safety guarantees and did not consider reward maximization. We prove that any algorithm that avoids catastrophe in their setting also guarantees high reward (i.e., sublinear regret) in any Markov Decision Process (MDP), including MDPs with irreversible costs. This constitutes the first no-regret guarantee for general MDPs. More broadly, our result may be the first formal proof that it is possible for an agent to obtain high reward while becoming self-sufficient in an unknown, unbounded, and high-stakes environment without causing catastrophe or requiring resets.
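As a hedged sketch of what "sublinear regret" means here (the paper's exact benchmark, e.g., comparison against a mentor policy, may differ): writing $V^*_T$ for the optimal expected total reward over horizon $T$ and $V^{\mathrm{alg}}_T$ for the algorithm's,

\[ R_T = V^*_T - V^{\mathrm{alg}}_T, \qquad \lim_{T \to \infty} \frac{R_T}{T} = 0, \]

i.e., the per-round shortfall relative to the optimal policy vanishes as the horizon grows.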
When deployed in dynamic environments, AI agents will inevitably encounter challenges that exceed their individual capabilities. Leveraging assistance from expert agents, whether human or AI, can significantly enhance safety and performance in such situations. However, querying experts is often costly, necessitating the development of agents that can efficiently request and utilize expert guidance. In this paper, we introduce a fundamental coordination problem called Learning to Yield and Request Control (YRC), where the objective is to learn a strategy that determines when to act autonomously and when to seek expert assistance. We consider a challenging practical setting in which an agent does not interact with experts during training but must adapt to novel environmental changes and expert interventions at test time. To facilitate empirical research, we introduce YRC-Bench, an open-source benchmark featuring diverse domains. YRC-Bench provides a standardized Gym-like API, simulated experts, evaluation pipeline, and implementation of competitive baselines. Towards tackling the YRC problem, we propose a novel validation approach and investigate the performance of various learning methods across diverse environments, yielding insights that can guide future research.
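The coordination loop described above can be sketched in a few lines of Python. This is purely illustrative and is not the YRC-Bench API: the function and parameter names are hypothetical, and a classic Gym-style step signature is assumed.

def run_episode(env, novice_act, expert_act, should_yield, query_cost=0.1):
    # Hypothetical sketch of yield-or-request control; not the YRC-Bench API.
    obs = env.reset()
    total_reward, done = 0.0, False
    while not done:
        if should_yield(obs):            # learned coordination policy
            action = expert_act(obs)     # costly expert query
            total_reward -= query_cost
        else:
            action = novice_act(obs)     # act autonomously
        obs, reward, done, info = env.step(action)  # classic Gym signature
        total_reward += reward
    return total_reward

The learning problem is then to train should_yield so that episodes like this balance reward against query cost.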
While reinforcement learning (RL) agents often perform well during training, they can struggle with distribution shift in real-world deployments. One particularly severe risk of distribution shift is goal misgeneralization, where the agent learns a proxy goal that coincides with the true goal during training but not during deployment. In this paper, we explore whether allowing an agent to ask for help from a supervisor in unfamiliar situations can mitigate this issue. We focus on agents trained with PPO in the CoinRun environment, a setting known to exhibit goal misgeneralization. We evaluate multiple methods for determining when the agent should request help and find that asking for help consistently improves performance. However, we also find that methods based on the agent's internal state fail to proactively request help, instead waiting until mistakes have already occurred. Further investigation suggests that the agent's internal state does not represent the coin at all, highlighting the importance of learning nuanced representations, the risks of ignoring everything not immediately relevant to reward, and the necessity of developing ask-for-help strategies tailored to the agent's training algorithm.
Although large language models (LLMs) perform impressively on many tasks, overconfidence remains a problem. We hypothesized that on multiple-choice Q&A tasks, wrong answers would be associated with smaller maximum softmax probabilities (MSPs) compared to correct answers. We comprehensively evaluate this hypothesis on ten open-source LLMs and five datasets, and find strong evidence for our hypothesis among models which perform well on the original Q&A task. For the six LLMs with the best Q&A performance, the AUROC derived from the MSP was better than random chance with p < 10^{-4} in 59/60 instances. Among those six LLMs, the average AUROC ranged from 60% to 69%. Leveraging these findings, we propose a multiple-choice Q&A task with an option to abstain and show that performance can be improved by selectively abstaining based on the MSP of the initial model response. We also run the same experiments with pre-softmax logits instead of softmax probabilities and find similar (but not identical) results.
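A minimal Python sketch of the MSP-based abstention idea; the logits, labels, and threshold below are illustrative toy values, not the paper's data or calibration procedure.

import numpy as np
from sklearn.metrics import roc_auc_score

def msp(logits):
    """Maximum softmax probability over one question's answer logits."""
    z = np.asarray(logits, dtype=float)
    p = np.exp(z - z.max())
    p /= p.sum()
    return p.max()

# Toy data: per-question answer logits and correctness labels (illustrative).
logits = [[2.1, 0.3, -1.0, 0.5], [0.2, 0.1, 0.3, 0.25], [3.0, -2.0, -1.5, 0.0]]
correct = [1, 0, 1]

scores = [msp(l) for l in logits]
print("AUROC:", roc_auc_score(correct, scores))  # MSP as a confidence signal

# Selective abstention: answer only when the MSP clears a threshold.
threshold = 0.6
print(["answer" if s >= threshold else "abstain" for s in scores])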
Most reinforcement learning algorithms with formal regret guarantees assume all mistakes are reversible and rely on essentially trying all possible options. This approach leads to poor outcomes when some mistakes are irreparable or even catastrophic. We propose a variant of the contextual bandit problem where the goal is to minimize the chance of catastrophe. Specifically, we assume that the payoff each round represents the chance of avoiding catastrophe that round, and try to maximize the product of payoffs (the overall chance of avoiding catastrophe). To give the agent some chance of success, we allow a limited number of queries to a mentor and assume a Lipschitz continuous payoff function. We present an algorithm whose regret and rate of querying the mentor both approach 0 as the time horizon grows, assuming a continuous 1D state space and a relatively "simple" payoff function. We also provide a matching lower bound: without the simplicity assumption, any algorithm either constantly asks for help or is nearly guaranteed to cause catastrophe. Finally, we identify the key obstacle to generalizing our algorithm to a multi-dimensional state space.
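Concretely, if $\mu(s_t, a_t) \in [0,1]$ denotes the chance of avoiding catastrophe in round $t$, the objective described above is multiplicative rather than additive:

\[ \Pr[\text{avoid catastrophe}] = \prod_{t=1}^{T} \mu(s_t, a_t), \]

and regret compares this product against the one obtained by following the mentor's policy throughout (a sketch of the quantity being compared; the paper's formal regret definition may differ in details).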
A two-sided market consists of two sets of agents, each of whom have preferences over the other (Airbnb, Upwork, Lyft, Uber, etc.). We propose and analyze a repeated matching problem, where some set of matches occur on each time step, and our goal is to ensure fairness with respect to the cumulative allocations over an infinite time horizon. Our main result is a polynomial-time algorithm for additive, symmetric (v_i(j) = v_j(i)), and binary (v_i(j) \in \{a,1\}) valuations that both (1) guarantees "envy-freeness up to a single match" (EF1) and (2) selects a maximum weight matching on each time step. Thus for this class of valuations, fairness can be achieved without sacrificing economic efficiency. This result holds even for "dynamic valuations", i.e., valuations that change over time. Although symmetry is a strong assumption, we show that this result cannot be extended to asymmetric binary valuations: (1) and (2) together are impossible even when valuations do not change over time, and for dynamic valuations, even (1) alone is impossible. To our knowledge, this is the first analysis of envy-freeness in a repeated matching setting.
We study market mechanisms for allocating divisible goods to competing agents with quasilinear utilities. For \emph{linear} pricing (i.e., the cost of a good is proportional to the quantity purchased), the First Welfare Theorem states that Walrasian equilibria maximize the sum of agent valuations. This ensures efficiency, but can lead to extreme inequality across individuals. Many real-world markets -- especially for water -- use \emph{convex} pricing instead, often known as increasing block tariffs (IBTs). IBTs are thought to promote equality, but there is a dearth of theoretical support for this claim. In this paper, we study a simple convex pricing rule and show that the resulting equilibria are guaranteed to maximize a CES welfare function. Furthermore, a parameter of the pricing rule directly determines which CES welfare function is implemented; by tweaking this parameter, the social planner can precisely control the tradeoff between equality and efficiency. Our result holds for any valuations that are homogeneous, differentiable, and concave. We also give an iterative algorithm for computing these pricing rules, derive a truthful mechanism for the case of a single good, and discuss Sybil attacks.
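For reference, the CES welfare family mentioned above, in a standard parameterization (normalizations vary across papers): for utilities $u_1, \dots, u_n$ and a parameter $\rho \le 1$, $\rho \ne 0$,

\[ W_\rho(u) = \Big( \sum_{i=1}^{n} u_i^{\rho} \Big)^{1/\rho}, \]

with $\rho = 1$ giving utilitarian welfare, $\rho \to 0$ Nash welfare (the geometric mean), and $\rho \to -\infty$ maxmin welfare; smaller $\rho$ weights equality more heavily.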
The goal of fair division is to distribute resources among competing players in a "fair" way. Envy-freeness is the most extensively studied fairness notion in fair division. Envy-free allocations do not always exist with indivisible goods, motivating the study of relaxed versions of envy-freeness. We study the envy-freeness up to any good (EFX) property, which states that no player prefers the bundle of another player following the removal of any single good, and prove the first general results about this property. We use the leximin solution to show existence of EFX allocations in several contexts, sometimes in conjunction with Pareto optimality. For two players with valuations obeying a mild assumption, one of these results provides stronger guarantees than the currently deployed algorithm on Spliddit, a popular fair division website. Unfortunately, finding the leximin solution can require exponential time. We show that this is necessary by proving an exponential lower bound on the number of value queries needed to identify an EFX allocation, even for two players with identical valuations. We consider both additive and more general valuations, and our work suggests that there is a rich landscape of problems to explore in the fair division of indivisible goods with different classes of player valuations.
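In symbols, the EFX property described above says that for every pair of players $i, j$ with bundles $A_i, A_j$ and every good $g \in A_j$,

\[ v_i(A_i) \ge v_i(A_j \setminus \{g\}), \]

i.e., player $i$'s envy toward $j$ disappears after removing any single good from $j$'s bundle.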
We initiate the study of the communication complexity of fair division with indivisible goods. We focus on some of the most well studied fairness notions (envy-freeness, proportionality, and approximations thereof) and valuation classes (submodular, subadditive, and unrestricted). We show that for more than two players (and any combination of other parameters), determining whether a fair allocation exists requires exponential communication (in the number of goods). For two players, tractability depends heavily on the specific combination of parameters, and most of the paper is focused on the two-player setting. Taken together, our results completely resolve whether the communication complexity of computing a fair allocation (or determining that none exists) is polynomial or exponential, for every combination of fairness notion, valuation class, and number of players, for both deterministic and randomized protocols.
In bandwidth allocation, competing agents wish to transmit data along paths of links in a network, and each agent's utility is equal to the minimum bandwidth she receives among all links in her desired path. Recent market mechanisms for this problem have either focused on only Nash welfare, or ignored strategic behavior. We propose a nonlinear variant of the classic trading post mechanism, and show that for almost the entire family of CES welfare functions (which includes maxmin welfare, Nash welfare, and utilitarian welfare), every Nash equilibrium of our mechanism is optimal. We also prove that fully strategyproof mechanisms for this problem are impossible in general, with the exception of maxmin welfare. More broadly, our work shows that even small modifications (such as allowing nonlinear constraints) can dramatically increase the power of market mechanisms like trading post.
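In symbols, the utility model above: if agent $i$ transmits along a path $P_i$ of links and receives bandwidth $x_{i,e}$ on link $e$, then

\[ u_i(x) = \min_{e \in P_i} x_{i,e}, \]

a Leontief-style utility, since the effective transmission rate is limited by the path's bottleneck link.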
We initiate the study of the communication complexity of fair division with indivisible goods. We focus on some of the most well-studied fairness notions (envy-freeness, proportionality, and approximations thereof) and valuation classes (submodular, subadditive and unrestricted). Within these parameters, our results completely resolve whether the communication complexity of computing a fair allocation (or determining that none exist) is polynomial or exponential (in the number of goods), for every combination of fairness notion, valuation class, and number of players, for both deterministic and randomized protocols.
A public decision-making problem consists of a set of issues, each with multiple possible alternatives, and a set of competing agents, each with a preferred alternative for each issue. We study adaptations of market economies to this setting, focusing on binary issues. Issues have prices, and each agent is endowed with artificial currency that she can use to purchase probability for her preferred alternatives (we allow randomized outcomes). We first show that when each issue has a single price that is common to all agents, market equilibria can be arbitrarily bad. This negative result motivates a different approach. We present a novel technique called "pairwise issue expansion", which transforms any public decision-making instance into an equivalent Fisher market, the simplest type of private goods market. This is done by expanding each issue into many goods: one for each pair of agents who disagree on that issue. We show that the equilibrium prices in the constructed Fisher market yield a "pairwise pricing equilibrium" in the original public decision-making problem which maximizes Nash welfare. More broadly, pairwise issue expansion uncovers a powerful connection between the public decision-making and private goods settings; this immediately yields several interesting results about public decisions markets, and furthers the hope that we will be able to find a simple iterative voting protocol that leads to near-optimum decisions.
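A minimal Python sketch of the pairwise issue expansion construction described above, illustrative only: it builds the set of Fisher-market goods, one per issue per disagreeing pair of agents, and omits the valuations and endowments that complete the market.

from itertools import combinations

def pairwise_issue_expansion(preferences):
    """preferences[agent][issue] in {0, 1} for binary issues.

    Returns one good (issue, agent_a, agent_b) for each pair of
    agents who disagree on that issue."""
    agents = list(preferences)
    num_issues = len(next(iter(preferences.values())))
    goods = []
    for issue in range(num_issues):
        for a, b in combinations(agents, 2):
            if preferences[a][issue] != preferences[b][issue]:
                goods.append((issue, a, b))
    return goods

prefs = {"alice": [1, 0, 1], "bob": [0, 0, 1], "carol": [1, 1, 0]}
print(pairwise_issue_expansion(prefs))  # e.g., (0, 'alice', 'bob'), ...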
We study the allocation of divisible goods to competing agents via a market mechanism, focusing on agents with Leontief utilities. The majority of the economics and mechanism design literature has focused on \emph{linear} prices, meaning that the cost of a good is proportional to the quantity purchased. Equilibria for linear prices are known to be exactly the maximum Nash welfare allocations. \emph{Price curves} allow the cost of a good to be any (increasing) function of the quantity purchased. We show that price curve equilibria are not limited to maximum Nash welfare allocations with two main results. First, we show that an allocation can be supported by strictly increasing price curves if and only if it is \emph{group-domination-free}. A similar characterization holds for weakly increasing price curves. We use this to show that given any allocation, we can compute strictly (or weakly) increasing price curves that support it (or show that none exist) in polynomial time. These results involve a connection to the \emph{agent-order matrix} of an allocation, which may have other applications. Second, we use duality to show that in the bandwidth allocation setting, any allocation maximizing a CES welfare function can be supported by price curves.
A kidney exchange is an organized barter market where patients in need of a kidney swap willing but incompatible donors. Determining an optimal set of exchanges is theoretically and empirically hard. Traditionally, exchanges took place in cycles, with each participating patient-donor pair both giving and receiving a kidney. The recent introduction of chains, where a donor without a paired patient triggers a sequence of donations without requiring a kidney in return, increased the efficacy of fielded kidney exchanges---while also dramatically raising the empirical computational hardness of clearing the market in practice. While chains can be quite long, unbounded-length chains are not desirable: planned donations can fail before transplant for a variety of reasons, and the failure of a single donation causes the rest of that chain to fail, so parallel shorter chains are better in practice. In this paper, we address the tractable clearing of kidney exchanges with short cycles and chains that are long but bounded. This corresponds to the practice at most modern fielded kidney exchanges. We introduce three new integer programming formulations, two of which are compact. Furthermore, one of these models has a linear programming relaxation that is exactly as tight as the previous tightest formulation (which was not compact) for instances in which each donor has a paired patient. On real data from the UNOS nationwide exchange in the United States and the NLDKSS nationwide exchange in the United Kingdom, as well as on generated realistic large-scale data, we show that our new models are competitive with all existing solvers---in many cases outperforming all other solvers by orders of magnitude.
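For orientation, the classic cycle-and-chain integer program that this line of work builds on (the standard baseline formulation, not the paper's new compact models): with a binary variable $x_c$ for every feasible cycle or chain $c$ within the length caps, and $w_c$ its weight (e.g., number of transplants),

\[ \max \sum_{c} w_c x_c \quad \text{s.t.} \quad \sum_{c \ni v} x_c \le 1 \text{ for every vertex } v, \qquad x_c \in \{0,1\}, \]

so each patient-donor pair or altruist participates in at most one selected cycle or chain. The number of such variables grows rapidly with the chain cap, which is what motivates compact formulations.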
Kidney exchange is a barter market where patients trade willing but medically incompatible donors. These trades occur via cycles, where each patient-donor pair both gives and receives a kidney, and via chains, which begin with an altruistic donor who does not require a kidney in return. For logistical reasons, the maximum length of a cycle is typically limited to a small constant, while chains can be much longer. Given a compatibility graph of patient-donor pairs, altruists, and feasible potential transplants between them, finding even a maximum-cardinality set of vertex-disjoint cycles and chains is NP-hard. There has been much work on developing provably optimal solvers that are efficient in practice. One of the leading techniques has been branch and price, where column generation is used to incrementally bring cycles and chains into the optimization model on an as-needed basis. In particular, only positive-price columns need to be brought into the model. We prove that finding a positive-price chain is NP-complete. This shows incorrectness of two leading branch-and-price solvers that suggested polynomial-time chain pricing algorithms.
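In branch and price, a chain's "price" is its reduced cost under the current LP duals. A hedged sketch under the usual setup (the solvers' exact dual structure may differ): with a dual value $\delta_v$ for each vertex $v$'s capacity constraint, a chain $c$ with weight $w_c$ has price

\[ \bar{w}_c = w_c - \sum_{v \in c} \delta_v, \]

and column generation searches for a chain with $\bar{w}_c > 0$ to add to the model; the hardness result above says that this search problem is NP-complete.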

Commonly Cited References

Under the pari-mutuel system of betting on horse races, the final track odds are in some sense a consensus of the 'subjective odds' of the individual bettors, weighted by the amounts of their bets. We formulate the properties which this consensus must possess and prove that there always exists a unique set of odds having the required properties.
We investigate the problem of fair recommendation in the context of two-sided online platforms, comprising customers on one side and producers on the other. Traditionally, recommendation services in these platforms have focused on maximizing customer satisfaction by tailoring the results according to the personalized preferences of individual customers. However, our investigation reveals that such customer-centric design may lead to unfair distribution of exposure among the producers, which may adversely impact their well-being. On the other hand, a producer-centric design might become unfair to the customers. Thus, we consider fairness issues that span both customers and producers. Our approach involves a novel mapping of the fair recommendation problem to a constrained version of the problem of fairly allocating indivisible goods. Our proposed FairRec algorithm guarantees at least Maximin Share (MMS) of exposure for most of the producers and Envy-Free up to One item (EF1) fairness for every customer. Extensive evaluations over multiple real-world datasets show the effectiveness of FairRec in ensuring two-sided fairness while incurring a marginal loss in the overall recommendation quality.
The fair division of resources among strategic agents is an important age-old problem that has led to a rich body of literature. At the center of this literature lies the question of whether there exist mechanisms that can implement fair outcomes, despite the agents' strategic behavior. A fundamental objective function used for measuring the fairness of an allocation is the geometric mean of the agents' values, known as the Nash social welfare (NSW). This objective function is maximized by widely known solution concepts such as Nash bargaining and the competitive equilibrium with equal incomes.
Analyzing simple and natural price-adjustment processes that converge to a market equilibrium is a fundamental question in economics. Such an analysis may have implications in economic theory, computational economics, and distributed systems. Tâtonnement, proposed by Walras in 1874, is a process by which prices go up in response to excess demand, and down in response to excess supply. This paper analyzes the convergence of a time-discrete tâtonnement process, a problem that recently attracted considerable attention of computer scientists. We prove that the simple tâtonnement process that we consider converges (efficiently) to equilibrium prices and allocation in markets with nested CES-Leontief utilities, generalizing some of the previous convergence proofs for more restricted types of utility functions.
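A common discrete form of the update described above, as a sketch (the paper's exact process and step size may differ): with $z_j(p)$ the excess demand for good $j$ at prices $p$ and a step size $\lambda > 0$,

\[ p_j^{(t+1)} = p_j^{(t)} \cdot \big( 1 + \lambda \, z_j(p^{(t)}) \big), \]

so prices rise under excess demand ($z_j > 0$) and fall under excess supply ($z_j < 0$).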
We generalize the classic problem of fairly allocating indivisible goods to the problem of fair public decision making, in which a decision must be made on several social issues simultaneously, and, unlike the classic setting, a decision can provide positive utility to multiple players. We extend the popular fairness notion of proportionality (which is not guaranteeable) to our more general setting, and introduce three novel relaxations --- proportionality up to one issue, round robin share, and pessimistic proportional share --- that are also interesting in the classic goods allocation setting. We show that the Maximum Nash Welfare solution, which is known to satisfy appealing fairness properties in the classic setting, satisfies or approximates all three relaxations in our framework. We also provide polynomial time algorithms and hardness results for finding allocations satisfying these axioms, with or without insisting on Pareto optimality.
The stochastic matching problem deals with finding a maximum matching in a graph whose edges are unknown but can be accessed via queries. This is a special case of stochastic $k$-set packing, where the problem is to find a maximum packing of sets, each of which exists with some probability. In this paper, we provide edge and set query algorithms for these two problems, respectively, that provably achieve some fraction of the omniscient optimal solution. Our main theoretical result for the stochastic matching (i.e., $2$-set packing) problem is the design of an \emph{adaptive} algorithm that queries only a constant number of edges per vertex and achieves a $(1-\epsilon)$ fraction of the omniscient optimal solution, for an arbitrarily small $\epsilon > 0$. Moreover, this adaptive algorithm performs the queries in only a constant number of rounds. We complement this result with a \emph{non-adaptive} (i.e., one round of queries) algorithm that achieves a $(0.5 - \epsilon)$ fraction of the omniscient optimum. We also extend both our results to stochastic $k$-set packing by designing an adaptive algorithm that achieves a $(\frac{2}{k} - \epsilon)$ fraction of the omniscient optimal solution, again with only $O(1)$ queries per element. This guarantee is close to the best known polynomial-time approximation ratio of $\frac{3}{k+1} - \epsilon$ for the \emph{deterministic} $k$-set packing problem [Furer and Yu, 2013]. We empirically explore the application of (adaptations of) these algorithms to the kidney exchange problem, where patients with end-stage renal failure swap willing but incompatible donors. We show on both generated data and on real data from the first 169 match runs of the UNOS nationwide kidney exchange that even a very small number of non-adaptive edge queries per vertex results in large gains in expected successful matches.
In bandwidth allocation, competing agents wish to transmit data along paths of links in a network, and each agent's utility is equal to the minimum bandwidth she receives among all links in her desired path. Recent market mechanisms for this problem have either focused on only Nash welfare, or ignored strategic behavior. We propose a nonlinear variant of the classic trading post mechanism, and show that for almost the entire family of CES welfare functions (which includes maxmin welfare, Nash welfare, and utilitarian welfare), every Nash equilibrium of our mechanism is optimal. We also prove that fully strategyproof mechanisms for this problem are impossible in general, with the exception of maxmin welfare. More broadly, our work shows that even small modifications (such as allowing nonlinear constraints) can dramatically increase the power of market mechanisms like trading post.
The need for kidney exchange arises when a healthy person wishes to donate a kidney but is incompatible with her intended recipient. Two main factors determine compatibility of a donor with a patient: blood-type compatibility and tissue-type compatibility. Two or more incompatible pairs can form a cyclic exchange so that each patient can receive a kidney from a compatible donor. In addition, an exchange can be initiated by a non-directed donor (an altruistic donor who does not designate a particular intended patient), and in this case, a chain of exchanges need not form a closed cycle.
The goal of fair division is to distribute resources among competing players in a fair way. Envy-freeness is the most extensively studied fairness notion in fair division. Envy-free allocations do not always exist with indivisible goods, motivating the study of relaxed versions of envy-freeness. We study the envy-freeness up to any good (EFX) property, which states that no player prefers the bundle of another player following the removal of any single good, and prove the first general results about this property. We use the leximin solution to show existence of EFX allocations in several contexts, sometimes in conjunction with Pareto optimality. For two players with valuations obeying a mild assumption, one of these results provides stronger guarantees than the currently deployed algorithm on Spliddit, a popular fair division website. Unfortunately, finding the leximin solution can require exponential time. We show that this is necessary by proving an exponential lower bound on the number of value queries needed to identify an EFX allocation, even for two players with identical valuations. We consider both additive and more general valuations, and our work suggests that there is a rich landscape of problems to explore in the fair division of indivisible goods with different classes of player valuations.
We revisit the classic problem of fair division from a mechanism design perspective and provide an elegant truthful mechanism that yields surprisingly good approximation guarantees for the widely used solution of Proportional Fairness. This solution, which is closely related to Nash bargaining and the competitive equilibrium, is known to be not implementable in a truthful fashion, which has been its main drawback. To alleviate this issue, we propose a new mechanism, which we call the Partial Allocation mechanism, that discards a carefully chosen fraction of the allocated resources in order to incentivize the agents to be truthful in reporting their valuations. This mechanism introduces a way to implement interesting truthful outcomes in settings where monetary payments are not an option.
In participatory budgeting, communities collectively decide on the allocation of public tax dollars for local public projects. In this work, we consider the question of fairly aggregating the preferences of community members to determine an allocation of funds to projects. This problem is different from standard fair resource allocation because of public goods: the allocated goods benefit all users simultaneously. Fairness is crucial in participatory decision making, since generating equitable outcomes is an important goal of these processes. We argue that the classic game theoretic notion of core captures fairness in this setting. To compute the core, we first develop a novel characterization of a public goods market equilibrium called the Lindahl equilibrium, which is always a core solution. We then provide the first (to our knowledge) polynomial time algorithm for computing such an equilibrium for a broad set of utility functions; our algorithm also generalizes (in a non-trivial way) the well-known concept of proportional fairness. We use our theoretical insights to perform experiments on real participatory budgeting voting data. We empirically show that the core can be efficiently computed for utility functions that naturally model our practical setting, and examine the relation of the core with the familiar welfare objective. Finally, we address concerns of incentives and mechanism design by developing a randomized approximately dominant-strategy truthful mechanism building on the exponential mechanism from differential privacy.
We introduce a simple benchmark model of dynamic matching in networked markets, where agents arrive and depart stochastically and the network of acceptable transactions among agents forms a random graph. We analyze our model from three perspectives: waiting, optimization, and information. The main insight of our analysis is that waiting to thicken the market can be substantially more important than increasing the speed of transactions, and this is quite robust to the presence of waiting costs. From an optimization perspective, naive local algorithms, that choose the right time to match agents but do not exploit global network structure, can perform very close to optimal algorithms. From an information perspective, algorithms that employ even partial information on agents' departure times perform substantially better than those that lack such information. Information and waiting are complements; information about departure times is necessary for waiting to yield large gains. To elicit agents' departure times, we design an incentive-compatible continuous-time dynamic mechanism without transfers.
We survey a burgeoning and promising new research area that considers the online nature of many practical fair division problems. We identify a wide variety of such online fair division problems, as well as discuss new mechanisms and normative properties that apply to this online setting. The online nature of such fair division problems provides both opportunities and challenges, such as the possibility to develop new online mechanisms as well as the difficulty of dealing with an uncertain future.
Despite the potential of ride-hailing services to democratize the labor market, they are often accused of fostering unfair working conditions and low wages. This paper investigates the effect of algorithm design decisions on wage inequality in ride-hailing platforms. We create a simplified city environment where taxis serve passengers to emulate a working week in a worker's life. Our simulation approach overcomes the difficulties stemming from both the complexity of transportation systems and the lack of data and algorithmic transparency. We calibrate the model based on empirical data, including conditions about locations of drivers and passengers, traffic, the layout of the city, and the algorithm that matches requests with drivers. Our results show that small changes in the system parameters can cause large deviations in the income distributions of drivers, leading to an unpredictable system that often distributes vastly different incomes to identically performing drivers. As suggested by recent studies about feedback loops in algorithmic systems, these short-term income differences may result in enforced and long-term wage gaps.
A group of N individuals must choose between two collective alternatives. Under Quadratic Voting (QV), agents buy votes in favor of their preferred alternative from a clearing house, paying the square of the number of votes purchased; the sum of all votes purchased determines the outcome. We provide the first rigorous results for this mechanism, in a canonical independent private values environment with bounded value distributions. In addition to characterizing the nature of equilibria, we demonstrate that for all bounded value distributions, the utilitarian welfare losses of the mechanism as a proportion of the maximum possible welfare tend to zero as the population size becomes large.
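As a quick first-order sketch of why the quadratic cost rule matters (intuition only; the paper's equilibrium analysis is far more careful): an agent with value $u$ who buys $v$ votes pays $v^2$, so the marginal cost of a vote is $2v$. Equating this to the marginal benefit $u \cdot \frac{\partial P}{\partial v}$, where $P$ is the probability the preferred alternative wins, gives

\[ v^* = \frac{u}{2} \cdot \frac{\partial P}{\partial v}, \]

so when the marginal pivotality is approximately common across agents, votes are proportional to values, which is what drives the near-utilitarian efficiency in large populations.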
In this paper we shall study the concept of cardinal utility in three different situations (stochastic objects of choice; stochastic act of choice; independent factors of the action set) by means of the same mathematical result that gives a topological characterization of three families of parallel straight lines in a plane. This result, proved first by G. Thomsen [24] under differentiability assumptions, and later by W. Blaschke [2] in its present general form (see also W. Blaschke and G. Bol [3]), can be briefly described as follows. Consider the topological image G of a two-dimensional convex set and three families of curves in that set such that (a) exactly one curve of each family goes through each point of G, and (b) two curves of different families have at most one common point. Is there a topological transformation carrying these three families of curves into three families of parallel straight lines? If the answer is affirmative, the hexagonal configuration of Figure 1(a) is observed. Let P be an arbitrary point of G, draw through it a curve of each family, and take an arbitrary point A on one of these curves; by drawing through A the curves of the other two families, we may obtain B and B' and from them C and C'.
In this paper we offer some inequalities involving multivariate convex functions. Among other things, a refinement of the classical Jensen inequality as well as an extension of Fejér's inequality to the case of $s$-variate ($s \geq 1$) functions are included. These results are obtained with the aid of generalized simplex splines.
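For reference, the classical inequality being refined here: if $f$ is convex and $\lambda_1, \dots, \lambda_n \geq 0$ with $\sum_{i=1}^{n} \lambda_i = 1$, then
\[
f\Bigl(\sum_{i=1}^{n} \lambda_i x_i\Bigr) \;\leq\; \sum_{i=1}^{n} \lambda_i f(x_i).
\]
The paper's results sharpen bounds of this form and carry them over to the $s$-variate setting.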
We introduce a simple benchmark model of dynamic matching in networked markets, where agents arrive and depart stochastically and the network of acceptable transactions between agents forms a random graph. We analyze our model from three perspectives: waiting time, optimization, and information. The main insight of our analysis is that waiting to thicken the market can be substantially more important than increasing the speed of transactions, and this is quite robust to the presence of waiting costs. From an optimization perspective, naive local algorithms that choose the right time to match agents but do not exploit global network structure can perform very close to optimal algorithms. From an information perspective, algorithms that employ even partial information on agents' departure times perform substantially better than those that lack such information. Information and waiting are complements; information about departure times is necessary for waiting to yield large gains. To elicit agents' departure times, we design an incentive-compatible continuous-time dynamic mechanism without transfers. LINK: www.ssrn.com/abstract=2394319
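A toy simulation of the trade-off described above, with all parameters invented for illustration: agents arrive each period, each pair is mutually acceptable with probability $p$, and each agent departs after a random lifetime. A "greedy" policy matches agents the moment they arrive; a "patient" policy waits until an agent is about to depart, exploiting departure-time information.

```python
import random

def simulate(policy, T=10_000, p=0.05, mean_lifetime=20, seed=0):
    """Fraction of agents matched under a simple dynamic-matching policy.

    policy: 'greedy'  -> try to match each agent the moment it arrives;
            'patient' -> only try to match an agent when it is departing
                         (requires knowing departure times).
    Each pair of present agents is acceptable with probability p
    (edges drawn lazily and cached for consistency).
    """
    rng = random.Random(seed)
    edge_cache = {}

    def acceptable(a, b):
        key = (min(a, b), max(a, b))
        if key not in edge_cache:
            edge_cache[key] = rng.random() < p
        return edge_cache[key]

    pool = {}              # agent id -> departure time
    matched, arrived = 0, 0

    for t in range(T):
        a = t              # one agent arrives per period
        arrived += 1
        pool[a] = t + rng.expovariate(1 / mean_lifetime)

        if policy == 'greedy':
            for b in list(pool):
                if b != a and acceptable(a, b):
                    del pool[a], pool[b]
                    matched += 2
                    break

        # Agents at their departure time: the patient policy matches them now.
        for a2 in [x for x, d in pool.items() if d <= t]:
            if a2 not in pool:
                continue
            if policy == 'patient':
                for b in list(pool):
                    if b != a2 and acceptable(a2, b):
                        del pool[b]
                        matched += 2
                        break
            pool.pop(a2, None)

    return matched / arrived

print(simulate('greedy'), simulate('patient'))
```

In runs of this toy model the patient policy generally matches a larger fraction of agents, echoing (though not reproducing) the paper's analysis of why waiting plus departure-time information yields large gains.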
This classic text and reference introduces probability theory for both advanced undergraduate students of statistics and scientists in related fields, drawing on real applications in the physical and biological sciences. “The book makes probability exciting.” (Journal of the American Statistical Association)
The need for kidney exchange arises when a healthy person wishes to donate a kidney but is incompatible with her intended recipient. Two main factors determine compatibility of a donor with a patient: blood-type compatibility and tissue-type compatibility. Two or more incompatible pairs can form a cyclic exchange so that each patient can receive a kidney from a compatible donor. In addition, an exchange can be initiated by a non-directed donor (an altruistic donor who does not designate a particular intended patient), and in this case, a chain of exchanges need not form a closed cycle.
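A minimal worked example (blood types chosen by us for illustration): suppose pair 1 consists of a type-A patient with a type-B donor, and pair 2 of a type-B patient with a type-A donor. Neither donor can give to their own intended recipient, but donor 2 (type A) can give to patient 1 and donor 1 (type B) can give to patient 2, so the two pairs form a two-way cyclic exchange, assuming tissue-type compatibility as well. If a non-directed donor starts the sequence instead, the last paired donor is left free to give onward, producing an open chain rather than a cycle.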
A kidney exchange is an organized barter market where patients in need of a kidney swap willing but incompatible donors. Determining an optimal set of exchanges is theoretically and empirically hard. Traditionally, exchanges took place in cycles, with each participating patient-donor pair both giving and receiving a kidney. The recent introduction of chains, where a donor without a paired patient triggers a sequence of donations without requiring a kidney in return, increased the efficacy of fielded kidney exchanges, while also dramatically raising the empirical computational hardness of clearing the market in practice. While chains can be quite long, unbounded-length chains are not desirable: planned donations can fail before transplant for a variety of reasons, and the failure of a single donation causes the rest of that chain to fail, so parallel shorter chains are better in practice.
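A back-of-the-envelope calculation (our model, not the paper's): if each planned donation in a chain succeeds independently with probability $p$ and a failure truncates everything after it, then a chain of $L$ planned transplants completes
\[
\mathbb{E}[\text{transplants}] \;=\; \sum_{i=1}^{L} p^{i}
\]
transplants in expectation, since the $i$-th transplant happens only if the first $i$ donations all succeed. Two parallel chains of length $L$ thus yield $2\sum_{i=1}^{L} p^i$ in expectation, while a single chain of length $2L$ yields only $(1 + p^{L})\sum_{i=1}^{L} p^i$, strictly less for $p < 1$; this is the sense in which parallel shorter chains are better.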
Kidney exchange is a barter market where patients trade willing but medically incompatible donors. These trades occur via cycles, where each patient-donor pair both gives and receives a kidney, and via chains, which begin with an altruistic donor who does not require a kidney in return. For logistical reasons, the maximum length of a cycle is typically limited to a small constant, while chains can be much longer. Given a compatibility graph of patient-donor pairs, altruists, and feasible potential transplants between them, finding even a maximum-cardinality set of vertex-disjoint cycles and chains is NP-hard. There has been much work on developing provably optimal solvers that are efficient in practice. One of the leading techniques has been branch and price, where column generation is used to incrementally bring cycles and chains into the optimization model on an as-needed basis. In particular, only positive-price columns need to be brought into the model. We prove that finding a positive-price chain is NP-complete. This shows the incorrectness of two leading branch-and-price solvers, which relied on purportedly polynomial-time chain pricing algorithms.
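To make "positive price" concrete, in standard column-generation notation (assumed here, not quoted from the paper): with a dual value $\delta_v$ on each vertex's disjointness constraint, a chain $c$ of weight $w_c$ has price
\[
\text{price}(c) \;=\; w_c \;-\; \sum_{v \in c} \delta_v,
\]
and only columns with $\text{price}(c) > 0$ can improve the current LP relaxation. The hardness result says that deciding whether any such chain exists is itself NP-complete.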
Many societal decision problems lie in high-dimensional continuous spaces not amenable to the voting techniques common for their discrete or single-dimensional counterparts. These problems are typically discretized before running an election or decided upon through negotiation by representatives. We propose an algorithm called Iterative Local Voting for collective decision-making in this setting. In this algorithm, voters are sequentially sampled and asked to modify a candidate solution within some local neighborhood of its current value, as defined by a ball in some chosen norm, with the size of the ball shrinking at a specified rate. We first prove the convergence of this algorithm under appropriate choices of neighborhoods to Pareto optimal solutions with desirable fairness properties in certain natural settings: when the voters' utilities can be expressed in terms of some form of distance from their ideal solution, and when these utilities are additively decomposable across dimensions. In many of these cases, we obtain convergence to the societal welfare maximizing solution. We then describe an experiment in which we test our algorithm for the decision of the U.S. Federal Budget on Mechanical Turk with over 2,000 workers, employing neighborhoods defined by various L-norm balls. We make several observations that inform future implementations of such a procedure.
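A minimal sketch of the update loop under the $L^2$ norm, with the radius schedule $r_t = r_1/t$ and the voter model as our assumptions rather than the paper's specification: each sampled voter moves the candidate solution to the point of the current ball closest to their ideal point.

```python
import numpy as np

def iterative_local_voting(ideal_points, x0, r1=1.0, rounds=1000, seed=0):
    """Iterative Local Voting sketch with L2-ball neighborhoods.

    ideal_points: (num_voters, d) array; voter utility is assumed to decrease
                  with distance to the ideal point, so a rational voter moves
                  the solution as close to it as the current ball allows.
    x0:           initial candidate solution (length-d array).
    r1:           initial radius; the ball shrinks as r_t = r1 / t.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for t in range(1, rounds + 1):
        r_t = r1 / t                        # shrinking neighborhood
        voter = ideal_points[rng.integers(len(ideal_points))]
        delta = voter - x
        dist = np.linalg.norm(delta)
        if dist <= r_t:                     # ideal point lies inside the ball
            x = voter.copy()
        else:                               # step distance r_t toward it
            x = x + (r_t / dist) * delta
    return x

# Example: 5 voters over a 3-dimensional budget-like decision space.
voters = np.array([[0.2, 0.5, 0.3], [0.1, 0.6, 0.3],
                   [0.4, 0.4, 0.2], [0.3, 0.3, 0.4], [0.2, 0.4, 0.4]])
print(iterative_local_voting(voters, x0=[1/3, 1/3, 1/3]))
```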