
All published works (58)

AI systems increasingly support human decision-making. In many cases, despite the algorithm's superior performance, the final decision remains in human hands. For example, an AI may assist doctors in determining which diagnostic tests to run, but the doctor ultimately makes the diagnosis. This paper studies such AI-assisted decision-making settings, where the human learns through repeated interactions with the algorithm. In our framework, the algorithm -- designed to maximize decision accuracy according to its own model -- determines which features the human can consider. The human then makes a prediction based on their own less accurate model. We observe that the discrepancy between the algorithm's model and the human's model creates a fundamental tradeoff. Should the algorithm prioritize recommending more informative features, encouraging the human to recognize their importance, even if it results in less accurate predictions in the short term until learning occurs? Or is it preferable to forgo educating the human and instead select features that align more closely with their existing understanding, minimizing the immediate cost of learning? This tradeoff is shaped by the algorithm's time-discounted objective and the human's learning ability. Our results show that optimal feature selection has a surprisingly clean combinatorial characterization, reducible to a stationary sequence of feature subsets that is tractable to compute. As the algorithm becomes more "patient" or the human's learning improves, the algorithm increasingly selects more informative features, enhancing both prediction accuracy and the human's understanding. Notably, early investment in learning leads to the selection of more informative features than a later investment. We complement our analysis by showing that the impact of errors in the algorithm's knowledge is limited as it does not make the prediction directly.
We devise a general graph-theoretic framework for studying prophet inequalities. In this framework, an agent traverses a directed acyclic graph from a starting node $s$ to a target node $t$. Each edge has a value that is sampled from a known distribution. When the agent reaches a node $v$ it observes the realized values of all the outgoing edges from $v$. The agent's objective is to maximize the expected total value of the path it takes. As in prophet inequalities, we compare the agent's performance against a prophet who observes all the realizations of the edges' values ahead of time. Our analysis reveals that this ratio depends heavily on the number of paths $k$ required to cover all the nodes in the graph. In particular, we provide an algorithm that guarantees a prophet inequality ratio of $\frac{1}{2k}$ and show an upper bound of $\frac{1}{k+1}$. Our framework captures planning problems in which there is uncertainty regarding the costs/benefits of each action. In particular, it captures an over-time variant of the classic prophet inequality in which a seller leases a durable item, such as an apartment, for $n$ time units. Each period a lessee appears and may have a different value for each lease term. We obtain a tight bound of $1/2$ for this variant. To make this framework even more expressive, we further generalize it to accommodate correlations between edges originating from the same node and allow for additional constraints on the edges the agent can take. The generalized framework captures many well-studied prophet inequality problems, including $d$-dimensional matching, $k$-prophet inequality, and more.
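The traversal model above is straightforward to simulate. Below is a minimal Monte Carlo sketch (Python) of a toy instance: the agent adds each outgoing edge's realized value to the expected value-to-go of its endpoint and moves greedily, while the prophet takes the best path in hindsight. The two-path DAG, the value supports, and the greedy rule are all illustrative assumptions; this is not the paper's $\frac{1}{2k}$-competitive algorithm.

import random

# A toy instance of the DAG model: the agent sees only the realized values of
# the outgoing edges at its current node; the prophet sees everything.
EDGES = {                      # node -> list of (successor, value support)
    "s": [("a", [0, 1, 4]), ("b", [0, 2, 3])],
    "a": [("t", [0, 5])],
    "b": [("t", [1, 2])],
}

def expected_to_go(node):
    """Expected value-to-go when always taking the best edge in expectation."""
    if node == "t":
        return 0.0
    return max(sum(sup) / len(sup) + expected_to_go(nxt) for nxt, sup in EDGES[node])

def run_once(rng):
    real = {(u, v): rng.choice(sup) for u, es in EDGES.items() for v, sup in es}
    node, agent = "s", 0
    while node != "t":  # greedy: realized edge value plus expected value-to-go
        nxt = max(EDGES[node], key=lambda e: real[(node, e[0])] + expected_to_go(e[0]))[0]
        agent += real[(node, nxt)]
        node = nxt
    # The prophet picks the best s-t path in hindsight (brute force, tiny DAG).
    paths = [["s", "a", "t"], ["s", "b", "t"]]
    prophet = max(sum(real[(p[i], p[i + 1])] for i in range(len(p) - 1)) for p in paths)
    return agent, prophet

rng = random.Random(0)
results = [run_once(rng) for _ in range(20000)]
print("E[agent] / E[prophet] ~", sum(a for a, _ in results) / sum(p for _, p in results))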
A fundamental component in the theoretical school choice literature is the problem a student faces in deciding which schools to apply to. Recent models have considered a set of schools of different selectiveness and a student who is unsure of their strength and can apply to at most $k$ schools. Such models assume that the student cares solely about maximizing the quality of the school that they attend, but experience suggests that students' decisions are also influenced by a set of behavioral biases based on reputational effects: a subjective reputational benefit when admitted to a selective school, whether or not they attend; and a subjective loss based on disappointment when rejected. Guided by these observations, and inspired by recent behavioral economics work on loss aversion relative to expectations, we propose a behavioral model by which a student chooses schools to balance these behavioral effects with the quality of the school they attend. Our main results show that a student's choices change in dramatic ways when these reputation-based behavioral biases are taken into account. In particular, where a rational applicant spreads their applications evenly, a biased student applies very sparsely to highly selective schools, such that above a certain threshold they apply to only an absolute constant number of schools even as their budget of applications grows to infinity. Consequently, a biased student underperforms a rational student even when the rational student is restricted to a sufficiently large upper bound on applications and the biased student can apply to arbitrarily many. Our analysis shows that the reputation-based model is rich enough to cover a range of different ways that biased students cope with fear of rejection, including not just targeting less selective schools, but also occasionally applying to schools that are too selective, compared to rational students.
We study incentive-compatible mechanisms that maximize the Nash Social Welfare. Since traditional incentive-compatible mechanisms cannot maximize the Nash Social Welfare even approximately, we propose changing the traditional model. Inspired by a widely used charging method (e.g., royalties, a lawyer that charges some percentage of possible future compensation), we suggest charging the players some percentage of their value of the outcome. We call this model the \emph{percentage fee} model. We show that there is a mechanism that maximizes exactly the Nash Social Welfare in every setting with non-negative valuations. Moreover, we prove an analog of Roberts' theorem that essentially says that if the valuations are non-negative, then the only implementable social choice functions are those that maximize weighted variants of the Nash Social Welfare. We develop polynomial-time incentive-compatible approximation algorithms for the Nash Social Welfare with subadditive valuations and prove some hardness results.
A rapidly growing literature on lying in behavioral economics and psychology shows that individuals often do not lie even when lying maximizes their utility. In this work, we attempt to incorporate these findings into the theory of mechanism design. We consider players that have a preference for truth-telling and will only lie if their benefit from lying is sufficiently larger than the loss of the others. To accommodate such players, we introduce $\alpha$-moral mechanisms, in which the gain of a player from misreporting his true value, compared to truth-telling, is at most $\alpha$ times the loss that the others incur due to misreporting. We develop a theory of moral mechanisms in the canonical setting of single-item auctions. We identify similarities and disparities to the standard theory of truthful mechanisms. In particular, we show that the allocation function does not uniquely determine the payments and is unlikely to admit a simple characterization. In contrast, recall that monotonicity characterizes the allocation function of truthful mechanisms. Our main technical effort is invested in determining whether the auctioneer can exploit the preference for truth-telling of the players to extract more revenue compared to truthful mechanisms. We show that the auctioneer can extract more revenue when the values of the players are correlated, even when there are only two players. However, we show that truthful mechanisms are revenue-maximizing even among moral ones when the values of the players are independently drawn from certain identical distributions. As a by-product, we get an alternative proof of Myerson's characterization in the settings that we consider. We flesh out this approach by providing an alternative proof of Myerson's characterization that does not involve moral mechanisms whenever the values are independently drawn from regular distributions.
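To make the $\alpha$-morality condition concrete, here is a small brute-force checker over a grid of integer values. The example mechanism (a first-price auction in direct-revelation form) and the choice to count the seller's lost revenue among "the others" are illustrative assumptions rather than the paper's definitions; the script merely reports the smallest $\alpha$ for which the condition holds on the grid.

from itertools import product

# Check: for every value profile, player i, and misreport, i's gain from lying
# must be at most alpha times the total loss the lie inflicts on the others
# (here: the other bidder plus the seller's lost revenue -- an assumption).
GRID, N = range(6), 2

def outcome(reports):
    """First-price rule: highest report wins and pays it (ties to lower index)."""
    winner = max(range(N), key=lambda i: (reports[i], -i))
    return winner, reports[winner]

def utilities(values, reports):
    winner, price = outcome(reports)
    return [values[i] - price if i == winner else 0 for i in range(N)], price

def min_alpha():
    worst = 0.0
    for values in product(GRID, repeat=N):
        honest, rev = utilities(values, values)
        for i, lie in product(range(N), GRID):
            reports = list(values)
            reports[i] = lie
            u, rev2 = utilities(values, reports)
            gain = u[i] - honest[i]
            loss = (rev - rev2) + sum(honest[j] - u[j] for j in range(N) if j != i)
            if gain > 0:
                worst = max(worst, gain / loss if loss > 0 else float("inf"))
    return worst

print("alpha-moral on this grid for every alpha >=", min_alpha())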
We consider trading indivisible and easily transferable \emph{durable goods}, which are goods that an agent can receive, use, and trade again for a different good. This is often the case with books that can be read and later exchanged for unread ones. Other examples of such easily transferable durable goods include puzzles, video games and baby clothes. We introduce a model for the exchange of easily transferable durable goods. In our model, each agent owns a set of items and demands a different set of items. An agent is interested in receiving as many items as possible from his demand set. We consider mechanisms that exchange items in cycles in which each participating agent receives an item that he demands and gives an item that he owns. We aim to develop mechanisms that have the following properties: they are \emph{efficient}, in the sense that they maximize the total number of items that agents receive from their demand set, they are \emph{strategyproof} (i.e., it is in the agents' best interest to report their preferences truthfully) and they run in \emph{polynomial time}. One challenge in developing mechanisms for our setting is that the supply and demand sets of the agents are updated after a trade cycle is executed. This makes constructing strategyproof mechanisms in our model significantly different from previous works, both technically and conceptually, and requires developing new tools and techniques. We prove that simultaneously satisfying all desired properties is impossible and thus focus on studying the tradeoffs between these properties. To this end, we provide both approximation algorithms and impossibility results.
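As a concrete illustration of how supply and demand sets update after each executed cycle, here is a greedy cycle-trading loop: it repeatedly finds some cycle in which every agent receives a demanded item from the next agent, executes the trades, and updates the sets. This greedy heuristic is only a sketch of the model's mechanics, assumed for illustration; it carries none of the efficiency or strategyproofness guarantees studied in the paper.

def find_cycle(owns, demands):
    """Find agents a1 -> a2 -> ... -> a1 where each agent gets an item from its successor."""
    agents = list(owns)
    succ = {}  # agent -> (supplier, demanded item), one arbitrary choice per agent
    for a in agents:
        for b in agents:
            if a != b and demands[a] & owns[b]:
                succ[a] = (b, next(iter(demands[a] & owns[b])))
                break
    for start in agents:           # walk supplier pointers until someone repeats
        node, seen = start, {}
        while node in succ and node not in seen:
            seen[node] = succ[node]
            node = succ[node][0]
        if node in seen:           # a repeated agent closes a cycle
            cycle, a = [], node
            while True:
                b, item = seen[a]
                cycle.append((a, b, item))
                a = b
                if a == node:
                    return cycle
    return None

def trade(owns, demands):
    while (cycle := find_cycle(owns, demands)) is not None:
        for receiver, giver, item in cycle:   # execute, then update supply/demand
            owns[giver].remove(item)
            owns[receiver].add(item)
            demands[receiver].remove(item)
        print("executed:", cycle)

trade(owns={"Ann": {"book1"}, "Bob": {"book2"}, "Eve": {"book3"}},
      demands={"Ann": {"book2"}, "Bob": {"book3"}, "Eve": {"book1"}})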
One of the central human biases studied in behavioral economics is reference dependence - people's tendency to evaluate an outcome not in absolute terms but instead relative to a reference point that reflects some notion of the status quo [4]. Reference dependence interacts closely with a related behavioral bias, loss aversion, in which people weigh losses more strongly than gains of comparable absolute values. Taken together, these two effects produce a fundamental behavioral regularity in human choices: once a reference point has been established, people tend to avoid outcomes in which they experience a loss relative to the reference point. A well-known instance of the effect is the empirical evidence that individual investors will tend to avoid selling a stock unless it has exceeded the price at which they purchased it.
We present a novel model for capturing the behavior of an agent exhibiting sunk-cost bias in a stochastic environment. Agents exhibiting sunk-cost bias take into account the effort they have already spent on an endeavor when they evaluate whether to continue or abandon it. We model planning tasks in which an agent with this type of bias tries to reach a designated goal. Our model structures this problem as a type of Markov decision process: loosely speaking, the agent traverses a directed acyclic graph with probabilistic transitions, paying costs for its actions as it tries to reach a target node containing a specified reward. The agent's sunk-cost bias is modeled by a cost that it incurs for abandoning the traversal: if the agent decides to stop traversing the graph, it incurs a cost of $\lambda \cdot C_{sunk}$, where ${\lambda \geq 0}$ is a parameter that captures the extent of the bias and $C_{sunk}$ is the sum of costs already invested. We analyze the behavior of two types of agents: naive agents that are unaware of their bias, and sophisticated agents that are aware of it. Since optimal (bias-free) behavior in this problem can involve abandoning the traversal before reaching the goal, the bias exhibited by these types of agents can result in sub-optimal behavior by shifting their decisions about abandonment. We show that in contrast to optimal agents, it is computationally hard to compute the optimal policy for a sophisticated agent. Our main results quantify the loss exhibited by these two types of agents with respect to an optimal agent. We present both general and topology-specific bounds.
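The abandonment rule is easy to simulate. The sketch below uses a chain task whose step costs are revealed on arrival; at each step the agent compares the value of pressing on against the perceived value $-\lambda \cdot C_{sunk}$ of quitting. The two-point cost distribution and the myopic comparison are illustrative assumptions: they mimic the flavor of the model but reproduce neither the paper's DAG construction nor its exact naive/sophisticated planning rules.

import random

STEPS, REWARD = 4, 15.0
COSTS, PROBS = (1.0, 12.0), (0.8, 0.2)
MEAN = sum(c * p for c, p in zip(COSTS, PROBS))

def run(lam, rng):
    sunk = 0.0
    for t in range(STEPS):
        c = rng.choices(COSTS, PROBS)[0]                # this step's realized cost
        press_on = REWARD - c - MEAN * (STEPS - t - 1)  # finish at expected cost
        give_up = -lam * sunk                           # perceived value of quitting
        if press_on < give_up:
            return True, -sunk                          # abandoned; sunk costs lost
        sunk += c
    return False, REWARD - sunk                         # reached the goal

rng = random.Random(7)
for lam in (0.0, 0.5, 2.0):
    runs = [run(lam, rng) for _ in range(50000)]
    rate = sum(q for q, _ in runs) / len(runs)
    payoff = sum(u for _, u in runs) / len(runs)
    print(f"lambda={lam}: abandonment {rate:.1%}, mean realized payoff {payoff:.2f}")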
People are often reluctant to sell a house, or shares of stock, below the price at which they originally bought it. While this is generally not consistent with rational utility maximization, it does reflect two strong empirical regularities that are central to the behavioral science of human decision-making: a tendency to evaluate outcomes relative to a reference point determined by context (in this case the original purchase price), and the phenomenon of loss aversion in which people are particularly prone to avoid outcomes below the reference point. Here we explore the implications of reference points and loss aversion in optimal stopping problems, where people evaluate a sequence of options in one pass, either accepting the option and stopping the search or giving up on the option forever. The best option seen so far sets a reference point that shifts as the search progresses, and a biased decision-maker's utility incurs an additional penalty when they accept a later option that is below this reference point. We formulate and study a behaviorally well-motivated version of the optimal stopping problem that incorporates these notions of reference dependence and loss aversion. We obtain tight bounds on the performance of a biased agent in this model relative to the best option obtainable in retrospect (a type of prophet inequality for biased agents), as well as tight bounds on the ratio between the performance of a biased agent and the performance of a rational one. We further establish basic monotonicity results, and show an exponential gap between the performance of a biased agent in a stopping problem with respect to a worst-case versus a random order. As part of this, we establish fundamental differences between optimal stopping problems for rational versus biased agents, and these differences inform our analysis.
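On small instances the biased-optimal stopping rule can be computed exactly by backward induction. The sketch below assumes $n$ i.i.d. draws from a discrete uniform distribution, an initial reference point of $0$, forced acceptance of the final draw, and biased acceptance utility $x - \ell \cdot \max(0, b - x)$ where $b$ is the best value seen so far; these are illustrative modeling choices, and the paper's bounds are analytic rather than computational.

VALS, N = range(10), 5    # uniform draws on {0,...,9}, five options

def policy_value(ell):
    """Expected TRUE value collected by the policy that optimizes biased utility."""
    # V[b]: biased continuation value; T[b]: true value under the same policy.
    V = {b: sum(x - ell * max(0, b - x) for x in VALS) / len(VALS) for b in VALS}
    T = {b: sum(VALS) / len(VALS) for b in VALS}     # final round: must accept
    for _ in range(N - 1):                           # back up one round at a time
        V2, T2 = {}, {}
        for b in VALS:
            v = t = 0.0
            for x in VALS:
                accept = x - ell * max(0, b - x)
                if accept >= V[max(b, x)]:           # stop: take x, feel the penalty
                    v, t = v + accept, t + x
                else:                                # continue with reference max(b, x)
                    v, t = v + V[max(b, x)], t + T[max(b, x)]
            V2[b], T2[b] = v / len(VALS), t / len(VALS)
        V, T = V2, T2
    return T[0]

for ell in (0.0, 1.0, 3.0):
    print(f"ell={ell}: expected true value collected = {policy_value(ell):.3f}")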
Exclusive social groups are ones in which the group members decide whether or not to admit a candidate to the group. Examples of exclusive social groups include academic departments and fraternal organizations. In this article, we introduce an analytic framework for studying the dynamics of exclusive social groups. In our model, every group member is characterized by his opinion, which is represented as a point on the real line. The group evolves in discrete time steps through a voting process carried out by the group's members. Due to homophily, each member votes for the candidate who is more similar to him (i.e., closer to him on the line). An admission rule is then applied to determine which candidate, if any, is admitted. We consider several natural admission rules including majority and consensus. We ask: How do different admission rules affect the composition of the group in the long run? We study both growing groups (where new members join old ones) and fixed-size groups (where new members replace those who quit). Our analysis reveals intriguing phenomena and phase transitions, some of which are quite counterintuitive.
We study combinatorial auctions with bidders that exhibit the endowment effect. In most of the previous work on cognitive biases in algorithmic game theory (e.g., [Kleinberg and Oren, EC'14] and its follow-ups) the focus was on analyzing the implications and mitigating their negative consequences. In contrast, in this paper we show how in some cases cognitive biases can be harnessed to obtain better outcomes. Specifically, we study Walrasian equilibria in combinatorial markets. It is well known that Walrasian equilibria exist only in limited settings, e.g., when all valuations are gross substitutes, but fail to exist in more general settings, e.g., when the valuations are submodular. We consider combinatorial settings in which bidders exhibit the endowment effect, that is, their value for items increases with ownership. Our main result shows that when the valuations are submodular, even a mild degree of endowment effect is sufficient to guarantee the existence of Walrasian equilibria. In fact, we show that in contrast to Walrasian equilibria with standard utility maximizing bidders -- in which the equilibrium allocation must be efficient -- when bidders exhibit the endowment effect any local optimum can be an equilibrium allocation. Our techniques reveal interesting connections between the LP relaxation of combinatorial auctions and local maxima. We also provide lower bounds on the intensity of the endowment effect that the bidders must have in order to guarantee the existence of a Walrasian equilibrium in various settings.
We study a variant of Vickrey's classic bottleneck model. In our model there are $n$ agents and each agent strategically chooses when to join a first-come-first-served observable queue. Agents dislike standing in line and they take actions in discrete time steps: we assume that each agent has a cost of $1$ for every time step he waits before joining the queue and a cost of $w>1$ for every time step he waits in the queue. At each time step a single agent can be processed. Before each time step, every agent observes the queue and strategically decides whether or not to join, with the goal of minimizing his expected cost. In this paper we focus on symmetric strategies which are arguably more natural as they require less coordination. This brings up the following twist to the usual price of anarchy question: what is the main source of the inefficiency of symmetric equilibria? Is it the players' strategic behavior or the lack of coordination? We present results for two different parameter regimes that are qualitatively very different: (i) when $w$ is fixed and $n$ grows, we prove a tight bound of $2$ and show that the entire loss is due to the players' selfish behavior; (ii) when $n$ is fixed and $w$ grows, we prove a tight bound of $\Theta \left(\sqrt{\frac{w}{n}}\right)$ and show that it is mainly due to lack of coordination: the same order of magnitude of loss is suffered by any symmetric profile.
Recent work has considered theoretical models for the behavior of agents with specific behavioral biases: rather than making decisions that optimize a given payoff function, the agent behaves inefficiently because its decisions suffer from an underlying bias. These approaches have generally considered an agent who experiences a single behavioral bias, studying the effect of this bias on the outcome. In general, however, decision-making can and will be affected by multiple biases operating at the same time. How do multiple biases interact to produce the overall outcome? Here we consider decisions in the presence of a pair of biases exhibiting an intuitively natural interaction: present bias -- the tendency to value costs incurred in the present too highly -- and sunk-cost bias -- the tendency to incorporate costs experienced in the past into one's plans for the future. We propose a theoretical model for planning with this pair of biases, and we show how certain natural behavioral phenomena can arise in our model only when agents exhibit both biases. As part of our model we differentiate between agents that are aware of their biases (sophisticated) and agents that are unaware of them (naive). Interestingly, we show that the interaction between the two biases is quite complex: in some cases, they mitigate each other's effects while in other cases they might amplify each other. We obtain a number of further results as well, including the fact that the planning problem in our model for an agent experiencing and aware of both biases is computationally hard in general, though tractable under more relaxed assumptions.
Present bias, the tendency to weigh costs and benefits incurred in the present too heavily, is one of the most widespread human behavioral biases. It has also been the subject of extensive study in the behavioral economics literature. While the simplest models assume that the agents are naive, reasoning about the future without taking their bias into account, there is considerable evidence that people often behave in ways that are sophisticated with respect to present bias, making plans based on the belief that they will be present-biased in the future. For example, committing to a course of action to reduce future opportunities for procrastination or overconsumption are instances of sophisticated behavior in everyday life. Models of sophisticated behavior have lacked an underlying formalism that allows one to reason over the full space of multi-step tasks that a sophisticated agent might face. This has made it correspondingly difficult to make comparative or worst-case statements about the performance of sophisticated agents in arbitrary scenarios. In this paper, we incorporate the notion of sophistication into a graph-theoretic model that we used in recent work for modeling naive agents. This new synthesis of two formalisms - sophistication and graph-theoretic planning - uncovers a rich structure that wasn't apparent in the earlier behavioral economics work on this problem. In particular, our graph-theoretic model makes two kinds of new results possible. First, we give tight worst-case bounds on the performance of sophisticated agents in arbitrary multi-step tasks relative to the optimal plan. Second, the flexibility of our formalism makes it possible to identify new phenomena that had not been seen in prior literature: these include a surprising non-monotonic property in the use of rewards to motivate sophisticated agents and a framework for reasoning about commitment devices.
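A compact way to see the backward-induction structure of sophistication is the sketch below: the self standing at a node inflates its next edge by a bias factor $b$ and evaluates each option against the true cost of the route its future biased selves will actually follow. The tiny DAG and the bias factor are illustrative assumptions in the spirit of this family of models, not an instance or algorithm from the paper.

from functools import lru_cache

GRAPH = {"s": {"a": 2, "b": 6}, "a": {"t": 10}, "b": {"t": 3}, "t": {}}
B = 3.0  # present-bias factor: the next edge looms B times larger

@lru_cache(maxsize=None)
def plan(v):
    """(true cost of the route future biased selves take from v, next node)."""
    if v == "t":
        return 0.0, None
    # Sophisticated evaluation: inflated immediate cost plus the true cost of
    # whatever the (equally biased) future selves will do afterwards.
    nxt = min(GRAPH[v], key=lambda w: B * GRAPH[v][w] + plan(w)[0])
    return GRAPH[v][nxt] + plan(nxt)[0], nxt

@lru_cache(maxsize=None)
def opt(v):  # bias-free shortest-path baseline
    return 0.0 if v == "t" else min(c + opt(w) for w, c in GRAPH[v].items())

node, route, paid = "s", ["s"], 0.0
while node != "t":
    _, nxt = plan(node)
    paid += GRAPH[node][nxt]
    node = nxt
    route.append(node)
print(" -> ".join(route), "| true cost:", paid, "| optimal:", opt("s"))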
One of the fundamental principles driving diversity or homogeneity in domains such as cultural differentiation, political affiliation, and product adoption is the tension between two forces: influence (the tendency of people to become similar to others they interact with) and selection (the tendency to be affected most by the behavior of others who are already similar). Influence tends to promote homogeneity within a society, while selection frequently causes fragmentation. When both forces act simultaneously, it becomes an interesting question to analyze which societal outcomes should be expected. To study this issue more formally, we analyze a natural stylized model built upon active lines of work in political opinion formation, cultural diversity, and language evolution. We assume that the population is partitioned into "types" according to some traits (such as language spoken or political affiliation). While all types of people interact with one another, only people with sufficiently similar types can possibly influence one another. The "similarity" is captured by a graph on types in which individuals of the same or adjacent types can influence one another. We achieve an essentially complete characterization of (stable) equilibrium outcomes and prove convergence from all starting states. We also consider generalizations of this model.
A fundamental decision faced by a firm hiring employees --- and a familiar one to anyone who has dealt with the academic job market, for example --- is deciding what caliber of candidates to pursue. Should the firm try to increase its reputation by making offers to higher-quality candidates, despite the risk that the candidates might reject the offers and leave the firm empty-handed? Or is it better to play it safe and go for weaker candidates who are more likely to accept the offer? The question acquires an added level of complexity once we take into account the effect one hiring cycle has on the next: hiring better employees in the current cycle increases the firm's reputation, which in turn increases its attractiveness for higher-quality candidates in the next hiring cycle. These considerations introduce an interesting temporal dynamic aspect to the rich line of research on matching models for job markets, in which long-range planning and evolving reputational effects enter into the strategic decisions made by competing firms.
We study the necessity of interaction between individuals for obtaining approximately efficient economic allocations. We view this as a formalization of Hayek's classic point of view that focuses on the information transfer advantages that markets have relative to centralized planning. We study two settings: combinatorial auctions with unit demand bidders (bipartite matching) and combinatorial auctions with subadditive bidders. In both settings we prove that non-interactive protocols require exponentially larger communication costs than interactive ones, even those that only use a modest amount of interaction.
In many settings, people exhibit behavior that is inconsistent across time --- we allocate a block of time to get work done and then procrastinate, or put effort into a project and then later fail to complete it. An active line of research in behavioral economics and related fields has developed and analyzed models for this type of time-inconsistent behavior. Here we propose a graph-theoretic model of tasks and goals, in which dependencies among actions are represented by a directed graph, and a time-inconsistent agent constructs a path through this graph. We first show how instances of this path-finding problem on different input graphs can reconstruct a wide range of qualitative phenomena observed in the literature on time-inconsistency, including procrastination, abandonment of long-range tasks, and the benefits of reduced sets of choices. We then explore a set of analyses that quantify over the set of all graphs; among other results, we find that in any graph, there can be only polynomially many distinct forms of time-inconsistent behavior; and any graph in which a time-inconsistent agent incurs significantly more cost than an optimal agent must contain a large "procrastination" structure as a minor. Finally, we use this graph-theoretic model to explore ways in which tasks can be designed to help motivate agents to reach designated goals.
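The procrastination phenomenon mentioned in the abstract can be reproduced in a few lines. In the sketch below, a naive agent inflates only its immediate edge by a bias factor, trusts its future self to act optimally, moves one edge, and re-plans. The do-the-chore-now-or-defer chain is a standard illustration assumed here, not a construction from the paper; the agent defers every day and ends up paying more than the optimal plan.

from functools import lru_cache

# Each day: do the chore (cost 8) or defer it one day (cost 1); day3 is the deadline.
GRAPH = {f"day{i}": {"t": 8, f"day{i+1}": 1} for i in range(3)}
GRAPH["day3"], GRAPH["t"] = {"t": 8}, {}
B = 3.0  # present-bias factor

@lru_cache(maxsize=None)
def dist(v):  # true shortest-path cost to the goal
    return 0.0 if v == "t" else min(c + dist(w) for w, c in GRAPH[v].items())

node, paid = "day0", 0.0
while node != "t":
    # Naive evaluation: inflate the immediate edge only, trust the future self.
    nxt = min(GRAPH[node], key=lambda w: B * GRAPH[node][w] + dist(w))
    paid += GRAPH[node][nxt]
    node = nxt
print("true cost paid:", paid, "| optimal:", dist("day0"))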
We study the necessity of interaction between individuals for obtaining approximately efficient allocations. The role of interaction in markets has received significant attention in economic thinking, e.g. in Hayek's 1945 classic paper. We consider this problem in the framework of simultaneous communication complexity. We analyze the amount of simultaneous communication required for achieving an approximately efficient allocation. In particular, we consider two settings: combinatorial auctions with unit demand bidders (bipartite matching) and combinatorial auctions with subadditive bidders. For both settings we first show that non-interactive systems have enormous communication costs relative to interactive ones. On the other hand, we show that limited interaction enables us to find approximately efficient allocations.
An active line of research has considered games played on networks in which payoffs depend on both a player's individual decision and also the decisions of his or her neighbors. Such games have been used to model issues including the formation of opinions (in which people wish to express views consistent with those of their friends) and technology adoption (in which people or firms seek compatibility with their network neighbors).
Human societies exhibit many forms of cultural diversity --- in the languages that are spoken, in the opinions and values that are held, and in many other dimensions. An active body of research in the mathematical social sciences has developed models for reasoning about the origins of this diversity, and about how it evolves over time.
An active line of research has considered games played on networks in which payoffs depend on both a player's individual decision and also the decisions of her neighbors. Such games have been used to model issues including the formation of opinions and the adoption of technology. A basic question that has remained largely open in this area is to consider games where the strategies available to the players come from a fixed, discrete set, and where players may have different intrinsic preferences among the possible strategies. It is natural to model the tension among these different preferences by positing a distance function on the strategy set that determines a notion of "similarity" among strategies; a player's payoff is determined by the distance from her chosen strategy to her preferred strategy and to the strategies chosen by her network neighbors. Even when there are only two strategies available, this framework already leads to natural open questions about a version of the classical Battle of the Sexes problem played on a graph. We develop a set of techniques for analyzing this class of games, which we refer to as discrete preference games. We parametrize the games by the relative extent to which a player takes into account the effect of her preferred strategy and the effect of her neighbors' strategies, allowing us to interpolate between network coordination games and unilateral decision-making. When these two effects are balanced, we show that the price of stability is equal to 1 for any discrete preference game in which the distance function on the strategies is a tree metric; as a special case, this includes the Battle of the Sexes on a graph. We also show that trees form the maximal family of metrics for which the price of stability is 1, and produce a collection of metrics on which the price of stability converges to a tight bound of 2.
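For the binary case (the Battle of the Sexes on a graph, whose 0/1 metric is a tree metric), both the dynamics and the price-of-stability comparison fit in a short script. The five-player instance below is an illustrative assumption: the script runs best-response dynamics from the profile of preferred strategies and then enumerates all profiles to compare the best equilibrium against the social optimum.

from itertools import product

EDGES = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
PREF = [0, 0, 1, 1, 1]                  # each player's intrinsic preference
N = len(PREF)
NBRS = [[v for e in EDGES for v in e if u in e and v != u] for u in range(N)]

def cost(i, s):  # balanced weights: own preference counts like one neighbor
    return (s[i] != PREF[i]) + sum(s[i] != s[j] for j in NBRS[i])

def flip(s, i):
    return s[:i] + [1 - s[i]] + s[i + 1:]

s, moved = list(PREF), True
while moved:                             # best-response dynamics until stable
    moved = False
    for i in range(N):
        if cost(i, flip(s, i)) < cost(i, s):
            s, moved = flip(s, i), True

def social(s):
    return sum(cost(i, s) for i in range(N))

profiles = [list(p) for p in product((0, 1), repeat=N)]
equilibria = [p for p in profiles
              if all(cost(i, p) <= cost(i, flip(p, i)) for i in range(N))]
print("reached equilibrium:", s, "with social cost", social(s))
print("best equilibrium:", min(map(social, equilibria)),
      "| social optimum:", min(map(social, profiles)))  # equal here, i.e. PoS = 1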
We introduce the class of pay or play games, which captures scenarios in which each decision maker is faced with a choice between two actions: one with a fixed payoff and another with a payoff dependent on others' selected actions. This is, arguably, the simplest setting that models selection among certain and uncertain outcomes in a multi-agent system. We study the properties of equilibria in such games from both a game-theoretic perspective and a computational perspective. Our main positive result establishes the existence of a semi-strong equilibrium in every such game. We show that although simple, pay or play games contain a large variety of well-studied environments, e.g., vaccination games. We discuss the interesting implications of our results for these environments.
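Because each player has only two actions, pure equilibria of small pay-or-play games can be enumerated directly, as in the sketch below. The vaccination-flavored numbers (a fixed cost of 2 for "pay", and a "play" payoff that worsens by 1 for every player who plays) are an illustrative assumption, not an instance from the paper.

from itertools import product

N, FIXED = 4, -2.0   # "pay": take the certain payoff (e.g., vaccinate at cost 2)

def payoff(i, profile):
    if profile[i] == "pay":
        return FIXED
    return -1.0 * sum(a == "play" for a in profile)  # risk grows with free-riders

def is_nash(profile):
    for i in range(N):
        dev = "pay" if profile[i] == "play" else "play"
        if payoff(i, profile[:i] + (dev,) + profile[i + 1:]) > payoff(i, profile):
            return False
    return True

for p in product(("pay", "play"), repeat=N):
    if is_nash(p):
        print(p)   # here: exactly the profiles with one or two players playing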
Coordination is a challenging everyday task; just think of the last time you organized a party or a meeting involving several people. As a growing part of our social and professional life goes online, an opportunity for an improved coordination process arises. Recently, Gupta et al. proposed entangled queries as a declarative abstraction for data-driven coordination, where the difficulty of the coordination task is shifted from the user to the database. Unfortunately, evaluating entangled queries is very hard, and thus previous work considered only a restricted class of queries that satisfy safety (the coordination partners are fixed) and uniqueness (all queries need to be satisfied). In this paper we significantly extend the class of feasible entangled queries beyond uniqueness and safety. First, we show that we can simply drop uniqueness and still efficiently evaluate a set of safe entangled queries. Second, we show that as long as all users coordinate on the same set of attributes, we can give an efficient algorithm for coordination even if the set of queries does not satisfy safety. In an experimental evaluation we show that our algorithms are feasible for a wide spectrum of coordination scenarios.

Commonly Cited References

A long-standing line of work in economic theory has studied models by which a group of people in a social network, each holding a numerical opinion, can arrive at a shared opinion through repeated averaging with their neighbors in the network. Motivated by the observation that consensus is rarely reached in real opinion dynamics, we study a related sociological model in which individuals' intrinsic beliefs counterbalance the averaging process and yield a diversity of opinions. By interpreting the repeated averaging as best-response dynamics in an underlying game with natural payoffs, and the limit of the process as an equilibrium, we are able to study the cost of disagreement in these models relative to a social optimum. We provide a tight bound on the cost at equilibrium relative to the optimum; our analysis draws a connection between these agreement models and extremal problems for generalized eigenvalues. We also consider a natural network design problem in this setting, where adding links to the underlying network can reduce the cost of disagreement at equilibrium.
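Concretely, one standard quadratic-cost instantiation of such a model (a sketch under our own naming, not code from the paper) has player i pay $(z_i - s_i)^2 + \sum_{j \in N(i)} (z_i - z_j)^2$, so her best response is the average of her intrinsic belief and her neighbors' current opinions:

```python
def repeated_averaging(graph, intrinsic, rounds=10_000, tol=1e-9):
    """Best-response dynamics for quadratic-cost opinion formation (sketch).

    graph: dict mapping each player to a list of neighbors.
    intrinsic: dict of intrinsic beliefs s_i.
    Player i's cost is (z_i - s_i)^2 + sum over neighbors j of (z_i - z_j)^2,
    so her best response averages s_i with her neighbors' opinions.
    """
    z = dict(intrinsic)
    for _ in range(rounds):
        new = {i: (intrinsic[i] + sum(z[j] for j in graph[i])) / (1 + len(graph[i]))
               for i in graph}
        if max(abs(new[i] - z[i]) for i in graph) < tol:
            return new  # (approximate) equilibrium: a diversity of opinions persists
        z = new
    return z

# A path of three players with intrinsic beliefs 0, 1/2, 1 does not reach consensus:
# the limit is roughly {0: 0.25, 1: 0.5, 2: 0.75}.
graph = {0: [1], 1: [0, 2], 2: [1]}
print(repeated_averaging(graph, {0: 0.0, 1: 0.5, 2: 1.0}))
```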
In many settings, people exhibit behavior that is inconsistent across time --- we allocate a block of time to get work done and then procrastinate, or put effort into a project and then later fail to complete it. An active line of research in behavioral economics and related fields has developed and analyzed models for this type of time-inconsistent behavior. Here we propose a graph-theoretic model of tasks and goals, in which dependencies among actions are represented by a directed graph, and a time-inconsistent agent constructs a path through this graph. We first show how instances of this path-finding problem on different input graphs can reconstruct a wide range of qualitative phenomena observed in the literature on time-inconsistency, including procrastination, abandonment of long-range tasks, and the benefits of reduced sets of choices. We then explore a set of analyses that quantify over the set of all graphs; among other results, we find that in any graph, there can be only polynomially many distinct forms of time-inconsistent behavior; and any graph in which a time-inconsistent agent incurs significantly more cost than an optimal agent must contain a large "procrastination" structure as a minor. Finally, we use this graph-theoretic model to explore ways in which tasks can be designed to help motivate agents to reach designated goals.
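As an illustration of the path-finding dynamics, the sketch below implements a naive present-biased agent in the style of the model described above (names and the (successor, cost) edge representation are our own choices): the bias parameter b > 1 inflates only the immediate edge cost, while the rest of the journey is evaluated at its true shortest-path cost.

```python
import heapq

def cost_to_go(graph, target):
    """True shortest-path cost from every node to the target
    (Dijkstra on the reversed graph; edges are (successor, cost) pairs)."""
    rev = {}
    for u, edges in graph.items():
        for v, c in edges:
            rev.setdefault(v, []).append((u, c))
    dist, pq = {target: 0.0}, [(0.0, target)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist.get(v, float("inf")):
            continue
        for u, c in rev.get(v, []):
            if d + c < dist.get(u, float("inf")):
                dist[u] = d + c
                heapq.heappush(pq, (d + c, u))
    return dist

def naive_agent_path(graph, source, target, b):
    """Route taken by a naive agent with present-bias parameter b > 1:
    at each node it picks the edge minimizing b * (immediate cost) plus
    the true remaining distance, re-planning at every step."""
    dist = cost_to_go(graph, target)
    path, v = [source], source
    while v != target:
        v = min(graph[v], key=lambda e: b * e[1] + dist[e[0]])[0]
        path.append(v)
    return path

# Two routes: "work" costs 4 now, "delay" costs 1 now but 4 later.
g = {"s": [("work", 4.0), ("delay", 1.0)], "work": [("t", 0.0)],
     "delay": [("t", 4.0)], "t": []}
print(naive_agent_path(g, "s", "t", b=2.0))  # with b = 2 the agent procrastinates
```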
We show that every universally truthful randomized mechanism for combinatorial auctions with submodular valuations that provides an approximation ratio of $m^{1/2-\varepsilon}$ must use exponentially many value queries, where $m$ is the number of items. In contrast, ignoring incentives there exist constant ratio approximation algorithms for this problem. Our approach is based on a novel direct hardness technique that completely skips the notoriously hard step of characterizing truthful mechanisms. The characterization step was the main obstacle for proving impossibility results in algorithmic mechanism design so far. We demonstrate two additional applications of our new technique: (1) an impossibility result for universally-truthful polynomial time flexible combinatorial public projects and (2) an impossibility result for truthful-in-expectation mechanisms for exact combinatorial public projects. The latter is the first result that bounds the power of polynomial-time truthful in expectation mechanisms in any setting.
Statistical physics has proven to be a fruitful framework to describe phenomena outside the realm of traditional physics. Recent years have witnessed an attempt by physicists to study collective phenomena emerging from the interactions of individuals as elementary units in social structures. A wide range of topics is reviewed, from opinion, cultural, and language dynamics to crowd behavior, hierarchy formation, human dynamics, and social spreading. The connections between these problems and other, more traditional, topics of statistical physics are highlighted. Comparisons of model results with empirical data from social systems are also emphasized.
Present bias, the tendency to weigh costs and benefits incurred in the present too heavily, is one of the most widespread human behavioral biases. It has also been the subject of extensive study in the behavioral economics literature. While the simplest models assume that decision-making agents are naive, reasoning about the future without taking their bias into account, there is considerable evidence that people often behave in ways that are sophisticated with respect to present bias, making plans based on the belief that they will be present-biased in the future. For example, committing to a course of action to reduce future opportunities for procrastination or overconsumption is an instance of sophisticated behavior in everyday life.
We study a novel class of mechanism design problems in which the outcomes are constrained by the payments. This basic class of mechanism design problems captures many common economic situations, and yet it has not been studied, to our knowledge, in the past. We focus on the case of procurement auctions in which sellers have private costs, and the auctioneer aims to maximize a utility function on subsets of items, under the constraint that the sum of the payments provided by the mechanism does not exceed a given budget. Standard mechanism design ideas such as the VCG mechanism and its variants are not applicable here. We show that, for general functions, the budget constraint can render mechanisms arbitrarily bad in terms of the utility of the buyer. However, our main result shows that for the important class of submodular functions, a bounded approximation ratio is achievable. Better approximation results are obtained for subclasses of the submodular functions. We explore the space of budget feasible mechanisms in other domains and give a characterization under more restricted conditions.
Submodular function maximization is a central problem in combinatorial optimization, generalizing many important problems including Max Cut in directed/undirected graphs and in hypergraphs, certain constraint satisfaction problems, maximum entropy sampling, and maximum facility location problems. Unlike submodular minimization, submodular maximization is NP-hard. In this paper, we give the first constant-factor approximation algorithm for maximizing any non-negative submodular function subject to multiple matroid or knapsack constraints. We emphasize that our results are for non-monotone submodular functions. In particular, for any constant $k$, we present a $\frac{1}{k+2+1/k+\varepsilon}$-approximation for the submodular maximization problem under $k$ matroid constraints, and a $(\frac{1}{5}-\varepsilon)$-approximation algorithm for this problem subject to $k$ knapsack constraints ($\varepsilon>0$ is any constant). We improve the approximation guarantee of our algorithm to $\frac{1}{k+1+1/(k-1)+\varepsilon}$ for $k\ge 2$ partition matroid constraints. This idea also gives a $\frac{1}{k+\varepsilon}$-approximation for maximizing a monotone submodular function subject to $k\ge 2$ partition matroids, which improves over the previously best known guarantee of $\frac{1}{k+1}$.
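For contrast with these constrained, non-monotone guarantees, the classical baseline is the greedy algorithm for monotone submodular maximization under a cardinality constraint, shown below as a minimal sketch (this is the textbook $(1-1/e)$ routine of Nemhauser, Wolsey, and Fisher, not the algorithm of the reference above):

```python
def greedy_submodular(f, ground, k):
    """Greedy for maximizing a monotone non-negative submodular f under |S| <= k.
    Classical guarantee: f(greedy) >= (1 - 1/e) * f(optimum)."""
    S = set()
    for _ in range(k):
        candidates = [e for e in ground if e not in S]
        if not candidates:
            break
        best = max(candidates, key=lambda e: f(S | {e}) - f(S))
        if f(S | {best}) - f(S) <= 0:
            break  # no element has positive marginal value
        S.add(best)
    return S

# Coverage functions are a canonical monotone submodular example.
sets = {"a": {1, 2}, "b": {2, 3, 4}, "c": {4, 5}}
cover = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
print(greedy_submodular(cover, sets, 2))  # picks "b" first (gain 3), then "a" or "c"
```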
Individuals working towards a goal often exhibit time inconsistent behavior, making plans and then failing to follow through. One well-known model of such behavioral anomalies is present-bias discounting: individuals over-weight present costs by a bias factor. This model explains many time-inconsistent behaviors, but can make stark predictions in many settings: individuals either follow the most efficient plan for reaching their goal or procrastinate indefinitely. We propose a modification in which the present-bias parameter can vary over time, drawn independently each step from a fixed distribution. Following Kleinberg and Oren (2014), we use a weighted task graph to model task planning, and measure the cost of procrastination as the relative expected cost of the chosen path versus the optimal path. We use a novel connection to optimal pricing theory to describe the structure of the worst-case task graph for any present-bias distribution. We then leverage this structure to derive conditions on the bias distribution under which the worst-case ratio is exponential (in time) or constant. We also examine conditions on the task graph that lead to improved procrastination ratios: graphs with a uniformly bounded distance to the goal, and graphs in which the distance to the goal monotonically decreases on any path.
Recent work has considered theoretical models for the behavior of agents with specific behavioral biases: rather than making decisions that optimize a given payoff function, the agent behaves inefficiently because its decisions suffer from an underlying bias. These approaches have generally considered an agent who experiences a single behavioral bias, studying the effect of this bias on the outcome. In general, however, decision-making can and will be affected by multiple biases operating at the same time. How do multiple biases interact to produce the overall outcome? Here we consider decisions in the presence of a pair of biases exhibiting an intuitively natural interaction: present bias -- the tendency to value costs incurred in the present too highly -- and sunk-cost bias -- the tendency to incorporate costs experienced in the past into one's plans for the future.
We investigate complexity issues related to pure Nash equilibria of strategic games. We show that, even in very restrictive settings, determining whether a game has a pure Nash equilibrium is NP-hard, while deciding whether a game has a strong Nash equilibrium is $\Sigma_2^P$-complete. We then study practically relevant restrictions that lower the complexity. In particular, we are interested in quantitative and qualitative restrictions of the way each player's payoff depends on moves of other players. We say that a game has small neighborhood if the utility function for each player depends only on (the actions of) a logarithmically small number of other players. The dependency structure of a game $G$ can be expressed by a graph $DG(G)$ or by a hypergraph $H(G)$. By relating Nash equilibrium problems to constraint satisfaction problems (CSPs), we show that if $G$ has small neighborhood and if $H(G)$ has bounded hypertree width (or if $DG(G)$ has bounded treewidth), then finding pure Nash and Pareto equilibria is feasible in polynomial time. If the game is graphical, then these problems are LOGCFL-complete and thus in the class $NC^2$ of highly parallelizable problems.
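The exhaustive procedure below makes the baseline concrete: with no structural restriction, certifying a pure Nash equilibrium means checking every unilateral deviation in every strategy profile, which is exponential in the number of players (a hypothetical sketch; the names and utility interface are ours):

```python
import itertools

def pure_nash_equilibria(num_players, actions, utility):
    """Enumerate all pure Nash equilibria of a normal-form game by brute force.

    actions: list of action tuples, one per player.
    utility: utility(i, profile) -> payoff of player i under the profile.
    The search is exponential in the number of players, consistent with the
    hardness results above: no general shortcut is expected unless the
    dependency structure of the game is restricted.
    """
    equilibria = []
    for profile in itertools.product(*actions):
        if all(utility(i, profile) >= utility(i, profile[:i] + (a,) + profile[i + 1:])
               for i in range(num_players) for a in actions[i]):
            equilibria.append(profile)
    return equilibria

# A two-player coordination game has two pure equilibria.
u = lambda i, p: 1 if p[0] == p[1] else 0
print(pure_nash_equilibria(2, [("A", "B"), ("A", "B")], u))  # [('A','A'), ('B','B')]
```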
One of the fundamental questions of Algorithmic Mechanism Design is whether there exists an inherent clash between truthfulness and computational tractability: in particular, whether polynomial-time truthful mechanisms for combinatorial auctions are provably weaker in terms of approximation ratio than non-truthful ones. This question was very recently answered for universally truthful mechanisms for combinatorial auctions [4], and even for truthful-in-expectation mechanisms [12]. However, both of these results are based on information-theoretic arguments for valuations given by a value oracle, and leave open the possibility of polynomial-time truthful mechanisms for succinctly described classes of valuations.
Submodular maximization generalizes many fundamental problems in discrete optimization, including Max-Cut in directed/undirected graphs, maximum coverage, maximum facility location and marketing over social networks. In this paper we consider the problem of maximizing any submodular function subject to $d$ knapsack constraints, where $d$ is a fixed constant. We establish a strong relation between the discrete problem and its continuous relaxation, obtained through extension by expectation of the submodular function. Formally, we show that, for any non-negative submodular function, an $\alpha$-approximation algorithm for the continuous relaxation implies a randomized $(\alpha - \varepsilon)$-approximation algorithm for the discrete problem. We use this relation to improve the best known approximation ratio for the problem to $1/4 - \varepsilon$, for any $\varepsilon > 0$, and to obtain a nearly optimal $(1-e^{-1}-\varepsilon)$-approximation ratio for the monotone case, for any $\varepsilon > 0$. We further show that the probabilistic domain defined by a continuous solution can be reduced to yield a polynomial size domain, given an oracle for the extension by expectation. This leads to a deterministic version of our technique.
The authors consider processes on social networks that can potentially involve three factors: homophily, or the formation of social ties due to matching individual traits; social contagion, also known as social influence; and the causal effect of an individual's covariates on his or her behavior or other measurable responses. The authors show that generically, all of these are confounded with each other. Distinguishing them from one another requires strong assumptions on the parametrization of the social process or on the adequacy of the covariates used (or both). In particular the authors demonstrate, with simple examples, that asymmetries in regression coefficients cannot identify causal effects and that very simple models of imitation (a form of social contagion) can produce substantial correlations between an individual's enduring traits and his or her choices, even when there is no intrinsic affinity between them. The authors also suggest some possible constructive responses to these results.
Central results in economics guarantee the existence of efficient equilibria for various classes of markets. An underlying assumption in early work is that agents are price-takers, i.e., agents honestly report their true demand in response to prices. A line of research in economics, initiated by Hurwicz (1972), is devoted to understanding how such markets perform when agents are strategic about their demands. This is captured by the Walrasian Mechanism that proceeds by collecting reported demands, finding clearing prices in the reported market via an ascending price tatonnement procedure, and returns the resulting allocation. Similar mechanisms are used, for example, in the daily opening of the New York Stock Exchange and the call market for copper and gold in London.
Budget feasible mechanism design studies procurement combinatorial auctions where the sellers have private costs to produce items, and the buyer (auctioneer) aims to maximize a social valuation function on subsets of items, under the budget constraint on the total payment. One of the most important questions in the field is "which valuation domains admit truthful budget feasible mechanisms with 'small' approximations (compared to the social optimum)?" Singer showed that additive and submodular functions have such constant approximations. Recently, Dobzinski, Papadimitriou, and Singer gave an $O(\log^2 n)$-approximation mechanism for subadditive functions; they also remarked that: "A fundamental question is whether, regardless of computational constraints, a constant-factor budget feasible mechanism exists for subadditive functions." We address this question from two viewpoints: prior-free worst case analysis and Bayesian analysis. For the prior-free framework, we use an LP that describes the fractional cover of the valuation function; it is also connected to the concept of approximate core in cooperative game theory. We provide an $O(I)$-approximation mechanism for subadditive functions, via the worst case integrality gap $I$ of the LP. This implies an $O(\log n)$-approximation for subadditive valuations, $O(1)$-approximation for XOS valuations, and for valuations with a constant $I$. XOS valuations are an important class of functions that lie between submodular and subadditive classes. We give another polynomial time $O(\log n/\log\log n)$ sub-logarithmic approximation mechanism for subadditive valuations. For the Bayesian framework, we provide a constant approximation mechanism for all subadditive functions, using the above prior-free mechanism for XOS valuations as a subroutine. Our mechanism allows correlations in the distribution of private information and is universally truthful.
General criteria are given to ensure that in a family of discrete random processes, given parameters exhibit convergence to the solution of a system of differential equations. As one application we consider random graph processes in which the maximum degree is bounded and show that the numbers of vertices of given degree exhibit this convergence as the total number of vertices tends to infinity. Two other applications are to random processes which generate independent sets of vertices in random $r$-regular graphs. In these cases, we deduce almost sure lower bounds on the size of independent sets of vertices in random $r$-regular graphs.
Shayan Oveis Gharan and Jan Vondrák. Submodular Maximization by Simulated Annealing. In Proceedings of the 2011 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 1098-1116. We consider the problem of maximizing a non-negative (possibly non-monotone) submodular set function with or without constraints. Feige et al. [9] showed a 2/5-approximation for the unconstrained problem and also proved that no approximation better than 1/2 is possible in the value oracle model. Constant-factor approximation has also been known for submodular maximization subject to a matroid independence constraint (a factor of 0.309 [33]) and for submodular maximization subject to a matroid base constraint, provided that the fractional base packing number $\nu$ is bounded away from 1 (a 1/4-approximation assuming that $\nu \ge 2$ [33]). In this paper, we propose a new algorithm for submodular maximization which is based on the idea of simulated annealing. We prove that this algorithm achieves improved approximation for two problems: a 0.41-approximation for unconstrained submodular maximization, and a 0.325-approximation for submodular maximization subject to a matroid independence constraint. On the hardness side, we show that in the value oracle model it is impossible to achieve a 0.478-approximation for submodular maximization subject to a matroid independence constraint, or a 0.394-approximation subject to a matroid base constraint in matroids with two disjoint bases. Even for the special case of cardinality constraint, we prove it is impossible to achieve a 0.491-approximation. (Previously it was conceivable that a 1/2-approximation exists for these problems.) It is still an open question whether a 1/2-approximation is possible for unconstrained submodular maximization.
We study a combinatorial market design problem, where a collection of indivisible objects is to be priced and sold to potential buyers subject to equilibrium constraints. The classic solution concept for such problems is Walrasian equilibrium (WE), which provides a simple and transparent pricing structure that achieves optimal social welfare. The main weakness of the WE notion is that it exists only in very restrictive cases. To overcome this limitation, we introduce the notion of a combinatorial Walrasian equilibrium (CWE), a natural relaxation of WE. The difference between a CWE and a (noncombinatorial) WE is that the seller can package the items into indivisible bundles prior to sale, and the market does not necessarily clear. We show that every valuation profile admits a CWE that obtains at least half the optimal (unconstrained) social welfare. Moreover, we devise a polynomial time algorithm that, given an arbitrary allocation, computes a CWE that achieves at least half its welfare. Thus, the economic problem of finding a CWE with high social welfare reduces to the algorithmic problem of social-welfare approximation. In addition, we show that every valuation profile admits a CWE that extracts a logarithmic fraction of the optimal welfare as revenue. Finally, to motivate the use of bundles, we establish strong lower bounds when the seller is restricted to using item prices only. The strength of our results derives partly from their generality---our results hold for arbitrary valuations that may exhibit complex combinations of substitutes and complements.
We consider the problem of maximizing a non-negative submodular set function $f : 2^N \to \mathbb{R}_+$ over a ground set $N$ subject to a variety of packing type constraints including (multiple) matroid constraints, knapsack constraints, and their intersections. In this paper we develop a general framework that allows us to derive a number of new results, in particular when $f$ may be a non-monotone function. Our algorithms are based on (approximately) solving the multilinear extension $F$ of $f$ [5] over a polytope $P$ that represents the constraints, and then effectively rounding the fractional solution. Although this approach has been used quite successfully in some settings [6, 22, 24, 13, 3], it has been limited in some important ways. We overcome these limitations as follows.
How can minority cultures resist assimilation into a global monolith in an increasingly “small world”? Paradoxically, Axelrod found that local convergence can actually preserve global diversity if cultural influence is combined with homophily, the principle that “likes attract.” However, follow-up studies showed that this diversity collapses under random cultural perturbation. The authors discovered a new source of this fragility—the assumption in Axelrod’s model that cultural influence is interpersonal (dyadic). The authors replicated previous models but with the more empirically plausible assumption that influence is social—people can be simultaneously influenced by several network neighbors. Computational experiments show that cultural diversity then becomes much more robust than in Axelrod’s original model or in published variations that included either social influence or homophily but not both. The authors conclude that global diversity may be sustained not by cultural experimentation and innovation but by the ability of cultural groups to discourage those activities.
Budget feasible mechanism design studies procurement combinatorial auctions in which the sellers have private costs to produce items, and the buyer (auctioneer) aims to maximize a social valuation function on subsets of items, under the budget constraint on the total payment. One of the most important questions in the field is which valuation domains admit truthful budget feasible mechanisms with 'small' approximations (compared to the social optimum)? Singer [35] showed that additive and submodular functions have a constant approximation mechanism. Recently, Dobzinski, Papadimitriou, and Singer [20] gave an $O(\log^2 n)$-approximation mechanism for subadditive functions; further, they remarked that: "A fundamental question is whether, regardless of computational constraints, a constant-factor budget feasible mechanism exists for subadditive functions." In this paper, we address this question from two viewpoints: prior-free worst case analysis and Bayesian analysis, the two standard approaches from computer science and economics, respectively. For the prior-free framework, we use a linear program (LP) that describes the fractional cover of the valuation function; the LP is also connected to the concept of approximate core in cooperative game theory. We provide a mechanism for subadditive functions whose approximation is $O(I)$, via the worst case integrality gap $I$ of this LP. This implies an $O(\log n)$-approximation for subadditive valuations, $O(1)$-approximation for XOS valuations, as well as for valuations having a constant integrality gap. XOS valuations are an important class of functions and lie between the submodular and the subadditive classes of valuations. We further give another polynomial time $O(\log n/\log\log n)$ sub-logarithmic approximation mechanism for subadditive functions. Both of our mechanisms improve the best known approximation ratio $O(\log^2 n)$. For the Bayesian framework, we provide a constant approximation mechanism for all subadditive functions, using the above prior-free mechanism for XOS valuations as a subroutine. Our mechanism allows correlations in the distribution of private information and is universally truthful.
Human societies exhibit many forms of cultural diversity --- in the languages that are spoken, in the opinions and values that are held, and in many other dimensions. An active body of research in the mathematical social sciences has developed models for reasoning about the origins of this diversity, and about how it evolves over time.
Time-inconsistency refers to a paradox in decision making where agents exhibit inconsistent behaviors over time. Examples are procrastination, where agents tend to postpone easy tasks, and abandonment, where agents start a plan and quit in the middle. To capture such behaviors and to quantify the inefficiency they cause, Kleinberg and Oren (2014) propose a graph model with a certain cost structure and initiate the study of several interesting computational problems: 1) cost ratio: the worst ratio between the actual cost of the agent and the optimal cost, over all graph instances; 2) motivating subgraph: how to motivate the agent to reach the goal by deleting nodes and edges; 3) intermediate rewards: how to incentivize agents to reach the goal by placing intermediate rewards. Kleinberg and Oren give partial answers to these questions, but the main problems are open. In this paper, we give answers to all three open problems. First, we show a tight upper bound on the cost ratio for graphs, and confirm the conjecture by Kleinberg and Oren that Akerlof's structure is indeed the worst case for the cost ratio. Second, we prove that finding a motivating subgraph is NP-hard, showing that it is generally inefficient to motivate agents by deleting nodes and edges in the graph. Last but not least, we show that computing a strategy to place the minimum amount of total reward is also NP-hard, and we provide a 2n-approximation algorithm.
Simultaneous item auctions are simple and practical procedures for allocating items to bidders with potentially complex preferences. In a simultaneous auction, every bidder submits independent bids on all items simultaneously. The allocation and prices are then resolved for each item separately, based solely on the bids submitted on that item. We study the efficiency of Bayes-Nash equilibrium (BNE) outcomes of simultaneous first- and second-price auctions when bidders have complement-free (a.k.a. subadditive) valuations. While it is known that the social welfare of every pure Nash equilibrium (NE) constitutes a constant fraction of the optimal social welfare, a pure NE rarely exists, and moreover, the full information assumption is often unrealistic. Therefore, quantifying the welfare loss in Bayes-Nash equilibria is of particular interest. Previous work established a logarithmic bound on the ratio between the social welfare of a BNE and the expected optimal social welfare in both first-price auctions (Hassidim et al., 2011) and second-price auctions (Bhawalkar and Roughgarden, 2011), leaving a large gap between a constant and a logarithmic ratio. We introduce a new proof technique and use it to resolve both of these gaps in a unified way. Specifically, we show that the expected social welfare of any BNE is at least 1/2 of the optimal social welfare in the case of first-price auctions, and at least 1/4 in the case of second-price auctions.
Studies of cultural differentiation have shown that social mechanisms that normally lead to cultural convergence—homophily and influence—can also explain how distinct cultural groups can form. However, this emergent cultural diversity has proven to be unstable in the face of cultural drift—small errors or innovations that allow cultures to change from within. The authors develop a model of cultural differentiation that combines the traditional mechanisms of homophily and influence with a third mechanism of network homophily, in which network structure co-evolves with cultural interaction. Results show that in certain regions of the parameter space, these co-evolutionary dynamics can lead to patterns of cultural diversity that are stable in the presence of cultural drift. The authors address the implications of these findings for understanding the stability of cultural diversity in the face of increasing technological trends toward globalization.
Submodular function maximization is a central problem in combinatorial optimization, generalizing many important problems including Max Cut in directed/undirected graphs and in hypergraphs, certain constraint satisfaction problems, maximum entropy sampling, and maximum facility location problems. Unlike submodular minimization, submodular maximization is NP-hard. For the problem of maximizing a non-monotone submodular function, Feige, Mirrokni, and Vondrák recently developed a $\frac{2}{5}$-approximation algorithm [FMV07]; however, their algorithms do not handle side constraints. In this paper, we give the first constant-factor approximation algorithm for maximizing any non-negative submodular function subject to multiple matroid or knapsack constraints. We emphasize that our results are for non-monotone submodular functions. In particular, for any constant $k$, we present a $\frac{1}{k+2+1/k+\varepsilon}$-approximation for the submodular maximization problem under $k$ matroid constraints, and a $(\frac{1}{5}-\varepsilon)$-approximation algorithm for this problem subject to $k$ knapsack constraints ($\varepsilon>0$ is any constant). We improve the approximation guarantee of our algorithm to $\frac{1}{k+1+1/(k-1)+\varepsilon}$ for $k\ge 2$ partition matroid constraints. This idea also gives a $\frac{1}{k+\varepsilon}$-approximation for maximizing a monotone submodular function subject to $k\ge 2$ partition matroids, which improves over the previously best known guarantee of $\frac{1}{k+1}$.
In this paper we present improved bounds for approximating maximum matchings in bipartite graphs in the streaming model. First, we consider the question of how well maximum matching can be approximated in a single pass over the input when Õ(n) space is allowed, where n is the number of vertices in the input graph. Two natural variants of this problem have been considered in the literature: (1) the edge arrival setting, where edges arrive in the stream and (2) the vertex arrival setting, where vertices on one side of the graph arrive in the stream together with all their incident edges. The latter setting has also been studied extensively in the context of online algorithms, where each arriving vertex has to either be matched irrevocably or discarded upon arrival. In the online setting, the celebrated algorithm of Karp-Vazirani-Vazirani achieves a 1 − 1/e approximation by crucially using randomization (and using Õ(n) space). Despite the fact that the streaming model is less restrictive in that the algorithm is not constrained to match vertices irrevocably upon arrival, the best known approximation in the streaming model with vertex arrivals and Õ(n) space is the same factor of 1 − 1/e.
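For intuition, the simplest one-pass baseline in the vertex-arrival setting is greedy maximal matching, which uses O(n) space and is a 1/2-approximation; the question discussed above is how far one can push beyond this (and beyond 1 − 1/e) in a single pass. A minimal sketch, with an assumed (left_vertex, neighbors) stream format:

```python
def greedy_streaming_matching(arrivals):
    """One-pass greedy matching in the vertex-arrival streaming model.

    arrivals: iterable of (left_vertex, list_of_right_neighbors).
    Each arriving left vertex is matched to an arbitrary free neighbor.
    Greedy produces a maximal matching, hence a 1/2-approximation."""
    matched_right = set()
    matching = {}
    for u, neighbors in arrivals:
        for v in neighbors:
            if v not in matched_right:
                matching[u] = v
                matched_right.add(v)
                break
    return matching

stream = [("u1", ["a", "b"]), ("u2", ["a"]), ("u3", ["b", "c"])]
print(greedy_streaming_matching(stream))  # {'u1': 'a', 'u3': 'b'}; u2 stays unmatched
```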
Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.
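A minimal simulation of the preferential-attachment mechanism (an illustrative sketch; the seed graph and bookkeeping are our own choices): keeping one list entry per edge endpoint makes a uniform draw from that list exactly degree-proportional.

```python
import random

def preferential_attachment(n, m, seed=None):
    """Grow a graph on n vertices where each new vertex attaches m edges to
    existing vertices chosen with probability proportional to degree."""
    rng = random.Random(seed)
    edges = [(0, i) for i in range(1, m + 1)]   # seed star on m + 1 vertices
    endpoints = [u for e in edges for u in e]   # one entry per edge endpoint
    for v in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:                  # m distinct degree-proportional picks
            chosen.add(rng.choice(endpoints))
        for u in chosen:
            edges.append((v, u))
            endpoints += [v, u]
    return edges

g = preferential_attachment(10_000, 2, seed=0)
# The resulting degree sequence is heavy-tailed (P(k) ~ k^-3 in the limit).
```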
We present fast algorithms for sketching valuation functions. Let $N$ ($|N| = n$) be some ground set and $v : 2^N \to \mathbb{R}$ be a function. We say that $\tilde v : 2^N \to \mathbb{R}$ is an $\alpha$-sketch of $v$ if for every set $S$ we have that $v(S)/\alpha \le \tilde v(S) \le v(S)$ and $\tilde v$ can be described in $\mathrm{poly}(n)$ bits. Goemans et al. [SODA'09] showed that if $v$ is submodular then there exists an $\tilde O(\sqrt n)$-sketch that can be constructed using polynomially many value queries (this is essentially the best possible, as Balcan and Harvey [STOC'11] show that no submodular function admits an $n^{1/3-\epsilon}$-sketch). Based on their work, Balcan et al. [COLT'12] and Badanidiyuru et al. [SODA'12] show that if $v$ is subadditive, then there exists an $\tilde O(\sqrt n)$-sketch that can be constructed using polynomially many demand queries. All previous sketches are based on complicated geometric constructions. The first step in their constructions is proving the existence of a good sketch by finding an ellipsoid that "approximates" $v$ well (this is done by applying John's theorem to ensure the existence of an ellipsoid that is "close" to the polymatroid that is associated with $v$). The second step is to show that this ellipsoid can be found efficiently, and this is done by repeatedly solving a certain convex program to obtain better approximations of John's ellipsoid. In this article, we give a significantly simpler, nongeometric proof for the existence of good sketches and utilize the proof to obtain much faster algorithms that match the previously obtained approximation bounds. Specifically, we provide an algorithm that finds an $\tilde O(\sqrt n)$-sketch of a submodular function with only $\tilde O(n^{3/2})$ value queries, and we provide an algorithm that finds an $\tilde O(\sqrt n)$-sketch of a subadditive function with $O(n)$ demand and value queries.
We study the problem of allocating a set of indivisible goods among a set of agents in a fair and efficient manner. An allocation is said to be fair if it is envy-free up to one good (EF1), which means that each agent prefers its own bundle over the bundle of any other agent up to the removal of one good. In addition, an allocation is deemed efficient if it satisfies Pareto efficiency. While each of these well-studied properties is easy to achieve separately, achieving them together is far from obvious. Recently, Caragiannis et al. (2016) established the surprising result that when agents have additive valuations for the goods, there always exists an allocation that simultaneously satisfies these two seemingly incompatible properties. Specifically, they showed that an allocation that maximizes the Nash social welfare objective is both EF1 and Pareto efficient. However, the problem of maximizing Nash social welfare is NP-hard. As a result, this approach does not provide an efficient algorithm for finding a fair and efficient allocation. In this paper, we bypass this barrier, and develop a pseudopolynomial time algorithm for finding allocations that are EF1 and Pareto efficient; in particular, when the valuations are bounded, our algorithm finds such an allocation in polynomial time. Furthermore, we establish a stronger existence result compared to Caragiannis et al. (2016): For additive valuations, there always exists an allocation that is EF1 and fractionally Pareto efficient. Another key contribution of our work is to show that our algorithm provides a polynomial-time 1.45-approximation to the Nash social welfare objective. This improves upon the best known approximation ratio for this problem (namely, the 2-approximation algorithm of Cole et al., 2017), and also matches the lower bound on the integrality gap of the convex program of Cole et al. (2017). Unlike many of the existing approaches, our algorithm is completely combinatorial, and relies on constructing integral Fisher markets wherein specific equilibria are not only efficient, but also fair.
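The EF1 condition itself is easy to state operationally; the sketch below (with an assumed dict-of-dicts valuation format) checks it for additive valuations by testing whether each pairwise envy can be eliminated by removing a single good from the envied bundle:

```python
def is_ef1(valuations, allocation):
    """Check envy-freeness up to one good (EF1) for additive valuations.

    valuations[i][g]: value of agent i for good g.
    allocation[i]: set of goods held by agent i.
    Agent i is EF1-satisfied w.r.t. agent j if removing some single good
    from j's bundle eliminates i's envy."""
    agents = range(len(allocation))
    v = lambda i, bundle: sum(valuations[i][g] for g in bundle)
    for i in agents:
        for j in agents:
            if i == j or v(i, allocation[i]) >= v(i, allocation[j]):
                continue  # no envy toward j
            if not any(v(i, allocation[i]) >= v(i, allocation[j] - {g})
                       for g in allocation[j]):
                return False
    return True

# Two agents, three goods: agent 0 envies agent 1, but only up to one good.
vals = [{"x": 5, "y": 4, "z": 4}, {"x": 1, "y": 3, "z": 3}]
print(is_ef1(vals, [{"x"}, {"y", "z"}]))  # True: dropping y (or z) removes the envy
```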
Algorithmic mechanism design (AMD) studies the delicate interplay between computational efficiency, truthfulness, and optimality. We focus on AMD's paradigmatic problem: combinatorial auctions. We present a new generalization of the VC dimension to multivalued collections of functions, which encompasses the classical VC dimension, Natarajan dimension, and Steele dimension. We present a corresponding generalization of the Sauer-Shelah Lemma and harness this VC machinery to establish inapproximability results for deterministic truthful mechanisms. Our results essentially unify all inapproximability results for deterministic truthful mechanisms for combinatorial auctions to date and establish new separation gaps between truthful and non-truthful algorithms.
We consider the following communication problem: Alice and Bob each have valuation functions $v_1(\cdot)$ and $v_2(\cdot)$ over subsets of $m$ items, and their goal is to partition the items into $(S, \bar S)$ in a way that maximizes the welfare $v_1(S) + v_2(\bar S)$. We study both the allocation problem, which asks for a welfare-maximizing partition, and the decision problem, which asks whether or not there exists a partition guaranteeing certain welfare, for binary XOS valuations. For interactive protocols with poly(m) communication, a tight 3/4-approximation is known for both [29, 23]. For interactive protocols, the allocation problem is provably harder than the decision problem: any solution to the allocation problem implies a solution to the decision problem with one additional round and log m additional bits of communication via a trivial reduction. Surprisingly, the allocation problem is provably easier for simultaneous protocols. Specifically, we show: (1) there exists a simultaneous, randomized protocol with polynomial communication that selects a partition whose expected welfare is at least 3/4 of the optimum, matching the guarantee of the best interactive, randomized protocol with polynomial communication; (2) for all ε > 0, any simultaneous, randomized protocol that decides whether the welfare of the optimal partition is ≥ 1 or ≤ 3/4 − 1/108 + ε correctly with probability > 1/2 + 1/poly(m) requires exponential communication. This provides a separation between the attainable approximation guarantees via interactive (3/4) versus simultaneous (≤ 3/4 − 1/108) protocols with polynomial communication. In other words, the trivial reduction from decision to allocation problems provably requires the extra round of communication. We further discuss the implications of our results for the design of truthful combinatorial auctions in general, and extensions to general XOS valuations. In particular, our protocol for the allocation problem implies a new style of truthful mechanisms.
We characterize the communication complexity of truthful mechanisms. Our departure point is the well known taxation principle. The taxation principle asserts that every truthful mechanism can be interpreted as follows: every player is presented with a menu that consists of a price for each bundle (the prices depend only on the valuations of the other players). Each player is allocated a bundle that maximizes his profit according to this menu. We define the taxation complexity of a truthful mechanism to be the logarithm of the maximum number of menus that may be presented to a player. Our main finding is that in general the taxation complexity essentially equals the communication complexity. The proof consists of two main steps. First, we prove that for rich enough domains the taxation complexity is at most the communication complexity. We then show that the taxation complexity is much smaller than the communication complexity only in "pathological" cases and provide a formal description of these extreme cases. Next, we study mechanisms that access the valuations via value queries only. In this setting we establish that the menu complexity - a notion that was already studied in several different contexts - characterizes the number of value queries that the mechanism makes in exactly the same way that the taxation complexity characterizes the communication complexity. Our approach yields several applications, including strengthening the solution concept with low communication overhead, fast computation of prices, and hardness of approximation by computationally efficient truthful mechanisms.
We analyze the revenue loss due to market shrinkage. Specifically, consider a simple market with one item for sale and $n$ bidders whose values are drawn from some joint distribution. Suppose that the market shrinks as a single bidder retires from the market. Suppose furthermore that the value of this retiring bidder is fixed and always strictly smaller than the values of the other bidders. We show that even this slight decrease in competition might cause a significant fall of a multiplicative factor of $\frac{1}{e+1} \approx 0.268$ in the revenue that can be obtained by a dominant strategy ex-post individually rational mechanism. In particular, our results imply a solution to an open question that was posed by Dobzinski, Fu, and Kleinberg [STOC'11].
We study combinatorial auctions with bidders that exhibit endowment effect. In most of the previous work on cognitive biases in algorithmic game theory (e.g., [Kleinberg and Oren, EC'14] and its follow-ups) the focus was on analyzing the implications and mitigating their negative consequences. In contrast, in this paper we show how in some cases cognitive biases can be harnessed to obtain better outcomes. Specifically, we study Walrasian equilibria in combinatorial markets. It is well known that Walrasian equilibria exist only in limited settings, e.g., when all valuations are gross substitutes, but fail to exist in more general settings, e.g., when the valuations are submodular. We consider combinatorial settings in which bidders exhibit the endowment effect, that is, their value for items increases with ownership. Our main result shows that when the valuations are submodular, even a mild degree of endowment effect is sufficient to guarantee the existence of Walrasian equilibria. In fact, we show that in contrast to Walrasian equilibria with standard utility maximizing bidders -- in which the equilibrium allocation must be efficient -- when bidders exhibit endowment effect any local optimum can be an equilibrium allocation. Our techniques reveal interesting connections between the LP relaxation of combinatorial auctions and local maxima. We also provide lower bounds on the intensity of the endowment effect that the bidders must have in order to guarantee the existence of a Walrasian equilibrium in various settings.
The endowment effect, coined by Nobel Laureate Richard Thaler, posits that people tend to inflate the value of items they own. This bias has been traditionally studied mainly using experimental methodology. Recently, Babaioff, Dobzinski and Oren (2018) proposed a specific formulation of the endowment effect in combinatorial markets, and showed that the existence of Walrasian equilibrium with respect to the endowed valuations (referred to as endowment equilibrium) extends from gross substitutes to submodular valuations, but provably fails to extend to more general valuations, like XOS.
Consider a player who can probe a sequence of $n$ independent random variables $X_1, \ldots, X_n$ with known distributions. After observing (the realized value of) $X_i$, the player needs to decide whether to stop and earn reward $X_i$, or reject the reward and probe the next variable $X_{i+1}$. The goal is to maximize the expected reward at the stopping time. This is an instance of the optimal stopping problem, which is a fundamental problem studied from many different aspects in mathematics, statistics, and computer science, and has found a wide variety of applications in sequential decision making and mechanism design.
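A classical illustration is the single-threshold rule: choose T so that the maximum exceeds T with probability 1/2, then stop at the first value reaching T. By the Samuel-Cahn prophet inequality, this guarantees at least half the prophet's expected reward. The Monte Carlo sketch below (our own illustrative setup, with i.i.d. uniform values) checks this empirically:

```python
import random

def threshold_stopping(samples, threshold):
    """Accept the first realized value meeting the threshold (else take the last).
    With T chosen so that P(max_i X_i >= T) = 1/2, this single-threshold rule
    earns at least half the prophet's expected reward."""
    for x in samples[:-1]:
        if x >= threshold:
            return x
    return samples[-1]

rng, n, trials = random.Random(0), 5, 100_000
T = 0.5 ** (1 / n)  # median of the max of n uniforms: P(max >= T) = 1/2
agent = prophet = 0.0
for _ in range(trials):
    xs = [rng.random() for _ in range(n)]
    agent += threshold_stopping(xs, T)
    prophet += max(xs)
print(agent / prophet)  # comfortably above the guaranteed 1/2
```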
The endowment effect, coined by Nobel Laureate Richard Thaler, posits that people tend to inflate the value of items they own. Recently, Babaioff, Dobzinski and Oren [EC'18] introduced the notion of endowed valuations --- valuations that capture the endowment effect --- and studied the stability and efficiency of combinatorial markets with endowed valuations. They showed that under a specific formulation of the endowment effect, an endowed equilibrium --- market equilibrium with respect to endowed valuations --- is guaranteed to exist in markets with submodular valuations, but fails to exist under XOS valuations. We harness the endowment effect further by introducing a general framework that captures a wide range of different formulations of the endowment effect. The different formulations are (partially) ranked from weak to strong, based on a stability-preserving order. We then provide algorithms for computing endowment equilibria with high welfare for sufficiently strong endowment effects, and non-existence results for weaker ones. Among other results, we prove the existence of endowment equilibria under XOS valuations, and show that if one can pre-pack items into irrevocable bundles then an endowment equilibrium exists for arbitrary markets.
The World Wide Web is commonly seen as a platform that can harness the collective abilities of large numbers of people to accomplish tasks with unprecedented speed, accuracy, and scale. To explore the Web's ability for social mobilization, the Defense Advanced Research Projects Agency (DARPA) held the DARPA Network Challenge, in which competing teams were asked to locate 10 red weather balloons placed at locations around the continental United States. Using a recursive incentive mechanism that both spread information about the task and incentivized individuals to act, our team was able to find all 10 balloons in less than 9 hours, thus winning the Challenge. We analyzed the theoretical and practical properties of this mechanism and compared it with other approaches.
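A recursive incentive scheme of this kind admits a simple payout rule: the finder receives a fixed reward, and each person up the recruitment chain receives a constant fraction of the payout of the person they recruited, so the geometric series keeps total payments bounded. The sketch below uses illustrative constants (the text above does not specify the ones actually used):

```python
def recursive_payouts(chain, finder_reward=2000.0, ratio=0.5):
    """Recursive incentive payouts along a recruitment chain (sketch).

    chain[0] is the finder, chain[1] the person who recruited the finder, etc.
    Each step up the chain earns `ratio` times the payout below it, so the
    total paid is bounded by finder_reward / (1 - ratio) for any chain length.
    (Constants are illustrative, not taken from the reference above.)"""
    payouts, amount = {}, finder_reward
    for person in chain:
        payouts[person] = amount
        amount *= ratio
    return payouts

print(recursive_payouts(["finder", "recruiter", "grand_recruiter"]))
# {'finder': 2000.0, 'recruiter': 1000.0, 'grand_recruiter': 500.0}
```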