The prophet and secretary problems demonstrate online scenarios involving optimal stopping theory. In a typical prophet or secretary problem, selection decisions are assumed to be immediate and irrevocable. However, many online settings accommodate some degree of revocability. To study such scenarios, we introduce the $\ell$-out-of-$k$ setting, where the decision maker can select up to $k$ elements immediately and irrevocably, but her performance is measured by the top $\ell$ elements in the selected set. Equivalently, the decision maker can hold up to $\ell$ elements at any given point in time, but can make up to $k-\ell$ returns as new elements arrive. We give upper and lower bounds on the competitive ratio of $\ell$-out-of-$k$ prophet and secretary scenarios. For $\ell$-out-of-$k$ prophet scenarios we provide a single-sample algorithm with competitive ratio $1-\ell\cdot e^{-\Theta\left(\frac{(k-\ell)^2}{k}\right)}$. The algorithm is a single-threshold algorithm, which sets a threshold that equals the $\frac{\ell+k}{2}$th highest sample, and accepts all values exceeding this threshold, up to reaching capacity $k$. On the other hand, we show that this result is tight if the number of possible returns is linear in $\ell$ (i.e., $k-\ell=\Theta(\ell)$). In particular, we show that no single-sample algorithm obtains a competitive ratio better than $1 - \frac{2^{-(2k+1)}}{k+1}$. We also present a deterministic single-threshold algorithm for the $1$-out-of-$k$ prophet setting, which obtains a competitive ratio of $1-\frac{3}{2}\cdot e^{-k/6}$, knowing only the distribution of the maximum value. This result improves upon the result of [Assaf & Samuel-Cahn, J. of App. Prob., 2000].
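A minimal sketch of how such a single-sample, single-threshold rule operates; the exponential distributions, the integer rounding of the $\frac{\ell+k}{2}$ rank, and all names here are our own illustrative assumptions, not the paper's exact construction:

    import random

    def single_sample_select(samples, stream, l, k):
        # Threshold: the ((l + k) / 2)-th highest of the n samples
        # (rounded to an integer rank here, purely for illustration).
        rank = (l + k) // 2
        threshold = sorted(samples, reverse=True)[rank - 1]
        selected = []
        for value in stream:  # values arrive online
            if value > threshold and len(selected) < k:
                selected.append(value)  # immediate, irrevocable selection
        # Performance is measured by the top l selected values.
        return sum(sorted(selected, reverse=True)[:l])

    # Illustrative experiment: the algorithm sees one sample from each
    # of the n distributions before facing the real stream.
    random.seed(0)
    n, l, k = 200, 3, 12
    ratios = []
    for _ in range(1000):
        samples = [random.expovariate(1.0) for _ in range(n)]
        stream = [random.expovariate(1.0) for _ in range(n)]
        opt = sum(sorted(stream, reverse=True)[:l])  # prophet's benchmark
        ratios.append(single_sample_select(samples, stream, l, k) / opt)
    print(sum(ratios) / len(ratios))  # empirical competitive ratio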
In many common interactive scenarios, participants lack information about other participants, and specifically about the preferences of other participants. In this work, we model an extreme case of incomplete information, which we term games with type ambiguity, where a participant lacks even the information needed to form a belief about the preferences of others. Under type ambiguity, one cannot analyze the scenario using the commonly used Bayesian framework, and must therefore model the participants using a different decision model. In this work, we present the ${\rm MINthenMAX}$ decision model under ambiguity. This model is a refinement of Wald's MiniMax principle, which we show to be too coarse for games with type ambiguity. We characterize ${\rm MINthenMAX}$ as the finest refinement of the MiniMax principle that satisfies three properties we claim are necessary for games with type ambiguity. This prior-less approach we present here also follows the common practice in computer science of worst-case analysis. Finally, we define and analyze the corresponding equilibrium concept assuming all players follow ${\rm MINthenMAX}$. We demonstrate this equilibrium by applying it to two common economic scenarios: coordination games and bilateral trade. We show that in both scenarios, an equilibrium in pure strategies always exists, and we analyze the equilibria.
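The abstract does not reproduce the formal definition; as a hypothetical illustration, one natural reading of a "min, then max" comparison is lexicographic: prefer the action with the better worst case, breaking ties by the best case (the exact definition in the paper may differ):

    def min_then_max_prefers(outcomes_a, outcomes_b):
        # Hypothetical formalization: action A is preferred to action B if
        # A's worst possible utility is higher, with ties broken by the
        # best possible utility (lexicographic comparison).
        key_a = (min(outcomes_a), max(outcomes_a))
        key_b = (min(outcomes_b), max(outcomes_b))
        return key_a > key_b

    # Wald's MiniMax alone cannot separate A from B here; MINthenMAX can.
    A = [2, 5]   # worst case 2, best case 5
    B = [2, 3]   # worst case 2, best case 3
    print(min_then_max_prefers(A, B))  # True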
In many real-life scenarios, a group of agents needs to agree on a common action, e.g., on the location for a public facility, while there is some consistency between their preferences, e.g., all preferences are derived from a common metric space. The facility location problem models such scenarios, and it is a well-studied problem in social choice. We study mechanisms for facility location on graphs that are resistant to manipulations (strategy-proof, abstention-proof, and false-name-proof) by both individuals and coalitions, and that are efficient (Pareto optimal). We define a family of graphs, ZV-line graphs, and show a general facility location mechanism for these graphs that satisfies all these desired properties. Moreover, we show that this mechanism can be computed in polynomial time, that it is anonymous, and that it can equivalently be defined as the first Pareto optimal location according to some predefined order. Our main result, the ZV-line graphs family and the mechanism we present for it, unifies the few current works in the literature on false-name-proof facility location on discrete graphs, including all the preliminary (unpublished) works we are aware of. Finally, we discuss some generalizations and limitations of our result for facility location problems on other structures.
The prophet and secretary problems demonstrate online scenarios involving optimal stopping theory. In a typical prophet or secretary problem, selection decisions are assumed to be immediate and irrevocable. However, many online settings accommodate some degree of revocability. To study such scenarios, we introduce the $\ell$-out-of-$k$ setting, where the decision maker can select up to $k$ elements immediately and irrevocably, but her performance is measured by the top $\ell$ elements in the selected set. Equivalently, the decision maker can hold up to $\ell$ elements at any given point in time, but can make up to $k-\ell$ returns as new elements arrive.
We give upper and lower bounds on the competitive ratio of $\ell$-out-of-$k$ prophet and secretary scenarios. These include a single-sample prophet algorithm that gives a competitive ratio of $1-\ell\cdot e^{-\Theta\left(\frac{\left(k-\ell\right)^2}{k}\right)}$, which is asymptotically tight for $k-\ell=\Theta(\ell)$. For secretary settings, we devise an algorithm that obtains a competitive ratio of $1-\ell e^{-\frac{k-8\ell}{2+2\ln \ell}} - e^{-k/6}$, and show that no secretary algorithm obtains a better ratio than $1-e^{-k}$ (up to negligible terms). In passing, our results lead to an improvement of the results of Assaf et al. [2000] for $1$-out-of-$k$ prophet scenarios.
Beyond the contribution to online algorithms and optimal stopping theory, our results have implications for mechanism design. In particular, we use our prophet algorithms to derive {\em overbooking} mechanisms with good welfare and revenue guarantees; these are mechanisms that sell more items than the seller's capacity, then allocate to the agents with the highest values among the selected agents.
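A hypothetical toy version of such an overbooking mechanism; the posted price, the payment rule, and all names below are our own illustrative assumptions, not the paper's construction:

    def overbooking_mechanism(bids, price, k, l):
        # Sell up to k "reservations" at a posted price even though only
        # l identical items exist; afterwards, serve the l reservation
        # holders with the highest values.
        holders = []
        for agent, bid in enumerate(bids):  # agents arrive online
            if bid >= price and len(holders) < k:
                holders.append((bid, agent))
        winners = sorted(holders, reverse=True)[:l]
        welfare = sum(bid for bid, _ in winners)
        revenue = price * len(winners)  # only served agents pay, for simplicity
        return [agent for _, agent in winners], welfare, revenue

    print(overbooking_mechanism([3, 9, 1, 7, 8, 2], price=2.5, k=4, l=2))
    # -> ([1, 4], 17, 5.0): four reservations sold, the two highest served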
This work deals with the implementation of social choice rules using dominant strategies for unrestricted preferences. The seminal Gibbard-Satterthwaite theorem shows that only few unappealing social choice rules can be implemented unless we assume some restrictions on the preferences or allow monetary transfers. When monetary transfers are allowed and quasi-linear utilities w.r.t. money are assumed, Vickrey-Clarke-Groves (VCG) mechanisms were shown to implement any affine-maximizer, and by the work of Roberts, only affine-maximizers can be implemented whenever the type sets of the agents are rich enough. In this work, we generalize these results and define a new class of preferences: preferences which are positive-represented by a quasi-linear utility. That is, the agent's preference on a subspace of the outcomes can be modeled using a quasi-linear utility. We show that the characterization of VCG mechanisms as the incentive-compatible mechanisms extends naturally to this domain. Our result follows from a simple reduction to the characterization of VCG mechanisms. Hence, we see our result more as a fuller, more correct version of the VCG characterization. This work also highlights a common misconception in the community that attributes the VCG result to the use of transferable utility. Our result shows that the incentive-compatibility of the VCG mechanisms does not rely on money being a common denominator, but rather on the ability of the designer to fine the agents on a continuous (possibly agent-specific) scale. We believe these two insights, considering the utility as a representation and not as the preference itself (which is common in the economic community), and considering utilities which represent the preference only on the relevant domain, will turn out to be fruitful in other domains as well.
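For context, a minimal sketch of the classical quasi-linear VCG (Clarke pivot) mechanism whose characterization the abstract extends; the encoding of outcomes and valuations is our own:

    def vcg(outcomes, valuations):
        # Classical quasi-linear VCG (Clarke pivot): valuations[i][o] is
        # agent i's value for outcome o. Pick a welfare-maximizing outcome
        # and charge each agent the externality she imposes on the others.
        def best(excluded=None):
            return max(outcomes, key=lambda o: sum(
                v[o] for i, v in enumerate(valuations) if i != excluded))
        chosen = best()
        payments = []
        for i in range(len(valuations)):
            alt = best(excluded=i)  # welfare-maximizing outcome without agent i
            others_alt = sum(v[alt] for j, v in enumerate(valuations) if j != i)
            others_now = sum(v[chosen] for j, v in enumerate(valuations) if j != i)
            payments.append(others_alt - others_now)  # agent i's externality
        return chosen, payments

    # Two agents, two outcomes: agent 0 prefers "A", agent 1 prefers "B".
    print(vcg(["A", "B"], [{"A": 3, "B": 0}, {"A": 0, "B": 2}]))  # ('A', [2, 0])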
We examine the evolutionary basis for risk aversion with respect to aggregate risk. We study populations in which agents face choices between aggregate risk and idiosyncratic risk. We show that the choices that maximize the long-run growth rate are induced by a heterogeneous population in which the least and most risk-averse agents are indifferent between aggregate risk and obtaining its linear and harmonic mean for sure, respectively. Moreover, approximately optimal behavior can be induced by a simple distribution according to which all agents have constant relative risk aversion, and the coefficient of relative risk aversion is uniformly distributed between zero and two.
We examine the evolutionary basis for risk aversion with respect to aggregate risk. We study populations in which agents face choices between alternatives with different levels of aggregate risk. We show that the choices that maximize the long-run growth rate are induced by a heterogeneous population in which the least and most risk-averse agents are indifferent between facing an aggregate risk and obtaining its linear and harmonic mean for sure, respectively. Moreover, approximately optimal behavior can be induced by a simple distribution according to which all agents have constant relative risk aversion, and the coefficient of relative risk aversion is uniformly distributed between zero and two.
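The two endpoints correspond exactly to the linear and harmonic means via the standard certainty-equivalent computation for CRRA utilities; we add this textbook derivation for context. A CRRA agent with coefficient $\rho$ has utility $u_\rho(x)=\frac{x^{1-\rho}}{1-\rho}$ (with $u_1(x)=\ln x$), and her certainty equivalent for a risky payoff $X$ solves $u_\rho(\mathrm{CE}_\rho)=E[u_\rho(X)]$. For $\rho=0$, $u_0(x)=x$ and so $\mathrm{CE}_0=E[X]$, the linear mean; for $\rho=2$, $u_2(x)=-1/x$, so $-1/\mathrm{CE}_2=E[-1/X]$ and $\mathrm{CE}_2=\left(E[1/X]\right)^{-1}$, the harmonic mean. The agents in the stated range are thus exactly those whose certainty equivalents for an aggregate risk lie between its harmonic and linear means.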
Our understanding of risk preferences can be sharpened by considering their evolutionary basis. The existing literature has focused on two sources of risk: idiosyncratic risk and aggregate risk. We introduce a new source of risk—heritable risk—in which there is a positive correlation between the fitness of a newborn agent and the fitness of her parent. Heritable risk was plausibly common in our evolutionary past and it leads to a strictly higher growth rate than the other sources of risk. We show that the presence of heritable risk in the evolutionary past may explain the tendency of people to exhibit skewness loving today.
If a population is growing in a randomly varying environment, such that the finite rate of increase per generation is a random variable with no serial autocorrelation, the logarithm of population size at any time t is normally distributed. Even though the expectation of population size may grow infinitely large with time, the probability of extinction may approach unity, owing to the difference between the geometric and arithmetic mean growth rates.
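A small simulation with illustrative parameters of our choosing makes the gap between the arithmetic and geometric mean growth rates concrete: a per-generation growth factor of 1.6 or 0.5 with equal probability has arithmetic mean 1.05, so expected population size grows without bound, while the geometric mean is $\sqrt{0.8}<1$, so almost every trajectory shrinks:

    import math, random, statistics

    random.seed(1)
    T, trials = 200, 10000
    finals = []
    for _ in range(trials):
        log_n = 0.0  # log population size, N_0 = 1
        for _ in range(T):
            # growth factor 1.6 or 0.5 with equal probability:
            # arithmetic mean 1.05 > 1, geometric mean sqrt(0.8) < 1
            log_n += math.log(random.choice([1.6, 0.5]))
        finals.append(log_n)

    # log N_T is approximately normal; its mean drifts downward even
    # though E[N_T] = 1.05**T grows without bound.
    print(statistics.mean(finals) / T)          # ~ 0.5*ln(0.8) < 0
    print(sum(f < 0 for f in finals) / trials)  # fraction of shrinking runs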
Develops a structuralist understanding of mathematics, as an alternative to set‐ or type‐theoretic foundations, that respects classical mathematical truth while minimizing Platonist commitments to abstract entities. Modal logic is combined with notions of part/whole (mereology) enabling a systematic interpretation of ordinary mathematical statements as asserting what would be the case in any (suitable) structure there (logically) might be, e.g. for number theory, functional analysis, algebra, pure geometry, etc. Structures are understood as comprising objects, whatever their nature, standing in suitable relations as given by axioms or defining conditions in mathematics proper. The characterization of structures is aided by the addition of plural quantifiers, e.g. ‘Any objects of sort F’ corresponding to arbitrary collections of Fs, achieving the expressive power of second‐order logic, hence a full logic of relations. (See the author's ‘Structuralism without Structures’, Philosophia Mathematica 4 (1996): 100–123.) Claims of absolute existence of structures are replaced by claims of (logical) possibility of enough structurally interrelated objects (modal‐existence postulates). The vast bulk of ordinary mathematics, and scientific applications, can thus be recovered on the basis of the possibility of a countable infinity of atoms. As applied to set theory itself, these ideas lead to a ‘many worlds’ view, as opposed to the standard ‘fixed universe’ view, inspired by Zermelo (1930), respecting the unrestricted, indefinite extendability of models of the Zermelo–Fraenkel axioms. Natural motivation for (‘small’) large cardinal axioms is thus provided. In sum, the vast bulk of abstract mathematics is respected as objective, while literal reference to abstracta and related problems with Platonism are eliminated.
The well-known Impossibility Theorem of Arrow asserts that any generalized social welfare function (GSWF) with at least three alternatives, which satisfies Independence of Irrelevant Alternatives (IIA) and Unanimity and is not a dictatorship, is necessarily non-transitive. In 2002, Kalai asked whether one can obtain the following quantitative version of the theorem: For any $\epsilon>0$, there exists $\delta=\delta(\epsilon)$ such that if a GSWF on three alternatives satisfies the IIA condition and its probability of non-transitive outcome is at most $\delta$, then the GSWF is at most $\epsilon$-far from being a dictatorship or from breaching the Unanimity condition. In 2009, Mossel proved such a quantitative version, with $\delta(\epsilon)=\exp(-C/\epsilon^{21})$, and generalized it to GSWFs with $k$ alternatives, for all $k \geq 3$. In this paper we show that the quantitative version holds with $\delta(\epsilon)=C \cdot \epsilon^3$, and that this result is tight up to logarithmic factors. Furthermore, our result (like Mossel's) generalizes to GSWFs with $k$ alternatives. Our proof is based on the works of Kalai and Mossel, but uses also an additional ingredient: a combination of the Bonami-Beckner hypercontractive inequality with a reverse hypercontractive inequality due to Borell, applied to find simultaneously upper bounds and lower bounds on the noise correlation between Boolean functions on the discrete cube.
Let $X_i \geq 0$ be independent, $i = 1, \cdots, n$, and $X^\ast_n = \max(X_1, \cdots, X_n)$. Let $t(c)$ (respectively, $s(c)$) be the threshold stopping rule for $X_1, \cdots, X_n$, defined by $t(c) = $ smallest $i$ for which $X_i \geq c$ (respectively, $s(c) = $ smallest $i$ for which $X_i > c$), and $= n$ otherwise. Let $m$ be a median of the distribution of $X^\ast_n$. It is shown that for every $n$ and $\underline{X}$ either $EX^\ast_n \leq 2EX_{t(m)}$ or $EX^\ast_n \leq 2EX_{s(m)}$. This improves previously known results, [1], [4]. Some results for i.i.d. $X_i$ are also included.
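A quick simulation of the median-threshold rule $t(m)$, with uniform distributions chosen purely for illustration, shows the factor-2 guarantee:

    import random

    def stop_at_threshold(xs, c):
        # t(c): stop at the first X_i >= c; take X_n if none qualifies.
        for x in xs:
            if x >= c:
                return x
        return xs[-1]

    random.seed(2)
    n, trials = 20, 20000
    # Median m of X*_n for i.i.d. Uniform(0,1): solves m**n = 1/2.
    m = 0.5 ** (1 / n)
    prophet, mortal = 0.0, 0.0
    for _ in range(trials):
        xs = [random.random() for _ in range(n)]
        prophet += max(xs)
        mortal += stop_at_threshold(xs, m)
    print(prophet / mortal)  # at most 2, per the theorem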
Without monetary payments, the Gibbard-Satterthwaite theorem proves that under mild requirements all truthful social choice mechanisms must be dictatorships. When payments are allowed, the Vickrey-Clarke-Groves (VCG) mechanism implements the value-maximizing choice, and has many other good properties: it is strategy-proof, onto, deterministic, individually rational, and does not make positive transfers to the agents. By Roberts' theorem, with three or more alternatives, the weighted VCG mechanisms are essentially unique for domains with quasi-linear utilities. The goal of this paper is to characterize domains of non-quasi-linear utilities where "reasonable" mechanisms (with VCG-like properties) exist. Our main result is a tight characterization of the maximal non-quasi-linear utility domain, which we call the largest parallel domain. We extend Roberts' theorem to parallel domains, and use the generalized theorem to prove two impossibility results. First, any reasonable mechanism must be dictatorial when the type domain is quasi-linear together with any single non-parallel type. Second, for richer utility domains that still differ very slightly from quasi-linearity, every strategy-proof, onto and deterministic mechanism must be a dictatorship.
For a number of problems in the theory of online algorithms, it is known that the assumption that elements arrive in uniformly random order enables the design of algorithms with much better performance guarantees than under worst-case assumptions. The quintessential example of this phenomenon is the secretary problem, in which an algorithm attempts to stop a sequence at the moment it observes the maximum value in the sequence. As is well known, if the sequence is presented in uniformly random order there is an algorithm that succeeds with probability 1/e, whereas no non-trivial performance guarantee is possible if the elements arrive in worst-case order.
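For reference, the classical rule achieving the 1/e guarantee is short enough to state in code (a standard sketch, not specific to this paper): observe the first $\lfloor n/e \rfloor$ values without stopping, then accept the first value that beats everything seen so far.

    import math, random

    def secretary_rule(xs):
        # Observe the first floor(n/e) values, then accept the first value
        # exceeding all of them; take the last value if none does.
        n = len(xs)
        cutoff = int(n / math.e)
        best_seen = max(xs[:cutoff], default=float("-inf"))
        for x in xs[cutoff:]:
            if x > best_seen:
                return x
        return xs[-1]

    random.seed(3)
    n, trials = 100, 20000
    wins = 0
    for _ in range(trials):
        xs = random.sample(range(10**6), n)  # distinct values, random order
        wins += secretary_rule(xs) == max(xs)
    print(wins / trials)  # approaches 1/e ≈ 0.368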
We prove that Bimatrix, the problem of finding a Nash equilibrium in a two-player game, is complete for the complexity class PPAD (Polynomial Parity Argument, Directed version) introduced by Papadimitriou in 1991. Our result, building upon the work of Daskalakis et al. [2006a] on the complexity of four-player Nash equilibria, settles a long-standing open problem in algorithmic game theory. It also serves as a starting point for a series of results concerning the complexity of two-player Nash equilibria. In particular, we prove the following theorems: (i) Bimatrix does not have a fully polynomial-time approximation scheme unless every problem in PPAD is solvable in polynomial time; (ii) the smoothed complexity of the classic Lemke-Howson algorithm and, in fact, of any algorithm for Bimatrix is not polynomial unless every problem in PPAD is solvable in randomized polynomial time. Our results also have a complexity implication in mathematical economics: Arrow-Debreu market equilibria are PPAD-hard to compute.
The secretary and the prophet inequality problems are central to the field of Stopping Theory. Recently, there has been a lot of work in generalizing these models to multiple items because of their applications in mechanism design. The most important of these generalizations are to matroids and to combinatorial auctions (which extend bipartite matching). Kleinberg-Weinberg [33] and Feldman et al. [17] show that for adversarial arrival order of random variables the optimal prophet inequalities give a 1/2-approximation. For many settings, however, it is conceivable that the arrival order is chosen uniformly at random, akin to the secretary problem. For such a random arrival model, we improve upon the 1/2-approximation and obtain (1 - 1/e)-approximation prophet inequalities for both matroids and combinatorial auctions. This also gives improvements to the results of Yan [45] and Esfandiari et al. [15], who worked in the special cases where we can fully control the arrival order or where there is only a single item. Our techniques are threshold based. We convert our discrete problem into a continuous setting and then give a generic template on how to dynamically adjust these thresholds to lower bound the expected total welfare.
Algorithmic pricing is the computational problem that sellers (e.g., in supermarkets) face when trying to set prices for their items to maximize their profit in the presence of a known demand. Guruswami et al. (SODA, 2005) proposed this problem and gave logarithmic approximations (in the number of consumers) for the unit-demand and single-parameter cases where there is a specific set of consumers and their valuations for bundles are known precisely. Subsequently several versions of the problem have been shown to have poly-logarithmic inapproximability. This problem has direct ties to the important open question of better understanding the Bayesian optimal mechanism in multi-parameter agent settings; however, for this purpose approximation factors logarithmic in the number of agents are inadequate. It is therefore of vital interest to consider special cases where constant approximations are possible. We consider the unit-demand variant of this pricing problem. Here a consumer has a valuation for each different item, and their value for a set of items is simply the maximum value they have for any item in the set. Instead of considering a set of consumers with precisely known preferences, like the prior algorithmic pricing literature, we assume that the preferences of the consumers are drawn from a distribution. This is the standard assumption in economics; furthermore, the setting of a specific set of customers with specific preferences, which is employed in all of the prior work in algorithmic pricing, is a special case of this general Bayesian pricing problem, where there is a discrete Bayesian distribution for preferences specified by picking one consumer uniformly from the given set of consumers. Notice that the distribution over the valuations for the individual items that this generates is obviously correlated. Our work complements these existing works by considering the case where the consumer's valuations for the different items are independent random variables. Our main result is a constant approximation algorithm for this problem that makes use of an interesting connection between this problem and the concept of virtual valuations from the single-parameter Bayesian optimal mechanism design literature.
We present a general framework for approximately reducing the mechanism design problem for multiple agents to single-agent subproblems in the context of Bayesian combinatorial auctions. Our framework can be applied to any setting which roughly satisfies the following assumptions: (i) the agents' types are distributed independently (not necessarily identically), (ii) the objective function is additively separable over the agents, and (iii) there are no inter-agent constraints except for the supply constraints (i.e., that the total allocation of each item should not exceed the supply). Our framework is general in the sense that it makes no direct assumption about the agents' valuations, type distributions, or single-agent constraints (e.g., budget, incentive compatibility, etc.). We present two generic multiagent mechanisms which use single-agent mechanisms as black boxes. If an $\alpha$-approximate single-agent mechanism is available for each agent, and assuming no agent ever demands more than $\frac{1}{k}$ of all units of each item, our generic multiagent mechanisms are $\gamma_{k}\alpha$-approximations of the optimal multiagent mechanism, where $\gamma_{k}$ is a constant which is at least $1-\frac{1}{\sqrt{k+3}}$. As a byproduct of our construction, we present a generalization of prophet inequalities where both gambler and prophet are allowed to pick $k$ numbers each to receive a reward equal to their sum. Finally, we use our framework to obtain multiagent mechanisms with improved approximation factor for several settings from the literature.
We study mechanisms for candidate selection that seek to minimize the social cost, where voters and candidates are associated with points in some underlying metric space. The social cost of a candidate is the sum of its distances to each voter. Some of our work assumes that these points can be modeled on the real line, but other results of ours are more general.
We present a general framework for stochastic online maximization problems with combinatorial feasibility constraints. The framework establishes prophet inequalities by constructing price-based online approximation algorithms, a natural extension of threshold algorithms for settings beyond binary selection. Our analysis takes the form of an extension theorem: we derive sufficient conditions on prices when all weights are known in advance, then prove that the resulting approximation guarantees extend directly to stochastic settings. Our framework unifies and simplifies much of the existing literature on prophet inequalities and posted price mechanisms, and is used to derive new and improved results for combinatorial markets (with and without complements), multi-dimensional matroids, and sparse packing problems. Finally, we highlight a surprising connection between the smoothness framework for bounding the price of anarchy of mechanisms and our framework, and show that many smooth mechanisms can be recast as posted price mechanisms with comparable performance guarantees.
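As a toy instance of the price-based template, consider the classical single-item example: posting the static price $p = \frac{1}{2}E[\max_i v_i]$ guarantees half the prophet's welfare. The distributions and the sampling-based price estimate below are our own illustration, not the paper's general construction:

    import random, statistics

    def posted_price_single_item(values, price):
        # Sell one item at a fixed posted price: the first buyer whose
        # value meets the price takes it; welfare is that buyer's value.
        for v in values:
            if v >= price:
                return v
        return 0.0

    random.seed(4)
    n, trials = 10, 20000
    draw = lambda: [random.expovariate(1.0) for _ in range(n)]
    est_max = statistics.mean(max(draw()) for _ in range(trials))
    price = est_max / 2  # the classic "balanced" single-item price
    welfare = statistics.mean(posted_price_single_item(draw(), price)
                              for _ in range(trials))
    print(welfare / est_max)  # the prophet bound guarantees at least 1/2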
Let $X_i \geq 0$ be independent, $i = 1,\ldots,n$, with known distributions, and let $X_n^* = \max(X_1,\ldots,X_n)$. The classical ‘ratio prophet inequality’ compares the return to a prophet, which is $EX_n^*$, to that of a mortal, who observes the $X_i$'s sequentially and must resort to a stopping rule $t$. The mortal's return is $V(X_1,\ldots,X_n) = \max EX_t$, where the maximum is over all stopping rules. The classical inequality states that $EX_n^* < 2V(X_1,\ldots,X_n)$. In the present paper the mortal is given $k \geq 1$ chances to choose. If he uses stopping rules $t_1,\ldots,t_k$ his return is $E(\max(X_{t_1},\ldots,X_{t_k}))$. Let $t(b)$ be the ‘simple threshold stopping rule’ defined to be the smallest $i$ for which $X_i \geq b$, or $n$ if there is no such $i$. We show that there always exists a proper choice of $k$ thresholds, such that $EX_n^* \leq ((k+1)/k)E(\max(X_{t_1},\ldots,X_{t_k}))$, where $t_i$ is of the form $t(b_i)$ with some added randomization. Actually the thresholds can be taken to be the $j/(k+1)$ percentile points of the distribution of $X_n^*$, $j = 1,\ldots,k$, and hence only knowledge of the distribution of $X_n^*$ is needed.
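A sketch of the quantile-threshold construction, using empirical quantiles and omitting the added randomization the theorem allows, so the simulated ratio may deviate slightly from the exact $(k+1)/k$ guarantee:

    import random, statistics

    def k_threshold_return(xs, thresholds):
        # Run k simple threshold rules t(b_1),...,t(b_k) on one sequence
        # and return the best value among their k picks.
        picks = []
        for b in thresholds:
            picks.append(next((x for x in xs if x >= b), xs[-1]))  # t(b)
        return max(picks)

    random.seed(5)
    n, k, trials = 20, 3, 20000
    draw = lambda: [random.expovariate(1.0) for _ in range(n)]
    # Empirical j/(k+1) percentile points of X*_n, j = 1..k.
    maxima = sorted(max(draw()) for _ in range(trials))
    thresholds = [maxima[int(j * trials / (k + 1))] for j in range(1, k + 1)]
    prophet = statistics.mean(maxima)
    mortal = statistics.mean(k_threshold_return(draw(), thresholds)
                             for _ in range(trials))
    print(prophet / mortal)  # close to (k+1)/k = 4/3 at most, per the theorem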
It is proved that Groves’ scheme is unique on restricted domains which are smoothly connected, in particular convex domains. This generalizes earlier uniqueness results by Green and Laffont and Walker. An example shows that uniqueness may be lost if the domain is not smoothly connected.