
All published works (29)

We give new bounds for the single-nomination model of impartial selection, a problem proposed by Holzman and Moulin (Econometrica, 2013). A selection mechanism, which may be randomized, selects one individual from a group of n based on nominations among members of the group; a mechanism is impartial if the selection of an individual is independent of nominations cast by that individual, and α-optimal if under any circumstance the expected number of nominations received by the selected individual is at least α times that received by any individual. In a many-nominations model, where individuals may cast an arbitrary number of nominations, the so-called permutation mechanism is 1/2-optimal, and this is best possible. In the single-nomination model, where each individual casts exactly one nomination, the permutation mechanism does better and prior to this work was known to be 67/108-optimal but no better than 2/3-optimal. We show that it is in fact 2/3-optimal for all n. This result is obtained via tight bounds on the performance of the mechanism for graphs with maximum degree Δ, for any Δ, which we prove using an adversarial argument. We then show that the permutation mechanism is not best possible; indeed, by combining the permutation mechanism, another mechanism called plurality with runner-up, and some new ideas, 2105/3147-optimality can be achieved for all n. We finally give new upper bounds on α for any α-optimal impartial mechanism. They improve on the existing upper bounds for all n ≥ 7 and imply that no impartial mechanism can be better than 76/105-optimal for all n; they do not preclude the existence of a (3/4 − ε)-optimal impartial mechanism for arbitrary ε > 0 if n is large.
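For orientation, the constants in this abstract compare as follows, where $\alpha^*$ denotes the best ratio achievable by an impartial mechanism for all $n$:

\[ \tfrac{67}{108} \approx 0.620 \;<\; \tfrac{2}{3} \approx 0.667 \;<\; \tfrac{2105}{3147} \approx 0.669 \;\leq\; \alpha^* \;\leq\; \tfrac{76}{105} \approx 0.724 \;<\; \tfrac{3}{4}. \]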
We consider the classical model of sponsored search due to Edelman et al. and Varian and examine how robust standard position auctions are to a misspecification of the position-dependent quality factors used by this model. We show that under both complete and incomplete information a nontruthful position auction admits an efficient equilibrium for a strictly broader range of parameter values than the Vickrey-Clarke-Groves (VCG) mechanism, which would be truthful if the parameters were specified correctly. Our result for complete information concerns the generalized second-price (GSP) mechanism and is driven by a detailed understanding of the Nash equilibrium polytopes of the VCG mechanism and the GSP mechanism. Our result for incomplete information concerns the generalized first-price (GFP) mechanism and uses a surprising connection between the unique candidate equilibrium bidding functions of the VCG mechanism and the GFP mechanism. Funding: F. Fischer was supported by the Engineering and Physical Sciences Research Council [Grant EP/T015187/1] and Einstein Stiftung Berlin.
We study functions that produce a ranking of $n$ individuals from $n$ such rankings and are impartial in the sense that the position of an individual in the output ranking does not depend on the input ranking submitted by that individual. When $n \geq 4$, two properties concerning the quality of the output in relation to the input can be achieved in addition to impartiality: individual full rank, which requires that each individual can appear in any position of the output ranking; and monotonicity, which requires that an individual cannot move down in the output ranking if it moves up in an input ranking. When $n \geq 5$, monotonicity can be dropped to strengthen individual full rank to weak unanimity, requiring that a ranking submitted by every individual must be chosen as the output ranking. Mechanisms achieving these results can be implemented in polynomial time. Both results are best possible in terms of their dependence on $n$. The second result cannot be strengthened further to a notion of unanimity that requires agreement on pairwise comparisons to be preserved.
Impartial selection is the selection of an individual from a group based on nominations by other members of the group, in such a way that individuals cannot influence their own chance of selection. We give a deterministic mechanism with an additive performance guarantee of $O(n^{(1+\kappa)/2})$ in a setting with $n$ individuals where each individual casts $O(n^\kappa)$ nominations, where $\kappa\in[0,1]$. For $\kappa=0$, i.e. when each individual casts at most a constant number of nominations, this bound is $O(\sqrt{n})$. This matches the best-known guarantee for randomized mechanisms and a single nomination. For $\kappa=1$ the bound is $O(n)$. This is trivial, as even a mechanism that never selects provides an additive guarantee of $n-1$. We show, however, that it is also best possible: for every deterministic impartial mechanism there exists a situation in which some individual is nominated by every other individual and the mechanism either does not select or selects an individual not nominated by anyone.
A prophet inequality states, for some α ∈ [0, 1], that the expected value achievable by a gambler who sequentially observes random variables X1, ..., Xn and selects one of them is at least an α fraction of the maximum value in the sequence. We obtain three distinct improvements for a setting that was first studied by Correa et al. (EC, 2019) and is particularly relevant to modern applications in algorithmic pricing. In this setting, the random variables are i.i.d. from an unknown distribution and the gambler has access to an additional βn samples for some β ≥ 0. We first give improved lower bounds on α for a wide range of values of β; specifically, α ≥ (1 + β)/e when β ≤ 1/(e − 1), which is tight, and α ≥ 0.648 when β = 1, which improves on a bound of around 0.635 due to Correa et al. (SODA, 2020). Adding to their practical appeal, specifically in the context of algorithmic pricing, we then show that the new bounds can be obtained even in a streaming model of computation and thus in situations where the use of relevant data is complicated by the sheer amount of data available. We finally establish that the upper bound of 1/e for the case without samples is robust to additional information about the distribution, and applies also to sequences of i.i.d. random variables whose distribution is itself drawn, according to a known distribution, from a finite set of known candidate distributions. This implies a tight prophet inequality for exchangeable sequences of random variables, answering a question of Hill and Kertz (Contemporary Mathematics, 1992), but leaves open the possibility of better guarantees when the number of candidate distributions is small, a setting we believe is of strong interest to applications.
A prophet inequality states, for some $α\in[0,1]$, that the expected value achievable by a gambler who sequentially observes random variables $X_1,\dots,X_n$ and selects one of them is at least an $α$ fraction of the maximum value in the sequence. We obtain three distinct improvements for a setting that was first studied by Correa et al. (EC, 2019) and is particularly relevant to modern applications in algorithmic pricing. In this setting, the random variables are i.i.d. from an unknown distribution and the gambler has access to an additional $βn$ samples for some $β\geq 0$. We first give improved lower bounds on $α$ for a wide range of values of $β$; specifically, $α\geq(1+β)/e$ when $β\leq 1/(e-1)$, which is tight, and $α\geq 0.648$ when $β=1$, which improves on a bound of around $0.635$ due to Correa et al. (SODA, 2020). Adding to their practical appeal, specifically in the context of algorithmic pricing, we then show that the new bounds can be obtained even in a streaming model of computation and thus in situations where the use of relevant data is complicated by the sheer amount of data available. We finally establish that the upper bound of $1/e$ for the case without samples is robust to additional information about the distribution, and applies also to sequences of i.i.d. random variables whose distribution is itself drawn, according to a known distribution, from a finite set of known candidate distributions. This implies a tight prophet inequality for exchangeable sequences of random variables, answering a question of Hill and Kertz (Contemporary Mathematics, 1992), but leaves open the possibility of better guarantees when the number of candidate distributions is small, a setting we believe is of strong interest to applications.
A central object in optimal stopping theory is the single-choice prophet inequality for independent, identically distributed random variables: given a sequence of random variables X1, ..., Xn drawn independently from a distribution F, the goal is to choose a stopping time τ so as to maximize α such that for all distributions F we have E[X_τ] ≥ α·E[max_t X_t]. What makes this problem challenging is that the decision whether τ=t may only depend on the values of the random variables X1, ..., Xt and on the distribution F. For a long time the best known bound for the problem had been α ≥ 1−1/e ≈ 0.632, but quite recently a tight bound of α ≈ 0.745 was obtained. The case where F is unknown, such that the decision whether τ=t may depend only on the values of the random variables X1, ..., Xt, is equally well motivated but has received much less attention. A straightforward guarantee for this case of α ≥ 1/e ≈ 0.368 can be derived from the solution to the secretary problem, where an arbitrary set of values arrive in random order and the goal is to maximize the probability of selecting the largest value. We show that this bound is in fact tight. We then investigate the case where the stopping time may additionally depend on a limited number of samples from F, and show that even with o(n) samples α ≤ 1/e. On the other hand, n samples allow for a significant improvement, while O(n^2) samples are equivalent to knowledge of the distribution: specifically, with n samples α ≥ 1−1/e ≈ 0.632 and α ≤ ln(2) ≈ 0.693, and with O(n^2) samples α ≥ 0.745−ε for any ε > 0.
A central object in optimal stopping theory is the single-choice prophet inequality for independent, identically distributed random variables: Given a sequence of random variables $X_1,\dots,X_n$ drawn independently from a distribution $F$, the goal is to choose a stopping time $\tau$ so as to maximize $\alpha$ such that for all distributions $F$ we have $\mathbb{E}[X_\tau] \geq \alpha \cdot \mathbb{E}[\max_tX_t]$. What makes this problem challenging is that the decision whether $\tau=t$ may only depend on the values of the random variables $X_1,\dots,X_t$ and on the distribution $F$. For quite some time the best known bound for the problem was $\alpha\geq1-1/e\approx0.632$ [Hill and Kertz, 1982]. Only recently this bound was improved by Abolhassani et al. [2017], and a tight bound of $\alpha\approx0.745$ was obtained by Correa et al. [2017]. The case where $F$ is unknown, such that the decision whether $\tau=t$ may depend only on the values of the first $t$ random variables but not on $F$, is equally well motivated (e.g., [Azar et al., 2014]) but has received much less attention. A straightforward guarantee for this case of $\alpha\geq1/e\approx0.368$ can be derived from the solution to the secretary problem. Our main result is that this bound is tight. Motivated by this impossibility result we investigate the case where the stopping time may additionally depend on a limited number of samples from $F$. An extension of our main result shows that even with $o(n)$ samples $\alpha\leq 1/e$, so that the interesting case is the one with $\Omega(n)$ samples. Here we show that $n$ samples allow for a significant improvement over the secretary problem, while $O(n^2)$ samples are equivalent to knowledge of the distribution: specifically, with $n$ samples $\alpha\geq1-1/e\approx0.632$ and $\alpha\leq\ln(2)\approx0.693$, and with $O(n^2)$ samples $\alpha\geq0.745-\epsilon$ for any $\epsilon>0$.
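The 1/e guarantee above comes from the classical secretary rule: skip roughly a 1/e fraction of the values, then stop at the first value that beats everything seen so far. A minimal Monte Carlo sketch of that rule (the uniform distribution and parameters are arbitrary illustrative choices):

```python
import math
import random

def secretary_rule(values):
    """Classical 1/e rule: observe the first ~n/e values without stopping,
    then accept the first value that beats everything seen so far."""
    n = len(values)
    cutoff = max(1, round(n / math.e))
    best_seen = max(values[:cutoff])
    for x in values[cutoff:]:
        if x > best_seen:
            return x
    return values[-1]  # forced to accept the last value

# Monte Carlo estimate of E[X_tau] / E[max_t X_t] for i.i.d. uniform values.
rng = random.Random(0)
n, trials = 50, 100_000
picked = best = 0.0
for _ in range(trials):
    xs = [rng.random() for _ in range(n)]
    picked += secretary_rule(xs)
    best += max(xs)
print(picked / best)  # well above the worst-case guarantee of 1/e
```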
Ideally the properties of an economic mechanism should hold in a robust way across multiple equilibria and under varying assumptions regarding the information available to participants. Focusing on the design of robust position auctions we seek mechanisms that possess an efficient equilibrium, and guarantee high revenue in every efficient equilibrium, under both complete and incomplete information. A generalized first-price auction that is expressive in the sense of allowing multi-dimensional bids turns out to be the only standard design able to achieve this goal, even when valuations are one-dimensional. The equilibria under complete information are obtained via Bernheim and Whinston's profit-target strategies, those under incomplete information via an appropriate generalization thereof. Particularly interesting from a technical perspective is the incomplete-information case, where the standard technique for establishing equilibrium existence due to Myerson is generalized to a setting in which the bid space has higher dimension than the valuation space.
We consider the problem of selling a single item to one of $n$ bidders who arrive sequentially with values drawn independently from identical distributions, and ask how much more revenue can be obtained by posting discriminatory prices to individual bidders rather than the same anonymous price to all of them. The ratio between the maximum revenue from discriminatory pricing and that from anonymous pricing is at most $2-1/n$ for arbitrary distributions and at most $1/(1-(1-1/n)^n)\leq e/(e-1)\approx 1.582$ for regular distributions, and these bounds can in fact be obtained by using one of the discriminatory prices as an anonymous one. The bounds are shown via a relaxation of the discriminatory pricing problem rather than virtual values and thus apply to distributions without a density, and they are tight for all values of $n$. For a class of distributions that includes the uniform and the exponential distribution we show the maximization of revenue to be equivalent to the maximization of welfare with an additional bidder, in the sense that both use the same discriminatory prices. The problem of welfare maximization is the well-known Cayley-Moser problem, and this connection can be used to establish that the revenue gap between discriminatory and anonymous pricing is approximately $1.037$ for the uniform distribution and approximately $1.073$ for the exponential distribution.
We consider the problem of maximizing the expected revenue from selling $k$ homogeneous goods to $n$ unit-demand buyers who arrive sequentially with independent and identically distributed valuations. In this setting the optimal posted prices are dynamic in the sense that they depend on the remaining numbers of goods and buyers. We investigate how much revenue is lost when a single static price is used instead for all buyers and goods, and prove upper bounds on the ratio between the maximum revenue from dynamic prices and that from static prices. These bounds are tight for all values of $k$ and $n$ and vary depending on a regularity property of the underlying distribution. For general distributions we obtain a ratio of $2-k/n$, for regular distributions a ratio that increases in $n$ and is bounded from above by $1/(1-k^k/(e^{k}k!))$, which is roughly $1/(1-1/(\sqrt{2\pi k}))$. The lower bounds hold for the revenue gap between dynamic and static prices. The upper bounds are obtained via an ex-ante relaxation of the revenue maximization problem, as a consequence the tight bounds of $2-k/n$ in the general case and of $1/(1-1/(\sqrt{2\pi k}))$ in the regular case apply also to the potentially larger revenue gap between the optimal incentive-compatible mechanism and the optimal static price. Our results imply that for regular distributions the benefit of dynamic prices vanishes while for non-regular distributions dynamic prices may achieve up to twice the revenue of static prices.
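For a concrete instance of the dynamic-versus-static comparison, here is a small sketch for the special case $k=1$ and values uniform on $[0,1]$, assuming the standard sequential posted-price dynamics; the near-1 ratios it prints are consistent with the abstract's conclusion that the benefit of dynamic prices vanishes for regular distributions:

```python
def dynamic_revenue(n):
    """Optimal revenue V_t from backward induction with t buyers left:
    V_t = max_p [ p(1-F(p)) + F(p) V_{t-1} ]; for F(p) = p on [0,1]
    the maximizer is p* = (1 + V_{t-1}) / 2."""
    V = 0.0
    for _ in range(n):
        p = (1.0 + V) / 2.0
        V = p * (1.0 - p) + p * V
    return V

def best_static_revenue(n, grid=100_000):
    """A single static price p sells to the first of n buyers with value
    >= p, so expected revenue is p(1 - p^n); maximize over a fine grid."""
    return max((i / grid) * (1.0 - (i / grid) ** n) for i in range(1, grid))

for n in (2, 5, 20):
    dyn, sta = dynamic_revenue(n), best_static_revenue(n)
    print(n, round(dyn, 4), round(sta, 4), round(dyn / sta, 4))
```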
In mechanism design it is typical to impose incentive compatibility and then derive an optimal mechanism subject to this constraint. By replacing the incentive compatibility requirement with the goal of minimizing expected ex post regret, we are able to adapt statistical machine learning techniques to the design of payment rules. This computational approach to mechanism design is applicable to domains with multi-dimensional types and situations where computational efficiency is a concern. Specifically, given an outcome rule and access to a type distribution, we train a support vector machine with a specific structure imposed on the discriminant function, such that it implicitly learns a corresponding payment rule with desirable incentive properties. We extend the framework to adopt succinct k-wise dependent valuations, leveraging a connection with maximum a posteriori assignment on Markov networks to enable training to scale up to settings with a large number of items; we evaluate this construction in the case where k=2. We present applications to multiparameter combinatorial auctions with approximate winner determination, and the assignment problem with an egalitarian outcome rule. Experimental results demonstrate that the construction produces payment rules with low ex post regret, and that penalizing classification error is effective in preventing failures of ex post individual rationality.
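As a toy illustration of the ex post regret objective (not the paper's SVM construction), the following sketch measures the utility an agent could gain by misreporting, holding the other reports fixed; the single-item setting, grid search, and first-price payment rule are illustrative assumptions:

```python
def ex_post_regret(i, values, bids, allocate, pay, bid_grid):
    """Ex post regret of agent i when reporting truthfully: the largest
    utility gain from any misreport, with the other bids held fixed.
    Losers are assumed to pay nothing (an illustrative convention)."""
    def utility(profile):
        winner = allocate(profile)
        return values[i] - pay(profile) if winner == i else 0.0
    truthful = utility(bids)
    best_deviation = truthful
    for b in bid_grid:
        profile = list(bids)
        profile[i] = b
        best_deviation = max(best_deviation, utility(profile))
    return best_deviation - truthful

# Toy example: highest bid wins and pays its bid (first-price payments,
# which are far from incentive compatible, hence positive regret).
allocate = lambda bids: max(range(len(bids)), key=lambda j: bids[j])
pay = lambda bids: max(bids)
values = [0.9, 0.4, 0.2]
grid = [x / 100 for x in range(101)]
print(ex_post_regret(0, values, values, allocate, pay, grid))  # 0.5: bid shading pays
```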
We study the problem of selecting a member of a set of agents based on impartial nominations by agents from that set. The problem was studied previously by Alon et al. and by Holzman and Moulin and has important applications in situations where representatives are selected from within a group or where publishing or funding decisions are made based on a process of peer review. Our main result concerns a randomized mechanism that in expectation selects an agent with at least half the maximum number of nominations. Subject to impartiality, this is best possible.
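The 1/2-optimal mechanism of the abstract is more involved, but a simpler baseline from this line of work, commonly credited to Alon et al., conveys the idea of impartiality; the following sketch of the two-partition mechanism is impartial but only 1/4-optimal in expectation:

```python
import random

def two_partition_select(nominations, rng):
    """Two-partition mechanism: assign each agent to side A or B by a fair
    coin, then select the agent in B with the most nominations cast from A.
    Impartial, since no agent's own nominations can affect its own score:
    agents in A are never selected, and agents in B are scored only by A."""
    n = len(nominations)
    side = [rng.randrange(2) for _ in range(n)]
    B = [j for j in range(n) if side[j] == 1]
    if not B:
        return None  # degenerate coin flips: select nobody
    score = {j: sum(1 for i in range(n) if side[i] == 0 and j in nominations[i])
             for j in B}
    return max(B, key=lambda j: score[j])

# nominations[i] is the set of agents nominated by agent i.
noms = [{2}, {2}, {0}, {2}]
print(two_partition_select(noms, random.Random(1)))
```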
It is desirable for an economic mechanism that its properties hold in a robust way across multiple equilibria and under varying assumptions regarding the information available to the participants. In this paper we focus on the design of position auctions and seek mechanisms that guarantee high revenue in every efficient equilibrium under both complete and incomplete information. Our main result identifies a generalized first-price auction with multi-dimensional bids as the only standard design capable of achieving this goal, even though valuations are one-dimensional. The fact that expressiveness beyond the valuation space is necessary for robustness provides an interesting counterpoint to previous work, which has highlighted the benefits of simple bid spaces. From a technical perspective, our results are interesting because they establish equilibrium existence for a multi-dimensional bid space, where standard techniques for establishing equilibrium existence break down.
We study the problem of selecting a member of a set of agents based on impartial nominations by agents from that set. The problem was studied previously by Alon et al. and Holzman and Moulin and has important applications in situations where representatives are selected from within a group or where publishing or funding decisions are made based on a process of peer review. Our main result concerns a randomized mechanism that in expectation awards the prize to an agent with at least half the maximum number of nominations. Subject to impartiality, this is best possible.
Since economic mechanisms are often applied to very different instances of the same problem, it is desirable to identify mechanisms that work well in a wide range of circumstances. We pursue this goal for a position auction setting and specifically seek mechanisms that guarantee good outcomes under both complete and incomplete information. A variant of the generalized first-price mechanism with multi-dimensional bids turns out to be the only standard mechanism able to achieve this goal, even when types are one-dimensional. The fact that expressiveness beyond the type space is both necessary and sufficient for this kind of robustness provides an interesting counterpoint to previous work on position auctions that has highlighted the benefits of simplicity. From a technical perspective our results are interesting because they establish equilibrium existence for a multi-dimensional bid space, where standard techniques break down. The structure of the equilibrium bids moreover provides an intuitive explanation for why first-price payments may be able to support equilibria in a wider range of circumstances than second-price payments.
In mechanism design it is typical to impose incentive compatibility and then derive an optimal mechanism subject to this constraint. By replacing the incentive compatibility requirement with the goal of minimizing expected ex post regret, we are able to adapt statistical machine learning techniques to the design of payment rules. This computational approach to mechanism design is applicable to domains with multi-dimensional types and situations where computational efficiency is a concern. Specifically, given an outcome rule and access to a type distribution, we train a support vector machine with a special discriminant function structure such that it implicitly establishes a payment rule with desirable incentive properties. We discuss applications to a multi-minded combinatorial auction with a greedy winner-determination algorithm and to an assignment problem with egalitarian outcome rule. Experimental results demonstrate both that the construction produces payment rules with low ex post regret, and that penalizing classification errors is effective in preventing failures of ex post individual rationality.
A fundamental result in mechanism design theory, the so-called revelation principle, asserts that for many questions concerning the existence of mechanisms with a given outcome one can restrict attention to truthful direct-revelation mechanisms. In practice, however, many mechanisms use a restricted message space. This motivates the study of the tradeoffs involved in choosing simplified mechanisms, which can sometimes bring benefits in precluding bad or promoting good equilibria, and other times impose costs on welfare and revenue. We study the simplicity-expressiveness tradeoff in two representative settings, sponsored search auctions and combinatorial auctions, each being a canonical example for complete information and incomplete information analysis, respectively. We observe that the amount of information available to the agents plays an important role for the tradeoff between simplicity and expressiveness.
A fundamental result in mechanism design theory, the so-called revelation principle, asserts that for many questions concerning the existence of mechanisms with a given outcome one can restrict attention to truthful direct-revelation mechanisms. In practice, however, many mechanisms use a restricted message space. This motivates the study of the tradeoffs involved in choosing simplified mechanisms, which can sometimes bring benefits in precluding bad or promoting good equilibria, and other times impose costs on welfare and revenue. We study the simplicity-expressiveness tradeoff in two representative settings, sponsored search auctions and combinatorial auctions, each being a canonical example for complete information and incomplete information analysis, respectively. We observe that the amount of information available to the agents plays an important role for the tradeoff between simplicity and expressiveness.
We study computational problems arising from the iterated removal of weakly dominated actions in anonymous games. Our main result shows that it is NP-complete to decide whether an anonymous game with three actions can be solved via iterated weak dominance. The two-action case can be reformulated as a natural elimination problem on a matrix, the complexity of which turns out to be surprisingly difficult to characterize and ultimately remains open. We however establish connections to a matching problem along paths in a directed graph, which is computationally hard in general but can also be used to identify tractable cases of matrix elimination. We finally identify different classes of anonymous games where iterated dominance is in P and NP-complete, respectively.
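For concreteness, here is a minimal sketch of iterated removal of weakly dominated actions for a general bimatrix game (the paper works with the more compact anonymous representation); the fixed removal order is an illustrative choice, and for weak dominance the order can affect the outcome, which is part of what makes the solvability question delicate:

```python
def iterated_weak_dominance(A, B):
    """Iteratively remove pure actions weakly dominated by another surviving
    pure action. A[r][c] and B[r][c] are the row and column player's payoffs.
    Returns the surviving row and column indices."""
    rows, cols = set(range(len(A))), set(range(len(A[0])))

    def find_dominated(own, opp, pay):
        # a is weakly dominated by d: pay(d, .) >= pay(a, .) with one strict.
        for a in own:
            for d in own - {a}:
                diffs = [pay(d, o) - pay(a, o) for o in opp]
                if all(x >= 0 for x in diffs) and any(x > 0 for x in diffs):
                    return a
        return None

    changed = True
    while changed:
        changed = False
        r = find_dominated(rows, cols, lambda a, o: A[a][o])
        if r is not None:
            rows.discard(r)
            changed = True
        c = find_dominated(cols, rows, lambda a, o: B[o][a])
        if c is not None:
            cols.discard(c)
            changed = True
    return rows, cols

# A 2x2 example: the second row is weakly dominated, then the second column.
A = [[1, 1], [1, 0]]
B = [[1, 0], [1, 1]]
print(iterated_weak_dominance(A, B))  # ({0}, {0})
```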
Consider a matching problem on a graph where disjoint sets of vertices are privately owned by self-interested agents. An edge between a pair of vertices indicates compatibility and allows the vertices to match. We seek a mechanism to maximize the number of matches despite self-interest, with agents that each want to maximize the number of their own vertices that match. Each agent can choose to hide some of its vertices, and then privately match the hidden vertices with any of its own vertices that go unmatched by the mechanism. A prominent application of this model is to kidney exchange, where agents correspond to hospitals and vertices to donor-patient pairs. Here hospitals may game an exchange by holding back pairs and harm social welfare. In this paper we seek to design mechanisms that are strategyproof, in the sense that agents cannot benefit from hiding vertices, and approximately maximize efficiency, i.e., produce a matching that is close in cardinality to the maximum cardinality matching. Our main result is the design and analysis of the eponymous Mix-and-Match mechanism; we show that this randomized mechanism is strategyproof and provides a 2-approximation. Lower bounds establish that the mechanism is near optimal.
A common thread in the social sciences is to identify the “most desirable” elements of a set of alternatives according to some binary dominance relation. Examples can be found in areas as diverse as voting theory, game theory, and argumentation theory. Brandt and Fischer [BF08] proved that MC$_u$-MEMBER, the problem of deciding whether an alternative is contained in some inclusion-minimal upward covering set—a subset of alternatives that satisfies certain notions of internal and external stability—is NP-hard. We raise their NP-hardness lower bound to the $\Theta_2^p$ level of the polynomial hierarchy and provide a $\Sigma_2^p$ upper bound. Relatedly, we show that other problems regarding minimal upward covering sets, such as deciding whether the set of all alternatives is a minimal upward covering set, are coNP-hard. As a consequence, minimal upward covering sets cannot be found in polynomial time unless P = NP.
We consider directed graphs over a set of $n$ agents, where an edge $(i,j)$ is taken to mean that agent $i$ supports or trusts agent $j$. Given such a graph and an integer $k\leq n$, we wish to select a subset of $k$ agents that maximizes the sum of indegrees, i.e., a subset of $k$ most popular or most trusted agents. At the same time we assume that each individual agent is only interested in being selected, and may misreport its outgoing edges to this end. This problem formulation captures realistic scenarios where agents choose among themselves, which can be found in the context of Internet search, social networks like Twitter, or reputation systems like Epinions. Our goal is to design mechanisms without payments that map each graph to a $k$-subset of agents to be selected and satisfy the following two constraints: strategyproofness, i.e., agents cannot benefit from misreporting their outgoing edges, and approximate optimality, i.e., the sum of indegrees of the selected subset of agents is always close to optimal. Our first main result is a surprising impossibility: for $k \in \{1,\dots,n-1\}$, no deterministic strategyproof mechanism can provide a finite approximation ratio. Our second main result is a randomized strategyproof mechanism with an approximation ratio that is bounded from above by four for any value of $k$, and approaches one as $k$ grows.
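A three-agent example makes the manipulability behind this impossibility concrete: under naive top-1 selection by indegree, with an assumed lexicographic tie-breaking rule, an agent can win by withholding its outgoing edge to a rival:

```python
def top1_indegree(edges, n):
    """Select the agent with maximum indegree; ties broken by lowest index
    (the tie-breaking rule is an illustrative assumption)."""
    indeg = [0] * n
    for i, j in edges:
        indeg[j] += 1
    return max(range(n), key=lambda j: (indeg[j], -j))

truthful = [(0, 1), (1, 0), (2, 1)]  # agent 1 has indegree 2 and wins
manipulated = [(1, 0), (2, 1)]       # agent 0 withholds its edge to 1
print(top1_indegree(truthful, 3), top1_indegree(manipulated, 3))  # 1 then 0
```

Agent 0 strictly benefits from the misreport, so this natural mechanism is not strategyproof; the paper shows that no deterministic strategyproof mechanism does much better.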
A recurring theme in the mathematical social sciences is how to select the desirable elements given a binary dominance relation on a set of alternatives. Schwartz's tournament equilibrium set (TEQ) ranks among the most intriguing, but also among the most enigmatic, tournament solutions that have been proposed so far in this context. Due to its unwieldy recursive definition, little is known about TEQ. In particular, its monotonicity remains an open problem to date. Yet, if TEQ were to satisfy monotonicity, it would be a very attractive tournament solution concept refining both the Banks set and Dutta's minimal covering set. We show that the problem of deciding whether a given alternative is contained in TEQ is NP-hard.

Commonly Cited References

Hill and Kertz studied the prophet inequality on iid distributions [The Annals of Probability 1982]. They proved a theoretical bound of 1 − 1/e on the approximation factor of their algorithm. They conjectured that the best approximation factor for arbitrarily large n is 1/(1+1/e) ≈ 0.731. This conjecture remained open prior to this paper for over 30 years. In this paper we present a threshold-based algorithm for the prophet inequality with n iid distributions. Using a nontrivial and novel approach we show that our algorithm is a 0.738-approximation algorithm. By beating the bound of 1/(1+1/e), this refutes the conjecture of Hill and Kertz. Moreover, we generalize our results to non-uniform distributions and discuss its applications in mechanism design.
We present a general framework for approximately reducing the mechanism design problem for multiple agents to single agent subproblems in the context of Bayesian combinatorial auctions. Our framework can be applied to any setting which roughly satisfies the following assumptions: (i) agents' types are distributed independently (not necessarily identically), (ii) objective function is additively separable over the agents, and (iii) there are no interagent constraints except for the supply constraints (i.e., that the total allocation of each item should not exceed the supply). Our framework is general in the sense that it makes no direct assumption about agents' valuations, type distributions, or single agent constraints (e.g., budget, incentive compatibility, etc.). We present two generic multiagent mechanisms which use single agent mechanisms as black boxes. If an $\alpha$-approximate single agent mechanism is available for each agent, and assuming no agent ever demands more than $\frac{1}{k}$ of all units of each item, our generic multiagent mechanisms are $\gamma_{k}\alpha$-approximations of the optimal multiagent mechanism, where $\gamma_{k}$ is a constant which is at least $1-\frac{1}{\sqrt{k+3}}$. As a byproduct of our construction, we present a generalization of prophet inequalities where both gambler and prophet are allowed to pick $k$ numbers each to receive a reward equal to their sum. Finally, we use our framework to obtain multiagent mechanisms with improved approximation factor for several settings from the literature.
Implicitly defined (and easily approximated) universal constants $1.1 < a_n < 1.6, n = 2,3, \cdots$, are found so that if $X_1, X_2, \cdots$ are i.i.d. non-negative random variables and if $T_n$ is the set of stop rules for $X_1, \cdots, X_n$, then $E(\max\{X_1, \cdots, X_n\}) \leq a_n \sup\{EX_t: t \in T_n\}$, and the bound $a_n$ is best possible. Similar universal constants $0 < b_n < \frac{1}{4}$ are found so that if the $\{X_i\}$ are i.i.d. random variables taking values only in $\lbrack a, b\rbrack$, then $E(\max\{X_1, \cdots, X_n\}) \leq \sup\{EX_t: t \in T_n\} + b_n(b - a)$, where again the bound $b_n$ is best possible. In both situations, extremal distributions for which equality is attained (or nearly attained) are given in implicit form.
We present a general framework for stochastic online maximization problems with combinatorial feasibility constraints. The framework establishes prophet inequalities by constructing price-based online approximation algorithms, a natural extension of threshold algorithms for settings beyond binary selection. Our analysis takes the form of an extension theorem: we derive sufficient conditions on prices when all weights are known in advance, then prove that the resulting approximation guarantees extend directly to stochastic settings. Our framework unifies and simplifies much of the existing literature on prophet inequalities and posted price mechanisms, and is used to derive new and improved results for combinatorial markets (with and without complements), multi-dimensional matroids, and sparse packing problems. Finally, we highlight a surprising connection between the smoothness framework for bounding the price of anarchy of mechanisms and our framework, and show that many smooth mechanisms can be recast as posted price mechanisms with comparable performance guarantees.
The secretary and the prophet inequality problems are central to the field of Stopping Theory. Recently, there has been a lot of work in generalizing these models to multiple items because of their applications in mechanism design. The most important of these generalizations are to matroids and to combinatorial auctions (extends bipartite matching). Kleinberg-Weinberg [33] and Feldman et al. [17] show that for adversarial arrival order of random variables the optimal prophet inequalities give a 1/2-approximation. For many settings, however, it's conceivable that the arrival order is chosen uniformly at random, akin to the secretary problem. For such a random arrival model, we improve upon the 1/2-approximation and obtain (1 − 1/e)-approximation prophet inequalities for both matroids and combinatorial auctions. This also gives improvements to the results of Yan [45] and Esfandiari et al. [15] who worked in the special cases where we can fully control the arrival order or when there is only a single item. Our techniques are threshold based. We convert our discrete problem into a continuous setting and then give a generic template on how to dynamically adjust these thresholds to lower bound the expected total welfare.
In 1876, Lewis Carroll proposed a voting system in which the winner is the candidate who with the fewest changes in voters' preferences becomes a Condorcet winner—a candidate who beats all other candidates in pairwise majority-rule elections. Bartholdi, Tovey, and Trick provided a lower bound—NP-hardness—on the computational complexity of determining the election winner in Carroll's system. We provide a stronger lower bound and an upper bound that matches our lower bound. In particular, determining the winner in Carroll's system is complete for parallel access to NP, that is, it is complete for $\Theta_2^p$, for which it becomes the most natural complete problem known. It follows that determining the winner in Carroll's elections is not NP-complete unless the polynomial hierarchy collapses.
We study generalizations of the "Prophet Inequality" and "Secretary Problem", where the algorithm is restricted to an arbitrary downward-closed set system. For 0,1 values, we give O(n)-competitive algorithms for both problems. This is close to the Ω(n/log n) lower bound due to Babaioff, Immorlica, and Kleinberg. For general values, our results translate to O(log(n) log(r))-competitive algorithms, where r is the cardinality of the largest feasible set. This resolves (up to the O(log log(n) log(r)) factor) an open question posed to us by Bobby Kleinberg.
Let $X_i \geq 0$ be independent, $i = 1, \cdots, n$, and $X^\ast_n = \max(X_1, \cdots, X_n)$. Let $t(c)$ ($s(c)$) be the threshold stopping rule for $X_1, \cdots, X_n$, defined by $t(c) = \text{smallest } i$ for which $X_i \geq c$ ($s(c) = \text{smallest } i$ for which $X_i > c$), $= n$ otherwise. Let $m$ be a median of the distribution of $X^\ast_n$. It is shown that for every $n$ and $\underline{X}$ either $EX^\ast_n \leq 2EX_{t(m)}$ or $EX^\ast_n \leq 2EX_{s(m)}$. This improves previously known results, [1], [4]. Some results for i.i.d. $X_i$ are also included.
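A quick simulation sketch of the median-threshold rule in this result: stop at the first value of at least a median $m$ of the distribution of the maximum, and compare against $E[X^\ast_n]$; the distribution and parameters below are arbitrary illustrative choices:

```python
import random

def median_threshold_ratio(sample, n, trials=100_000, seed=0):
    """Estimate E[max(X_1,...,X_n)] / E[X_{t(m)}], where t(m) stops at the
    first value >= m and m is a median of max(X_1,...,X_n); for a continuous
    distribution the rules t(m) and s(m) coincide almost surely."""
    rng = random.Random(seed)
    maxima = sorted(max(sample(rng) for _ in range(n)) for _ in range(trials))
    m = maxima[trials // 2]  # empirical median of the maximum
    e_max = sum(maxima) / trials
    stopped = 0.0
    for _ in range(trials):
        xs = [sample(rng) for _ in range(n)]
        stopped += next((x for x in xs if x >= m), xs[-1])  # t(m), = n otherwise
    return e_max / (stopped / trials)

# Exponential(1) values: the ratio stays below the guaranteed factor of 2.
print(median_threshold_ratio(lambda r: r.expovariate(1.0), n=5))
```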
Haim Kaplan, David Naori, and Danny Raz. Competitive Analysis with a Sample and the Secretary Problem. Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 2082–2095. DOI: https://doi.org/10.1137/1.9781611975994.128. We extend the standard online worst-case model to accommodate past experience which is available to the online player in many practical scenarios. We do this by revealing a random sample of the adversarial input to the online player ahead of time. The online player competes with the expected optimal value on the part of the input that arrives online. Our model bridges between existing online stochastic models (e.g., items are drawn i.i.d. from a distribution) and the online worst-case model. We also extend in a similar manner (by revealing a sample) the online random-order model. We study the classical secretary problem in our new models. In the worst-case model we present a simple online algorithm with optimal competitive-ratio for any sample size. In the random-order model, we also give a simple online algorithm with an almost tight competitive-ratio for small sample sizes. Interestingly, we prove that for a large enough sample, no algorithm can be simultaneously optimal both in the worst-case and random-order models.
Develops a structuralist understanding of mathematics, as an alternative to set- or type-theoretic foundations, that respects classical mathematical truth while minimizing Platonist commitments to abstract entities. Modal logic is combined with notions of part/whole (mereology) enabling a systematic interpretation of ordinary mathematical statements as asserting what would be the case in any (suitable) structure there (logically) might be, e.g. for number theory, functional analysis, algebra, pure geometry, etc. Structures are understood as comprising objects, whatever their nature, standing in suitable relations as given by axioms or defining conditions in mathematics proper. The characterization of structures is aided by the addition of plural quantifiers, e.g. ‘Any objects of sort F’ corresponding to arbitrary collections of Fs, achieving the expressive power of second-order logic, hence a full logic of relations. (See the author's ‘Structuralism without Structures’, Philosophia Mathematica 4 (1996): 100–123.) Claims of absolute existence of structures are replaced by claims of (logical) possibility of enough structurally interrelated objects (modal-existence postulates). The vast bulk of ordinary mathematics, and scientific applications, can thus be recovered on the basis of the possibility of a countable infinity of atoms. As applied to set theory itself, these ideas lead to a ‘many worlds’ view, as opposed to the standard ‘fixed universe’ view, inspired by Zermelo (1930), respecting the unrestricted, indefinite extendability of models of the Zermelo–Fraenkel axioms. Natural motivation for (‘small’) large cardinal axioms is thus provided. In sum, the vast bulk of abstract mathematics is respected as objective, while literal reference to abstracta and related problems with Platonism are eliminated.
Electoral control refers to attempts by an election's organizer (“the chair”) to influence the outcome by adding/deleting/partitioning voters or candidates. The important paper of Bartholdi, Tovey, and Trick [1] that introduces (constructive) control proposes computational complexity as a means of resisting control attempts: Look for election systems where the chair's task in seeking control is itself computationally infeasible. We introduce and study a method of combining two or more candidate-anonymous election schemes in such a way that the combined scheme possesses all the resistances to control (i.e., all the NP-hardnesses of control) possessed by any of its constituents: It combines their strengths. From this and new resistance constructions, we prove for the first time that there exists a neutral, anonymous election scheme (whose winner problem is computable in polynomial time) that is resistant to all twenty standard types of electoral control.
The secretary problem or the game of Googol are classic models for online selection problems that have received significant attention in the last five decades. In this paper we consider a variant of the problem and explore its connections to data-driven online selection. Specifically, we are given n cards with arbitrary nonnegative numbers written on both sides. The cards are randomly placed on n consecutive positions on a table, and for each card, the visible side is also selected at random. The player sees the visible side of all cards and wants to select the card with the maximum hidden value. To this end, the player flips the first card, sees its hidden value and decides whether to pick it or drop it and continue with the next card. We study algorithms for two natural objectives. In the first one, similar to the secretary problem, the player wants to maximize the probability of selecting the maximum hidden value. We show that this can be done with probability at least 0.45292. In the second objective, similar to the prophet inequality, the player wants to maximize the expectation of the selected hidden value. Here we show a guarantee of at least 0.63518 with respect to the expected maximum hidden value. Our algorithms result from combining three basic strategies. One is to stop whenever we see a value larger than the initial n visible numbers. The second one is to stop the first time the last flipped card's value is the largest of the currently n visible numbers in the table. And the third one is similar to the latter but to stop it additionally requires that the last flipped value is larger than the value on the other side of its card. We apply our results to the prophet secretary problem with unknown distributions, but with access to a single sample from each distribution. In particular, our guarantee improves upon 1 − 1/e for this problem, which is the currently best known guarantee and only works for the i.i.d. prophet inequality with samples.
A major achievement of mechanism design theory is a general method for the construction of truthful mechanisms called VCG (Vickrey, Clarke, Groves). When applying this method to complex problems such as combinatorial auctions, a difficulty arises: VCG mechanisms are required to compute optimal outcomes and are, therefore, computationally infeasible. However, if the optimal outcome is replaced by the results of a sub-optimal algorithm, the resulting mechanism (termed VCG-based) is no longer necessarily truthful. The first part of this paper studies this phenomenon in depth and shows that it is near universal. Specifically, we prove that essentially all reasonable approximations or heuristics for combinatorial auctions as well as a wide class of cost minimization problems yield non-truthful VCG-based mechanisms. We generalize these results for affine maximizers. The second part of this paper proposes a general method for circumventing the above problem. We introduce a modification of VCG-based mechanisms in which the agents are given a chance to improve the output of the underlying algorithm. When the agents behave truthfully, the welfare obtained by the mechanism is at least as good as the one obtained by the algorithm's output. We provide a strong rationale for truth-telling behavior. Our method satisfies individual rationality as well.
In the design and analysis of revenue-maximizing auctions, auction performance is typically measured with respect to a prior distribution over inputs. The most obvious source for such a distribution is past data. The goal of this paper is to understand how much data is necessary and sufficient to guarantee near-optimal expected revenue.
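A minimal sketch of the question in the single-bidder case: post the sample value maximizing empirical revenue, and watch expected revenue approach the optimum as the number of samples grows (the uniform distribution and the evaluation setup are illustrative assumptions, not the paper's analysis):

```python
import random

def empirical_best_price(samples):
    """Pick the price maximizing empirical revenue p * (#samples >= p) / m;
    it suffices to search over the sample values themselves."""
    m = len(samples)
    return max(samples, key=lambda p: p * sum(1 for s in samples if s >= p) / m)

# For Uniform[0,1] the optimal price is 1/2 with expected revenue 1/4; more
# data moves the empirically chosen price's true revenue p(1-p) toward 1/4.
rng = random.Random(0)
for m in (10, 100, 10_000):
    p = empirical_best_price([rng.random() for _ in range(m)])
    print(m, round(p, 3), round(p * (1 - p), 4))
```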
The Ramsey number $r_k(s,n)$ is the minimum $N$ such that every red-blue coloring of the $k$-tuples of an $N$-element set contains a red set of size $s$ or a blue set of size $n$, where a set is called red (blue) if all $k$-tuples from this set are red (blue). In this paper we obtain new estimates for several basic hypergraph Ramsey problems. We give a new upper bound for $r_k(s,n)$ for $k \geq 3$ and $s$ fixed. In particular, we show that $r_3(s,n) \leq 2^{n^{s-2}\log n}$, which improves by a factor of $n^{s-2}/\mathrm{polylog}\,n$ the exponent of the previous upper bound of Erdős and Rado from 1952. We also obtain a new lower bound for these numbers, showing that there is a constant $c>0$ such that $r_3(s,n) \geq 2^{c\,sn\,\log(\frac{n}{s}+1)}$ for all $4 \leq s \leq n$. For constant $s$, this gives the first superexponential lower bound for $r_3(s,n)$, answering an open question posed by Erdős and Hajnal in 1972. Next, we consider the 3-color Ramsey number $r_3(n,n,n)$, which is the minimum $N$ such that every 3-coloring of the triples of an $N$-element set contains a monochromatic set of size $n$. Improving another old result of Erdős and Hajnal, we show that $r_3(n,n,n) \geq 2^{n^{c\log n}}$. Finally, we make some progress on related hypergraph Ramsey-type problems.
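As a toy illustration of the definition above (not from the paper, and only for the graph case $k=2$, since the hypergraph cases blow up far too quickly for exhaustive search), the following sketch verifies by brute force that $r_2(3,3)=6$.

```python
# Toy illustration of the Ramsey-number definition for k = 2 (graphs),
# where r_2(3,3) = 6: every red-blue coloring of the edges of K_6 contains
# a monochromatic triangle, while K_5 admits a coloring with none.
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    # coloring maps each 2-element subset (edge) to 0 (red) or 1 (blue)
    return any(coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
               for a, b, c in combinations(range(n), 3))

def every_coloring_forced(n):
    edges = list(combinations(range(n), 2))
    return all(has_mono_triangle(n, dict(zip(edges, bits)))
               for bits in product((0, 1), repeat=len(edges)))

print(every_coloring_forced(5))  # False: K_5 has a triangle-free 2-coloring
print(every_coloring_forced(6))  # True:  hence r_2(3,3) = 6
```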
A recurring theme in the mathematical social sciences is how to select the desirable elements given a binary dominance relation on a set of alternatives. Schwartz's tournament equilibrium set (TEQ) ranks among the most intriguing, but also among the most enigmatic, tournament solutions that have been proposed so far in this context. Due to its unwieldy recursive definition, little is known about TEQ. In particular, its monotonicity remains an open problem to date. Yet, if TEQ were to satisfy monotonicity, it would be a very attractive tournament solution concept refining both the Banks set and Dutta's minimal covering set. We show that the problem of deciding whether a given alternative is contained in TEQ is NP-hard.
A central object in optimal stopping theory is the single-choice prophet inequality for independent, identically distributed random variables: given a sequence of random variables $X_1, \dots, X_n$ drawn independently from a distribution $F$, the goal is to choose a stopping time $\tau$ so as to maximize $\alpha$ such that for all distributions $F$ we have $E[X_\tau] \geq \alpha \cdot E[\max_t X_t]$. What makes this problem challenging is that the decision whether $\tau = t$ may only depend on the values of the random variables $X_1, \dots, X_t$ and on the distribution $F$. For a long time the best known bound for the problem had been $\alpha \geq 1-1/e \approx 0.632$, but quite recently a tight bound of $\alpha \approx 0.745$ was obtained. The case where $F$ is unknown, such that the decision whether $\tau = t$ may depend only on the values of the random variables $X_1, \dots, X_t$, is equally well motivated but has received much less attention. A straightforward guarantee for this case of $\alpha \geq 1/e \approx 0.368$ can be derived from the solution to the secretary problem, where an arbitrary set of values arrive in random order and the goal is to maximize the probability of selecting the largest value. We show that this bound is in fact tight. We then investigate the case where the stopping time may additionally depend on a limited number of samples from $F$, and show that even with $o(n)$ samples $\alpha \leq 1/e$. On the other hand, $n$ samples allow for a significant improvement, while $O(n^2)$ samples are equivalent to knowledge of the distribution: specifically, with $n$ samples $\alpha \geq 1-1/e \approx 0.632$ and $\alpha \leq \ln(2) \approx 0.693$, and with $O(n^2)$ samples $\alpha \geq 0.745-\varepsilon$ for any $\varepsilon > 0$.
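The $1/e$ guarantee quoted above comes from a secretary-style rule: skip roughly a $1/e$ fraction of the values, then stop at the first value that beats everything seen so far. A minimal Monte Carlo sketch, with uniform random variables as an arbitrary stand-in for the unknown $F$:

```python
# Secretary-style stopping rule behind the 1/e guarantee: skip the first
# ~n/e values, then take the first value beating everything seen so far
# (falling back to the last value if none does).
import math, random

def secretary_rule(xs):
    m = max(1, round(len(xs) / math.e))
    best_seen = max(xs[:m])
    for x in xs[m:]:
        if x > best_seen:
            return x
    return xs[-1]

n, trials = 50, 100_000
gambler = prophet = 0.0
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    gambler += secretary_rule(xs)
    prophet += max(xs)
print(gambler / prophet)  # well above 1/e for uniforms; 1/e is the worst case
```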
The setting of the classic prophet inequality is as follows: a gambler is shown the probability distributions of $n$ independent, non-negative random variables with finite expectations. In their indexed order, a value is drawn from each distribution, and after every draw the gambler may choose to accept the value and end the game, or discard the value permanently and continue the game. What is the best performance that the gambler can achieve in comparison to a prophet who can always choose the highest value? Krengel, Sucheston, and Garling solved this problem in 1978, showing that there exists a strategy for which the gambler can achieve half as much reward as the prophet in expectation. Furthermore, this result is tight. In this work, we consider a setting in which the gambler is allowed much less information. Suppose that the gambler can only take one sample from each of the distributions before playing the game, instead of knowing the full distributions. We provide a simple and intuitive algorithm that recovers the original approximation of $\frac{1}{2}$. Our algorithm works against even an almighty adversary who always chooses a worst-case ordering, rather than the standard offline adversary. The result also has implications for mechanism design -- there is much interest in designing competitive auctions with a finite number of samples from value distributions rather than full distributional knowledge.
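One natural reading of the "simple and intuitive algorithm" is a single-threshold rule: set the threshold to the largest of the $n$ samples and accept the first value that meets it. The sketch below simulates such a rule on an illustrative instance (exponential distributions of our choosing); the paper's exact algorithm and analysis may differ in details.

```python
# One-sample threshold rule: use the largest of the n samples as a
# threshold and accept the first actual value meeting it. The exponential
# distributions below are an arbitrary stand-in.
import random

def one_sample_rule(samples, values):
    threshold = max(samples)
    for v in values:
        if v >= threshold:
            return v
    return 0.0  # no value accepted

rates = [1.0 + i / 10 for i in range(20)]  # one distribution per draw
trials, gambler, prophet = 100_000, 0.0, 0.0
for _ in range(trials):
    samples = [random.expovariate(r) for r in rates]
    values = [random.expovariate(r) for r in rates]
    gambler += one_sample_rule(samples, values)
    prophet += max(values)
print(gambler / prophet)  # empirically at least ~1/2 on this instance
```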
Clustering points in metric spaces is a long-studied area of research. Clustering has seen a multitude of work both theoretically, in understanding the approximation guarantees possible for many objective functions such as k-median and k-means clustering, and experimentally, in finding the fastest algorithms and seeding procedures for Lloyd's algorithm. The performance of a given clustering algorithm depends on the specific application at hand, and this may not be known up front. For example, a typical instance may vary depending on the application, and different clustering heuristics perform differently depending on the instance. In this paper, we define an infinite family of algorithms generalizing Lloyd's algorithm, with one parameter controlling the initialization procedure, and another parameter controlling the local search procedure. This family of algorithms includes the celebrated k-means++ algorithm, as well as the classic farthest-first traversal algorithm. We design efficient learning algorithms which receive samples from an application-specific distribution over clustering instances and learn a near-optimal clustering algorithm from the class. We show the best parameters vary significantly across datasets such as MNIST, CIFAR, and mixtures of Gaussians. Our learned algorithms never perform worse than k-means++, and on some datasets we see significant improvements.
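A hypothetical sketch of what a one-parameter initialization family can look like: sample each new center with probability proportional to $d(x)^\alpha$, so that $\alpha = 2$ recovers k-means++ seeding and $\alpha \to \infty$ approaches farthest-first traversal. This is our illustration of a parameterized seeding procedure, not necessarily the exact family defined in the paper.

```python
# d^alpha seeding: each new center is drawn with probability proportional
# to the alpha-th power of its distance to the nearest existing center.
# alpha = 2 gives k-means++-style seeding; alpha = inf gives farthest-first.
import random

def d_alpha_seeding(points, k, alpha, dist):
    centers = [random.choice(points)]
    while len(centers) < k:
        d = [min(dist(p, c) for c in centers) for p in points]
        if alpha == float("inf"):
            centers.append(points[max(range(len(points)), key=d.__getitem__)])
        else:
            centers.append(random.choices(points, weights=[di ** alpha for di in d])[0])
    return centers

euclid = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
pts = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]
print(d_alpha_seeding(pts, 3, alpha=2, dist=euclid))  # k-means++-style seeding
```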
This paper is devoted, in the main, to proving the asymptotic minimax character of the sample distribution function (d.f.) for estimating an unknown d.f. in $\mathscr{F}$ or $\mathscr{F}_c$ (defined in Section 1) for a wide variety of weight functions. Section 1 contains definitions and a discussion of measurability considerations. Lemma 2 of Section 2 is an essential tool in our proofs and seems to be of interest per se; for example, it implies the convergence of the moment generating function of $G_n$ to that of $G$ (definitions in (2.1)). In Section 3 the asymptotic minimax character is proved for a fundamental class of weight functions which are functions of the maximum deviation between estimating and true d.f. In Section 4 a device (of more general applicability in decision theory) is employed which yields the asymptotic minimax result for a wide class of weight functions of this character as a consequence of the results of Section 3 for weight functions of the fundamental class. In Section 5 the asymptotic minimax character is proved for a class of integrated weight functions. A more general class of weight functions for which the asymptotic minimax character holds is discussed in Section 6. This includes weight functions for which the risk function of the sample d.f. is not a constant over $\mathscr{F}_c.$ Most weight functions of practical interest are included in the considerations of Sections 3 to 6. Section 6 also includes a discussion of multinomial estimation problems for which the asymptotic minimax character of the classical estimator is contained in our results. Finally, Section 7 includes a general discussion of minimization of symmetric convex or monotone functionals of symmetric random elements, with special consideration of the "tied-down" Wiener process, and with a heuristic proof of the results of Sections 3, 4, 5, and much of Section 6.
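For concreteness, the statistic underlying the fundamental class of weight functions in Section 3 is the maximum deviation $\sup_x |F_n(x) - F(x)|$ between the sample d.f. and the true d.f.; a short computation of this deviation for uniform data (an arbitrary choice of ours):

```python
# Maximum deviation sup_x |F_n(x) - F(x)| of the sample d.f. from the true
# d.f., here the Uniform[0,1] d.f. F(x) = x. For sorted data, the sup is
# attained just before or at a jump of F_n.
import random

def max_deviation_uniform(n):
    xs = sorted(random.random() for _ in range(n))
    return max(max(abs((i + 1) / n - x), abs(i / n - x))
               for i, x in enumerate(xs))

print(max_deviation_uniform(1000))  # typically on the order of 1/sqrt(n)
```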
Consider a gambler who observes a sequence of independent, non-negative random numbers and is allowed to stop the sequence at any time, claiming a reward equal to the most recent observation. The famous prophet inequality of Krengel, Sucheston, and Garling asserts that a gambler who knows the distribution of each random variable can achieve at least half as much reward, in expectation, as a "prophet" who knows the sampled values of each random variable and can choose the largest one. We generalize this result to the setting in which the gambler and the prophet are allowed to make more than one selection, subject to a matroid constraint. We show that the gambler can still achieve at least half as much reward as the prophet; this result is the best possible, since it is known that the ratio cannot be improved even in the original prophet inequality, which corresponds to the special case of rank-one matroids. Generalizing the result still further, we show that under an intersection of $p$ matroid constraints, the prophet's reward exceeds the gambler's by a factor of at most $O(p)$, and this factor is also tight.
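In the rank-one special case the guarantee can be met by a single threshold: Samuel-Cahn's classical rule stops at the first value exceeding a median of $\max_t X_t$. The simulation below (with uniform variables, our choice) illustrates that base case; the matroid generalization in the paper replaces the fixed threshold with adaptive thresholds.

```python
# Samuel-Cahn threshold rule for the rank-one (classical) prophet
# inequality: stop at the first value exceeding a median of max_t X_t.
import random

def estimate_median_of_max(n, trials=10_000):
    maxes = sorted(max(random.random() for _ in range(n)) for _ in range(trials))
    return maxes[trials // 2]

n, trials = 20, 100_000
T = estimate_median_of_max(n)
gambler = prophet = 0.0
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    gambler += next((x for x in xs if x >= T), 0.0)  # first value over T, if any
    prophet += max(xs)
print(gambler / prophet)  # at least ~0.5, as the prophet inequality guarantees
```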
A crucial problem in modern data science is data-driven algorithm design, where the goal is to choose the best algorithm, or algorithm parameters, for a specific application domain. In practice, we often optimize over a parametric algorithm family, searching for parameters with high performance on a collection of typical problem instances. While effective in practice, these procedures generally have not come with provable guarantees. A recent line of work initiated by a seminal paper of Gupta and Roughgarden (2017) analyzes application-specific algorithm selection from a theoretical perspective. We advance this research direction in several important settings. We provide upper and lower bounds on regret for algorithm selection in online settings, where problems arrive sequentially and we must choose parameters online. We also consider differentially private algorithm selection, where the goal is to find good parameters for a set of problems without divulging too much sensitive information contained therein. We analyze several important parameterized families of algorithms, including SDP-rounding schemes for problems formulated as integer quadratic programs as well as greedy techniques for several canonical subset selection problems. The cost function that measures an algorithm's performance is often a volatile piecewise Lipschitz function of its parameters, since a small change to the parameters can lead to a cascade of different decisions made by the algorithm. We present general techniques for optimizing the sum or average of piecewise Lipschitz functions when the underlying functions satisfy a sufficient and general condition called dispersion. Intuitively, a set of piecewise Lipschitz functions is dispersed if no small region contains many of the functions' discontinuities. Using dispersion, we improve over the best-known online learning regret bounds for a variety of problems, prove regret bounds for problems not previously studied, and provide matching regret lower bounds. In the private optimization setting, we show how to optimize performance while preserving privacy for several important problems, providing matching upper and lower bounds on performance loss due to privacy preservation. Though algorithm selection is our primary motivation, we believe the notion of dispersion may be of independent interest. Therefore, we present our results for the more general problem of optimizing piecewise Lipschitz functions. Finally, we uncover dispersion in domains beyond algorithm selection, namely, auction design and pricing, providing online and privacy guarantees for these problems as well.
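One standard template consistent with the online setting described above is an exponential-weights learner run over a discretized parameter space; under dispersion, discontinuities do not concentrate near any grid point, so a fine grid suffices. The grid size, learning rate, and toy utility functions below are illustrative assumptions of ours, not the paper's constructions.

```python
# Exponential weights over a parameter grid for online optimization of
# piecewise Lipschitz utilities; grid, eta, and utilities are illustrative.
import math, random

def exp_weights_over_grid(utility_stream, grid, eta):
    weights = [1.0] * len(grid)
    total_utility = 0.0
    for utility in utility_stream:  # one piecewise Lipschitz utility per round
        total = sum(weights)
        rho = random.choices(grid, [w / total for w in weights])[0]
        total_utility += utility(rho)
        # full-information update: reweight every grid point by its utility
        weights = [w * math.exp(eta * utility(r)) for w, r in zip(weights, grid)]
    return total_utility

grid = [i / 100 for i in range(101)]
# toy stream: threshold-shaped utilities with a random discontinuity each round
stream = [(lambda t: (lambda rho: 1.0 if rho >= t else rho))(random.random())
          for _ in range(200)]
print(exp_weights_over_grid(stream, grid, eta=0.1))
```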
Algorithmic pricing is the computational problem that sellers (e.g., in supermarkets) face when trying to set prices for their items to maximize their profit in the presence of a known demand. Guruswami et al. (SODA, 2005) proposed this problem and gave logarithmic approximations (in the number of consumers) for the unit-demand and single-parameter cases where there is a specific set of consumers and their valuations for bundles are known precisely. Subsequently several versions of the problem have been shown to have poly-logarithmic inapproximability. This problem has direct ties to the important open question of better understanding the Bayesian optimal mechanism in multi-parameter agent settings; however, for this purpose approximation factors logarithmic in the number of agents are inadequate. It is therefore of vital interest to consider special cases where constant approximations are possible. We consider the unit-demand variant of this pricing problem. Here a consumer has a valuation for each different item and their value for a set of items is simply the maximum value they have for any item in the set. Instead of considering a set of consumers with precisely known preferences, like the prior algorithmic pricing literature, we assume that the preferences of the consumers are drawn from a distribution. This is the standard assumption in economics; furthermore, the setting of a specific set of customers with specific preferences, which is employed in all of the prior work in algorithmic pricing, is a special case of this general Bayesian pricing problem, where there is a discrete Bayesian distribution for preferences specified by picking one consumer uniformly from the given set of consumers. Notice that the distribution over the valuations for the individual items that this generates is obviously correlated. Our work complements these existing works by considering the case where the consumer's valuations for the different items are independent random variables. Our main result is a constant approximation algorithm for this problem that makes use of an interesting connection between this problem and the concept of virtual valuations from the single-parameter Bayesian optimal mechanism design literature.
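The connection runs through Myerson's virtual valuation $\varphi(v) = v - (1 - F(v))/f(v)$ for a value $v$ with c.d.f. $F$ and density $f$. A worked one-liner for the uniform distribution, where $\varphi(v) = 2v - 1$ and the monopoly price is $1/2$:

```python
# Myerson virtual valuation phi(v) = v - (1 - F(v)) / f(v),
# instantiated for v ~ Uniform[0, 1], where phi(v) = 2v - 1.
def virtual_value_uniform01(v):
    F, f = v, 1.0           # c.d.f. and density of Uniform[0, 1]
    return v - (1 - F) / f  # = 2v - 1

print(virtual_value_uniform01(0.5))  # 0.0 -> the monopoly price is 1/2
```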
The procedure for allocating time on telescopes is horribly onerous for those unfortunate astronomers who serve on the committees that administer the process; it is also in danger of complete collapse as the number of applications steadily increases. Here an alternative is presented, whereby the task is distributed around the astronomical community in such a way as to steer the outcome towards awarding this precious resource to those projects where there is a community consensus that the science is most exciting and innovative.
Complements between goods--where one good takes on added value in the presence of another--have been a thorn in the side of algorithmic mechanism designers. On the one hand, complements are common in the standard motivating applications for combinatorial auctions, like spectrum license auctions. On the other, welfare maximization in the presence of complements is notoriously difficult, and this intractability has stymied theoretical progress in the area. For example, there are no known positive results for combinatorial auctions in which bidder valuations are multi-parameter and non-complement-free, other than the relatively weak results known for general valuations.
A method of improved efficiency is given for updating the mean and variance of weighted sampled data when an additional data value is included in the set. Evidence is presented that the method is stable and at least as accurate as the best existing updating method.
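A sketch in the spirit of the abstract, using the weighted analogue of Welford's single-pass update (we cannot reproduce the paper's exact formulas here, so treat this as one standard stable updating scheme): fold in one (value, weight) pair at a time, which avoids the catastrophic cancellation of the naive sum-of-squares formula.

```python
# Weighted single-pass update of mean and variance, one (value, weight)
# pair at a time; S accumulates the weighted squared deviations.
def weighted_update(state, x, w):
    W, mean, S = state
    W_new = W + w
    delta = x - mean
    mean_new = mean + (w / W_new) * delta
    S_new = S + w * delta * (x - mean_new)  # uses old *and* new mean
    return (W_new, mean_new, S_new)

state = (0.0, 0.0, 0.0)
for x, w in [(1.0, 2.0), (2.0, 1.0), (4.0, 1.0)]:
    state = weighted_update(state, x, w)
W, mean, S = state
print(mean, S / W)  # weighted mean 2.0, (population-style) weighted variance 1.5
```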
A central issue in applying auction theory in practice is the problem of dealing with budget-constrained agents. A desirable goal in practice is to design incentive compatible, individually rational, and Pareto optimal auctions while respecting the budget constraints. Achieving this goal is particularly challenging in the presence of nontrivial combinatorial constraints over the set of feasible allocations. Toward this goal and motivated by AdWords auctions, we present an auction for polymatroidal environments satisfying the above properties. Our auction employs a novel clinching technique with a clean geometric description and only needs an oracle access to the submodular function defining the polymatroid. As a result, this auction not only simplifies and generalizes all previous results, it applies to several new applications including AdWords Auctions, bandwidth markets, and video on demand. In particular, our characterization of the AdWords auction as polymatroidal constraints might be of independent interest. This allows us to design the first mechanism for Ad Auctions taking into account simultaneously budgets, multiple keywords and multiple slots.
We introduce fast algorithms for selecting a random sample of n records without replacement from a pool of N records, where the value of N is unknown beforehand. The main result of the paper is the design and analysis of Algorithm Z; it does the sampling in one pass using constant space and in $O(n(1 + \log(N/n)))$ expected time, which is optimum, up to a constant factor. Several optimizations are studied that collectively improve the speed of the naive version of the algorithm by an order of magnitude. We give an efficient Pascal-like implementation that incorporates these modifications and that is suitable for general use. Theoretical and empirical results indicate that Algorithm Z outperforms current methods by a significant margin.
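For orientation, here is the naive baseline that Algorithm Z accelerates, commonly known as Algorithm R: keep the first n records, then replace a uniformly random reservoir slot with probability n/t when the t-th record arrives. Algorithm Z's speedup comes from generating the number of records to skip between replacements instead of deciding record by record; the sketch below is only the baseline.

```python
# Algorithm R: one-pass reservoir sampling of n records from a stream of
# unknown length. After the stream ends, each record is in the reservoir
# with equal probability n/N.
import random

def reservoir_sample(stream, n):
    reservoir = []
    for t, record in enumerate(stream, start=1):
        if t <= n:
            reservoir.append(record)
        else:
            j = random.randrange(t)   # uniform in 0..t-1
            if j < n:                 # happens with probability n/t
                reservoir[j] = record
    return reservoir

print(reservoir_sample(range(10_000), 5))  # 5 records, each equally likely
```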
For both the edge deletion heuristic and the maximum-degree greedy heuristic, we study the problem of recognizing those graphs for which that heuristic can approximate the size of a minimum vertex cover within a constant factor of r, where r is a fixed rational number. Our main results are that these problems are complete for the class of problems solvable via parallel access to NP. To achieve these main results, we also show that the restriction of the vertex cover problem to those graphs for which either of these heuristics can find an optimal solution remains NP-hard.
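In minimal form, the two heuristics under study look as follows (a sketch of the standard formulations; the paper's results concern recognizing the inputs on which they do well, not the heuristics themselves):

```python
# Edge deletion: repeatedly take both endpoints of an uncovered edge (a
# classical 2-approximation). Maximum-degree greedy: repeatedly take a
# vertex of highest remaining degree. Graphs map vertices to neighbor sets.
def edge_deletion_cover(adj):
    cover = set()
    for u in adj:
        for v in adj[u]:
            if u not in cover and v not in cover:
                cover |= {u, v}
    return cover

def max_degree_greedy_cover(adj):
    adj = {v: set(ns) for v, ns in adj.items()}  # local mutable copy
    cover = set()
    while any(adj.values()):
        v = max(adj, key=lambda u: len(adj[u]))
        cover.add(v)
        for u in adj.pop(v):
            adj[u].discard(v)
    return cover

star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(edge_deletion_cover(star), max_degree_greedy_cover(star))  # {0, 1} {0}
```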
The promise of data science is that if data from a system can be recorded and understood then this understanding can potentially be utilized to improve the system. Behavioral and economic data, however, is different from scientific data in that it is subjective to the system. Behavior changes when the system changes, and to predict behavior for any given system change or to optimize over system changes, the behavioral model that generates the data must be inferred from the data. The ease with which this inference can be performed generally also depends on the system. Trivially, a system that ignores behavior does not admit any inference of a behavior generating model that can be used to predict behavior in a system that is responsive to behavior.
We study the design of truthful auctions for selling identical items in unlimited supply (e.g., digital goods) to n unit demand buyers. This classic problem stands out from profit-maximizing auction design literature as it requires no probabilistic assumptions on buyers' valuations and employs the framework of competitive analysis. Our objective is to optimize the worst-case performance of an auction, measured by the ratio between a given benchmark and revenue generated by the auction.
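A classic mechanism from this literature, the random sampling optimal price (RSOP) auction, illustrates the competitive-analysis framework: split the bidders into two random halves, compute the revenue-optimal fixed price on each half, and offer it to the other half, so that no bidder influences the price they face. A sketch (the benchmarks and analysis in the paper are more delicate):

```python
# RSOP auction sketch: each half of a random bidder partition prices the
# other half; truthful because a bid never affects its own price.
import random

def optimal_fixed_price(bids):
    # Revenue-maximizing single price against this bid set.
    return max(bids, key=lambda p: p * sum(b >= p for b in bids))

def rsop_revenue(bids):
    a, b = [], []
    for bid in bids:
        (a if random.random() < 0.5 else b).append(bid)
    if not a or not b:
        return 0.0
    pa, pb = optimal_fixed_price(a), optimal_fixed_price(b)
    return (pb * sum(x >= pb for x in a)    # price from half B offered to A
            + pa * sum(x >= pa for x in b))  # and vice versa

print(rsop_revenue([random.paretovariate(2) for _ in range(100)]))
```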
Let $\mathbf{Y} = (Y_1, \cdots, Y_n)$ be random variables satisfying the weak negative dependence condition: $P(Y_i < a_i \mid Y_1 < a_1, \cdots, Y_{i-1} < a_{i-1}) \leq P(Y_i < a_i)$ for $i = 2, \cdots, n$ and all constants $a_1, \cdots, a_n$. Let $\mathbf{X} = (X_1, \cdots, X_n)$ have independent components, where $X_i$ and $Y_i$ have the same marginal distribution, $i = 1, \cdots, n$. It is shown that $V(\mathbf{X}) \leq V(\mathbf{Y})$, where $V(\mathbf{Y}) = \sup \{EY_t : t \text{ is a stopping rule for } Y_1, \cdots, Y_n\}$. Also, the classical inequality which for nonnegative variables compares the expected return of a prophet $E\{Y_1 \vee \cdots \vee Y_n\}$ with that of the statistician $V(\mathbf{Y})$, i.e., $E\{Y_1 \vee \cdots \vee Y_n\} < 2V(\mathbf{Y})$, holds for nonnegative $\mathbf{Y}$ satisfying the negative dependence condition. Moreover, this inequality can be obtained by an explicitly described threshold rule $t(b)$, i.e., $E\{Y_1 \vee \cdots \vee Y_n\} < 2EY_{t(b)}$. Generalizations of this prophet inequality are given. Extensions of the results to infinite sequences are obtained.
For revenue and welfare maximization in single-dimensional Bayesian settings, Chawla et al. (STOC10) recently showed that sequential posted-price mechanisms (SPMs), though simple in form, can perform surprisingly well compared to the optimal mechanisms. In this paper, we give a theoretical explanation of this fact, based on a connection to the notion of correlation gap. Loosely speaking, for auction environments with matroid constraints, we can relate the performance of a mechanism to the expectation of a monotone submodular function over a random set. This random set corresponds to the winner set for the optimal mechanism, which is highly correlated, and corresponds to a certain demand set for SPMs, which is independent. The notion of correlation gap of Agrawal et al. (SODA10) quantifies how much we "lose" in the expectation of the function by ignoring correlation in the random set, and hence bounds our loss in using a certain SPM instead of the optimal mechanism. Furthermore, the correlation gap of a monotone and submodular function is known to be small, and it follows that a certain SPM can approximate the optimal mechanism by a good constant factor. Exploiting this connection, we give a tight analysis of a greedy-based SPM of Chawla et al. for several environments. In particular, we show that it gives an $e/(e-1)$-approximation for matroid environments, gives asymptotically a $1/(1-1/\sqrt{2\pi k})$-approximation for the important sub-case of $k$-unit auctions, and gives a $(p+1)$-approximation for environments with $p$-independent set system constraints.
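A toy SPM for the $k$-unit sub-case mentioned above: approach buyers in order and offer a fixed posted price until the $k$ units are gone. The price rule below (chosen so that about $k$ buyers clear it in expectation) is our illustrative choice, not the mechanism analyzed in the paper.

```python
# Toy sequential posted-price mechanism for a k-unit auction with i.i.d.
# Uniform[0,1] buyer values; the posted price is an illustrative choice.
import random

def spm_welfare(values, k, price):
    welfare, units = 0.0, k
    for v in values:
        if units > 0 and v >= price:   # buyer accepts the posted price
            welfare += v
            units -= 1
    return welfare

n, k, trials = 50, 5, 20_000
price = 1 - k / n        # ~k of n Uniform[0,1] buyers exceed this price
num = den = 0.0
for _ in range(trials):
    vals = [random.random() for _ in range(n)]
    num += spm_welfare(vals, k, price)
    den += sum(sorted(vals)[-k:])   # optimal welfare: the top k values
print(num / den)  # a constant fraction of optimal, consistent with the k-unit bound
```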
The principal problem in algorithmic mechanism design is in merging the incentive constraints imposed by selfish behavior with the algorithmic constraints imposed by computational intractability. This field is motivated by the observation that the preeminent approach for designing incentive compatible mechanisms, namely that of Vickrey, Clarke, and Groves, and the central approach for circumventing computational obstacles, that of approximation algorithms, are fundamentally incompatible: natural applications of the VCG approach to an approximation algorithm fail to yield an incentive compatible mechanism. We consider relaxing the desideratum of (ex post) incentive compatibility (IC) to Bayesian incentive compatibility (BIC), where truthtelling is a Bayes-Nash equilibrium (the standard notion of incentive compatibility in economics). For welfare maximization in single-parameter agent settings, we give a general black-box reduction that turns any approximation algorithm into a Bayesian incentive compatible mechanism with essentially the same approximation factor.
We study a fundamental problem in social choice theory, the selection of a member of a set of agents based on impartial nominations by agents from that set. Studied previously by Alon et al. [Proceedings of TARK, 2011, pp. 101--110] and by Holzman and Moulin [Econometrica, 81 (2013), pp. 173--196], this problem arises when representatives are selected from within a group or when publishing or funding decisions are made based on a process of peer review. Our main result concerns a randomized mechanism that in expectation selects an agent with at least half the maximum number of nominations. This is best possible subject to impartiality and resolves a conjecture of Alon et al. Further results are given for the case where some agent receives many nominations and the case where each agent casts at least one nomination.
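For intuition about impartiality (though it is not the 1/2-optimal mechanism of this paper), the two-partition mechanism of Alon et al. is easy to state and verify: randomly split the agents into sets A and B, count only nominations cast by members of A for members of B, and select the most-nominated member of B. An agent's own nomination then never influences its chance of selection.

```python
# Two-partition impartial selection: an agent in A cannot win, and an agent
# in B has its outgoing nomination ignored, so no agent's own nomination
# ever affects whether that agent is selected.
import random

def two_partition_select(nominations):
    # nominations[i] is the index of the agent nominated by agent i
    agents = list(range(len(nominations)))
    A = {i for i in agents if random.random() < 0.5}
    B = [i for i in agents if i not in A]
    if not B:
        return None
    score = {j: sum(1 for i in A if nominations[i] == j) for j in B}
    return max(B, key=score.__getitem__)

print(two_partition_select([1, 2, 0, 1, 1]))  # agent 1 is the likely winner when in B
```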