This paper considers two canonical Bayesian mechanism design settings. In the single-item setting, the tight approximation ratio of Anonymous Pricing is obtained: (1) compared to Myerson Auction, Anonymous Pricing always generates at least a 1/2.62-fraction of the revenue; (2) there is a matching lower-bound instance.
We consider a fundamental problem in microeconomics: selling a single item to a number of potential buyers, whose values are drawn from known independent and regular (not necessarily identical) distributions. There are four widely-used and widely-studied mechanisms in the literature: {\sf Myerson Auction}~({\sf OPT}), {\sf Sequential Posted-Pricing}~({\sf SPM}), {\sf Second-Price Auction with Anonymous Reserve}~({\sf AR}), and {\sf Anonymous Pricing}~({\sf AP}). {\sf OPT} is revenue-optimal but complicated, and also experiences several issues in practice such as fairness; {\sf AP} is the simplest mechanism, but also generates the lowest revenue among these four mechanisms; {\sf SPM} and {\sf AR} are of intermediate complexity and revenue. We explore revenue gaps among these mechanisms, each of which is defined as the largest ratio between revenues from a pair of mechanisms. We establish two tight bounds and one improved bound: 1. {\sf SPM} vs.\ {\sf AP}: this ratio studies the power of discrimination in pricing schemes. We obtain the tight ratio of $\mathcal{C^*} \approx 2.62$, closing the gap between $\big[\frac{e}{e - 1}, e\big]$ left before. 2. {\sf AR} vs.\ {\sf AP}: this ratio measures the relative power of auction scheme vs.\ pricing scheme, when no discrimination is allowed. We attain the tight ratio of $\frac{\pi^2}{6} \approx 1.64$, closing the previously known bounds $\big[\frac{e}{e - 1}, e\big]$. 3. {\sf OPT} vs.\ {\sf AR}: this ratio quantifies the power of discrimination in auction schemes, and is previously known to be somewhere between $\big[2, e\big]$. The lower-bound of $2$ was conjectured to be tight by Hartline and Roughgarden (2009) and Alaei et al.\ (2015). We acquire a better lower-bound of $2.15$, and thus disprove this conjecture.
We consider a fundamental problem in microeconomics: selling a single item to a number of potential buyers, whose values are drawn from known independent and regular (not necessarily identical) distributions. There are four widely used and widely studied mechanisms in the literature: Myerson Auction (OPT), Sequential Posted-Pricing (SPM), Second-Price Auction with Anonymous Reserve (AR), and Anonymous Pricing (AP). OPT is revenue-optimal but complicated and also experiences several issues in practice such as fairness; AP is the simplest mechanism but also generates the lowest revenue among these four mechanisms; SPM and AR are of intermediate complexity and revenue. We explore revenue gaps among these mechanisms, each of which is defined as the largest ratio between revenues from a pair of mechanisms. We establish two tight bounds and one tighter bound: 1. SPM vs. AP: this ratio studies the power of discrimination in pricing schemes. We obtain the tight ratio of constant ${\cal{C}}^* \approx {2.62}$, closing the gap between $[\frac{e}{e - 1}, e]$ left before. 2. AR vs. AP: this ratio measures the relative power of auction scheme vs. pricing scheme, when no discrimination is allowed. We attain the tight ratio of $\frac{\pi^2}{6} \approx 1.64$, closing the previously known bounds $[\frac{e}{e - 1}, e]$. 3. OPT vs. AR: this ratio quantifies the power of discrimination in auction schemes and is previously known to be somewhere between [2, e]. The lower bound of 2 was conjectured to be tight by Hartline and Roughgarden [Proceedings of the 10th ACM Conference on Electronic Commerce, 2009, pp. 225--234] and Alaei et al. [Games Econom. Behav., 118 (2019), pp. 494--510]. We acquire a better lower bound of 2.15 and thus disprove this conjecture.
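As a quick arithmetic sanity check (not from the paper), the AR-vs-AP constant π²/6 stated above can be verified to match the claimed approximation and to lie strictly inside the previously known bounds [e/(e − 1), e]:

```python
import math

# AR vs. AP: the claimed tight revenue-gap ratio is pi^2 / 6.
ratio = math.pi ** 2 / 6

print(round(ratio, 2))                         # 1.64
# The previously known bounds were [e / (e - 1), e]:
print(math.e / (math.e - 1) < ratio < math.e)  # True
```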
We consider a fundamental problem in microeconomics: Selling a single item among a number of buyers whose values are drawn from known independent and regular distributions. There are four widely-used and widely-studied mechanisms in this literature: Anonymous Posted-Pricing (AP), Second-Price Auction with Anonymous Reserve (AR), Sequential Posted-Pricing (SPM), and Myerson Auction (OPT). Myerson Auction is optimal but complicated, and also suffers a few issues in practice such as fairness; AP is the simplest mechanism, but its revenue is also the lowest among these four; AR and SPM are of intermediate complexity and revenue. We study the revenue gaps among these four mechanisms, each defined as the largest ratio between revenues from two mechanisms. We establish two tight ratios and one tighter bound: 1. SPM/AP: this ratio studies the power of discrimination in pricing schemes. We obtain the tight ratio of roughly 2.62, closing the previously known bounds [e/(e − 1), e]. 2. AR/AP: this ratio studies the relative power of auction vs. pricing schemes, when no discrimination is allowed. We get the tight ratio of π²/6 ≈ 1.64, closing the previously known bounds [e/(e − 1), e]. 3. OPT/AR: this ratio studies the power of discrimination in auctions. Previously, the revenue gap was known to be in the interval [2, e], and the lower bound of 2 was conjectured to be tight [38, 37, 4]. We disprove this conjecture by obtaining a better lower bound of 2.15.
In this article, we show a tight approximation guarantee for budget-feasible mechanisms with an additive buyer. We propose a new simple randomized mechanism with approximation ratio of 2, improving the previous best known result of 3. Our bound is tight with respect to either the optimal offline benchmark or its fractional relaxation. We also present a simple deterministic mechanism with the tight approximation guarantee of 3 against the fractional optimum, improving the best known result of (2 + √2) for the weaker integral benchmark.
In this paper, we obtain the tight approximation guarantees for budget-feasible mechanisms with an additive buyer. We propose a new simple randomized mechanism with an approximation ratio of $2$, improving the previous best known result of $3$. Our bound is tight with respect to either the optimal offline benchmark or its fractional relaxation. We also present a simple deterministic mechanism with the tight approximation guarantee of $3$ against the fractional optimum, improving the best known result of $(\sqrt2 + 2)$ against the weaker integral benchmark.
Sparse Fourier transform (Sparse FT) is the problem of learning an unknown signal, whose frequency spectrum is dominated by a small number $k$ of individual frequencies, through fast algorithms that use as few samples as possible in the time domain. The last two decades have seen an extensive study of such problems, either in the one-/multi-dimensional discrete setting [Hassanieh, Indyk, Katabi, and Price STOC'12; Kapralov STOC'16] or in the one-dimensional continuous setting [Price and Song FOCS'15]. Despite this rich literature, the most general multi-dimensional continuous case remains mysterious. This paper initiates the study of the Sparse FT problem in the multi-dimensional continuous setting. Our main result is a randomized non-adaptive algorithm that uses sublinear samples and runs in sublinear time. In particular, the sample-duration bound required by our algorithm gives a non-trivial improvement over [Price and Song FOCS'15], which studies the same problem in the one-dimensional continuous setting. The dimensionality in the continuous setting, different from both the discrete cases and the one-dimensional continuous case, turns out to incur many new challenges. To overcome these issues, we develop a number of new techniques for constructing the filter functions, designing the permutation-then-hashing schemes, sampling the Fourier measurements, and locating the frequencies. We believe these techniques can find applications in future studies of the Sparse FT problem.
In this note we prove bounds on the upper and lower probability tails of sums of independent geometric or exponentially distributed random variables. We also prove negative results showing that our established tail bounds are asymptotically tight.
This paper proves the tight sample complexity of {\sf Second-Price Auction with Anonymous Reserve}, up to a logarithmic factor, for every value distribution family studied in the literature: $[0,\, 1]$-bounded, $[1,\, H]$-bounded, regular, and monotone hazard rate (MHR). Remarkably, the setting-specific tight sample complexity $\mathsf{poly}(\varepsilon^{-1})$ depends on the precision $\varepsilon \in (0, 1)$, but not on the number of bidders $n \geq 1$. Further, in the two bounded-support settings, our learning algorithm allows {\em correlated} value distributions. In contrast, the tight sample complexity $\tilde{\Theta}(n) \cdot \mathsf{poly}(\varepsilon^{-1})$ of {\sf Myerson Auction} proved by Guo, Huang and Zhang (STOC~2019) has a nearly-linear dependence on $n \geq 1$, and holds only for {\em independent} value distributions in every setting. We follow a framework similar to that of the Guo-Huang-Zhang work, but replace their information-theoretic arguments with a direct proof.
We prove that the {\sf PoA} of {\sf First Price Auctions} is $1 - 1/e^2 \approx 0.8647$, closing the gap between the best known bounds $[0.7430,\, 0.8689]$.
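A quick numerical check (not from the paper) that the claimed tight PoA value matches its stated approximation and falls inside the previously best known bounds:

```python
import math

# PoA of First Price Auctions: the claimed tight value is 1 - 1/e^2.
poa = 1 - 1 / math.e ** 2

print(round(poa, 4))          # 0.8647
# It lies between the previously best known bounds [0.7430, 0.8689]:
print(0.7430 < poa < 0.8689)  # True
```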
We consider two canonical Bayesian mechanism design settings. In the single-item setting, we prove the tight approximation ratio for anonymous pricing: compared with Myerson Auction, it extracts at least a $\frac{1}{2.62}$-fraction of revenue; there is a matching lower-bound example. In the unit-demand single-buyer setting, we prove the tight approximation ratio between the simplest and optimal deterministic mechanisms: in terms of revenue, uniform pricing admits a $2.62$-approximation of item pricing; we further validate the tightness of this ratio. These results settle two open problems asked in~\cite{H13,CD15,AHNPY15,L17,JLTX18}. As an implication, in the single-item setting, we improve the approximation ratio of the second-price auction with anonymous reserve to $2.62$, which breaks the state-of-the-art upper bound of $e \approx 2.72$.
The ability to resolve detail in the imaged object, termed resolution, is the core parameter of an imaging system. Super-resolution is a class of techniques that can enhance the resolution of an imaging system and even transcend the diffraction limit. Despite huge success in applications, super-resolution is not well understood on the theoretical side, especially in any dimension d ≥ 2. In particular, in order to recover a k-sparse signal, all previous results suffer from poly(k) samples and/or running time. We design robust algorithms for any (constant) dimension under a strong noise model, based on new techniques in Sparse Fourier transform (Sparse FT), such as inverting a robust linear system, "eggshell" sampling schemes, and partition-and-voting methods in high dimension. These algorithms are the first to achieve running time and sample complexity (nearly) linear in the number of source points and logarithmic in bandwidth for any constant dimension, and we believe the techniques developed in this work can find further applications to the Super-resolution and Sparse FT problems. * The full version of the paper can be accessed at https://arxiv.org/abs/2005.06156
In this paper, we show a tight approximation guarantee for budget-feasible mechanisms with an additive buyer. We propose a new simple randomized mechanism with approximation ratio of $2$, improving the previous best known result of $3$. Our bound is tight with respect to either the optimal offline benchmark, or its fractional relaxation. We also present a simple deterministic mechanism with the tight approximation guarantee of $3$ against the fractional optimum, improving the best known result of $(2+ \sqrt{2})$ for the weaker integral benchmark.
We prove that the PoA of First Price Auctions is $1 - 1/e^{2} \approx 0.8647$, closing the gap between the best known bounds [0.7430, 0.8689].
This paper establishes the Price of Stability (PoS) for First Price Auctions, for all equilibrium concepts that have been studied in the literature: Bayesian Nash Equilibrium ⊊ Bayesian Correlated Equilibrium ⊊ Bayesian Coarse Correlated Equilibrium.
• Bayesian Nash Equilibrium: For independent valuations, the tight PoS is 1 − 1/e² ≈ 0.8647, matching the counterpart Price of Anarchy (PoA) bound [JL22]. For correlated valuations, the tight PoS is 1 − 1/e ≈ 0.6321, matching the counterpart PoA bound [ST13, Syr14]. This result indicates that, in the worst cases, efficiency degradation does not depend on the selection among Bayesian Nash Equilibria.
• Bayesian (Coarse) Correlated Equilibrium: For independent or correlated valuations, the tight PoS is always 1 = 100%, i.e., no efficiency degradation. This result indicates that First Price Auctions can be fully efficient under the more general equilibrium concepts.
* The full version of the paper can be accessed at https://arxiv.org/abs/2207.04455
This paper considers Bayesian revenue maximization in the k-unit setting, where a monopolist seller has k copies of an indivisible item and faces n unit-demand buyers (whose value distributions can be non-identical). Four basic mechanisms among others have been widely employed in practice and widely studied in the literature: Myerson Auction, Sequential Posted-Pricing, (k + 1)-th Price Auction with Anonymous Reserve, and Anonymous Pricing. Regarding a pair of mechanisms, we investigate the largest possible ratio between the two revenues (a.k.a. the revenue gap), over all possible value distributions of the buyers. Divide these four mechanisms into two groups: (i) the discriminating mechanism group, Myerson Auction and Sequential Posted-Pricing, and (ii) the anonymous mechanism group, Anonymous Reserve and Anonymous Pricing. Within one group, the involved two mechanisms have an asymptotically tight revenue gap of 1 + Θ(1/√k). In contrast, any two mechanisms from the different groups have an asymptotically tight revenue gap of Θ(log k).
Given a set of $n$ input integers, the Equal Subset Sum problem asks us to find two distinct subsets with the same sum. In this paper we present an algorithm that runs in time $O^*(3^{0.387n})$ in the~average case, significantly improving over the $O^*(3^{0.488n})$ running time of the best known worst-case algorithm and the Meet-in-the-Middle benchmark of $O^*(3^{0.5n})$. Our algorithm generalizes to a number of related problems, such as the ``Generalized Equal Subset Sum'' problem, which asks us to assign a coefficient $c_i$ from a set $C$ to each input number $x_i$ such that $\sum_{i} c_i x_i = 0$. Our algorithm for the average-case version of this problem runs in~time $|C|^{(0.5-c_0/|C|)n}$ for some positive constant $c_0$, whenever $C=\{0, \pm 1, \dots, \pm d\}$ or $\{\pm 1, \dots, \pm d\}$ for some positive integer $d$ (with $O^*(|C|^{0.45n})$ when $|C|<10$). Our results extend to the~problem of finding ``nearly balanced'' solutions in which the target is a not-too-large nonzero offset $\tau$. Our approach relies on new structural results that characterize the probability that $\sum_{i} c_i x_i$ $=\tau$ has a solution $c \in C^n$ when $x_i$'s are chosen randomly; these results may be of independent interest. Our algorithm is inspired by the ``representation technique'' introduced by Howgrave-Graham and Joux. This requires several new ideas to overcome preprocessing hurdles that arise in the representation framework, as well as a novel application of dynamic programming in the solution recovery phase of the algorithm.
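To make the problem statement concrete, here is a brute-force reference implementation of the "Generalized Equal Subset Sum" search described above, on a made-up instance. It is purely illustrative (exponential time), not the paper's algorithm:

```python
from itertools import product

def generalized_equal_subset_sum(xs, coeffs, target=0):
    """Brute-force search for c in C^n with sum(c_i * x_i) == target.

    Exponential-time reference implementation of the problem statement
    only; the paper's algorithm is far faster on average-case inputs.
    """
    for c in product(coeffs, repeat=len(xs)):
        if any(c) and sum(ci * xi for ci, xi in zip(c, xs)) == target:
            return c
    return None

# With C = {0, +1, -1}, a solution encodes the classical Equal Subset
# Sum problem: +1 marks one subset, -1 the other, 0 means unused.
sol = generalized_equal_subset_sum([3, 5, 8, 1], (0, 1, -1))
print(sol)  # e.g. (1, 1, -1, 0), i.e. 3 + 5 = 8
```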
Average-Case Subset Balancing Problems. Xi Chen, Yaonan Jin, Tim Randolph, and Rocco A. Servedio. Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 743–778. DOI: https://doi.org/10.1137/1.9781611977073.33
This paper considers Bayesian revenue maximization in the k-unit setting, where a monopolist seller has k copies of an indivisible item and faces n unit-demand buyers (whose value distributions can be nonidentical). Four basic mechanisms among others have been widely employed in practice and widely studied in the literature: Myerson auction, sequential posted-pricing, (k + 1)-th price auction with anonymous reserve, and anonymous pricing. Regarding a pair of mechanisms, we investigate the largest possible ratio between the two revenues (also known as the revenue gap), over all possible value distributions of the buyers. Divide these four mechanisms into two groups: (i) the discriminating mechanism group, Myerson auction and sequential posted-pricing, and (ii) the anonymous mechanism group, anonymous reserve and anonymous pricing. Within one group, the involved two mechanisms have an asymptotically tight revenue gap of 1 + Θ(1/√k). In contrast, any two mechanisms from the different groups have an asymptotically tight revenue gap of Θ(log k).
How to maximize the revenue of a seller, who intends to sell a single indivisible item to a number of buyers, is a central problem in microeconomics. The Bayesian version of this problem was settled by Myerson. Although illustrating theoretical beauty, Myerson's auction rarely appears in real-world applications, mainly for three reasons: (1) it discriminates different buyers with different value priors; this may incur fairness issues and is not feasible in some markets; (2) it requires the buyers to report their private values rather than make take-it-or-leave-it decisions; this may raise privacy concerns for the buyers; (3) it is an auction scheme rather than a pricing scheme, and therefore incorporates far more communication between the seller and the buyers.
This paper considers Bayesian revenue maximization in the $k$-unit setting, where a monopolist seller has $k$ copies of an indivisible item and faces $n$ unit-demand buyers (whose value distributions can be non-identical). Four basic mechanisms among others have been widely employed in practice and widely studied in the literature: {\sf Myerson Auction}, {\sf Sequential Posted-Pricing}, {\sf $(k + 1)$-th Price Auction with Anonymous Reserve}, and {\sf Anonymous Pricing}. Regarding a pair of mechanisms, we investigate the largest possible ratio between the two revenues (a.k.a.\ the revenue gap), over all possible value distributions of the buyers. Divide these four mechanisms into two groups: (i)~the discriminating mechanism group, {\sf Myerson Auction} and {\sf Sequential Posted-Pricing}, and (ii)~the anonymous mechanism group, {\sf Anonymous Reserve} and {\sf Anonymous Pricing}. Within one group, the involved two mechanisms have an asymptotically tight revenue gap of $1 + \Theta(1 / \sqrt{k})$. In contrast, any two mechanisms from the different groups have an asymptotically tight revenue gap of $\Theta(\log k)$.
We analyze the Fourier growth, i.e. the $L_1$ Fourier weight at level $k$ (denoted $L_{1,k}$), of various well-studied classes of structured $\mathbb{F}_2$-polynomials. This study is motivated by applications in pseudorandomness, in particular recent results and conjectures due to [CHHL19,CHLT19,CGLSS20] which show that upper bounds on Fourier growth (even at level $k=2$) give unconditional pseudorandom generators. Our main structural results on Fourier growth are as follows: - We show that any symmetric degree-$d$ $\mathbb{F}_2$-polynomial $p$ has $L_{1,k}(p) \le \Pr[p=1] \cdot O(d)^k$, and this is tight for any constant $k$. This quadratically strengthens an earlier bound that was implicit in [RSV13]. - We show that any read-$\Delta$ degree-$d$ $\mathbb{F}_2$-polynomial $p$ has $L_{1,k}(p) \le \Pr[p=1] \cdot (k \Delta d)^{O(k)}$. - We establish a composition theorem which gives $L_{1,k}$ bounds on disjoint compositions of functions that are closed under restrictions and admit $L_{1,k}$ bounds. Finally, we apply the above structural results to obtain new unconditional pseudorandom generators and new correlation bounds for various classes of $\mathbb{F}_2$-polynomials.
This paper establishes the Price of Stability (PoS) for First Price Auctions, for all equilibrium concepts that have been studied in the literature: Bayes Nash Equilibrium $\subsetneq$ Bayes Correlated Equilibrium $\subsetneq$ Bayes Coarse Correlated Equilibrium. $\bullet$ Bayes Nash Equilibrium: For independent valuations, the tight PoS is $1 - 1/e^{2} \approx 0.8647$, matching the counterpart Price of Anarchy (PoA) bound \cite{JL22}. For correlated valuations, the tight PoS is $1 - 1/e \approx 0.6321$, matching the counterpart PoA bound \cite{ST13,S14}. This result indicates that, in the worst cases, efficiency degradation does not depend on the selection among Bayes Nash Equilibria. $\bullet$ Bayes (Coarse) Correlated Equilibrium: For independent or correlated valuations, the tight PoS is always $1 = 100\%$, i.e., no efficiency degradation, different from the counterpart PoA bound $1 - 1/e \approx 0.6321$ \cite{ST13,S14}. This result indicates that First Price Auctions can be fully efficient under the more general equilibrium concepts.
We analyze the Fourier growth, i.e. the $L_1$ Fourier weight at level $k$ (denoted $L_{1,k}$), of various well-studied classes of "structured" $\mathbb{F}_2$-polynomials. This study is motivated by applications in pseudorandomness, in particular recent results and conjectures due to [CHHL19,CHLT19,CGLSS20] which show that upper bounds on Fourier growth (even at level $k=2$) give unconditional pseudorandom generators. Our main structural results on Fourier growth are as follows: - We show that any symmetric degree-$d$ $\mathbb{F}_2$-polynomial $p$ has $L_{1,k}(p) \le \Pr[p=1] \cdot O(d)^k$, and this is tight for any constant $k$. This quadratically strengthens an earlier bound that was implicit in [RSV13]. - We show that any read-$\Delta$ degree-$d$ $\mathbb{F}_2$-polynomial $p$ has $L_{1,k}(p) \le \Pr[p=1] \cdot (k \Delta d)^{O(k)}$. - We establish a composition theorem which gives $L_{1,k}$ bounds on disjoint compositions of functions that are closed under restrictions and admit $L_{1,k}$ bounds. Finally, we apply the above structural results to obtain new unconditional pseudorandom generators and new correlation bounds for various classes of $\mathbb{F}_2$-polynomials.
The ability to resolve detail in the imaged object, termed resolution, is the core parameter of an imaging system. Super-resolution is a class of techniques that can enhance the resolution of an imaging system and even transcend the diffraction limit. Despite huge success in applications, super-resolution is not well understood on the theoretical side, especially in any dimension $d \geq 2$. In particular, in order to recover a $k$-sparse signal, all previous results suffer from poly$(k)$ samples and/or running time. We design robust algorithms for any (constant) dimension under a strong noise model, based on new techniques in Sparse Fourier transform (Sparse FT), such as inverting a robust linear system, "eggshell" sampling schemes, and partition-and-voting methods in high dimension. These algorithms are the first to achieve running time and sample complexity (nearly) linear in the number of source points and logarithmic in bandwidth for any constant dimension, and we believe the techniques developed in this work can find further applications to the Super-resolution and Sparse FT problems.
A major goal in the area of exact exponential algorithms is to give an algorithm for the (worst-case) $n$-input Subset Sum problem that runs in time $2^{(1/2 - c)n}$ for some constant $c>0$. In this paper we give a Subset Sum algorithm with worst-case running time $O(2^{n/2} \cdot n^{-\gamma})$ for a constant $\gamma > 0.5023$ in standard word RAM or circuit RAM models. To the best of our knowledge, this is the first improvement on the classical ``meet-in-the-middle'' algorithm for worst-case Subset Sum, due to Horowitz and Sahni, which can be implemented in time $O(2^{n/2})$ in these memory models. Our algorithm combines a number of different techniques, including the ``representation method'' introduced by Howgrave-Graham and Joux and subsequent adaptations of the method in Austrin, Kaski, Koivisto, and Nederlof, and Nederlof and Wegrzycki, and ``bit-packing'' techniques used in the work of Baran, Demaine, and Patrascu on subquadratic algorithms for 3SUM.
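For context, the classical Horowitz–Sahni meet-in-the-middle baseline that this result improves on can be sketched as follows. This is a simplified decision-version sketch (not the paper's algorithm): split the input in half, enumerate all subset sums of each half, and search for a complementary pair.

```python
from bisect import bisect_left

def subset_sum_mitm(xs, target):
    """Decide Subset Sum in O(2^(n/2)) time and space via meet-in-the-middle."""
    def all_sums(half):
        # Enumerate all 2^(n/2) subset sums of one half.
        sums = [0]
        for x in half:
            sums += [s + x for s in sums]
        return sums

    left = all_sums(xs[: len(xs) // 2])
    right = sorted(all_sums(xs[len(xs) // 2 :]))
    # For each left half-sum, binary-search for its complement on the right.
    for s in left:
        i = bisect_left(right, target - s)
        if i < len(right) and right[i] == target - s:
            return True
    return False

print(subset_sum_mitm([7, 3, 9, 2, 8], 14))  # True: 3 + 9 + 2 = 14
print(subset_sum_mitm([7, 3, 9, 2, 8], 1))   # False
```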
A large proportion of the Bayesian mechanism design literature is restricted to the family of regular distributions $\mathbb{F}_{\tt reg}$ [Mye81] or the family of monotone hazard rate (MHR) distributions $\mathbb{F}_{\tt MHR}$ [BMP63], which overshadows this beautiful and well-developed theory. We (re-)introduce two generalizations, the family of quasi-regular distributions $\mathbb{F}_{\tt Q-reg}$ and the family of quasi-MHR distributions $\mathbb{F}_{\tt Q-MHR}$. All four families together form the following hierarchy: $\mathbb{F}_{\tt MHR} \subsetneq (\mathbb{F}_{\tt reg} \cap \mathbb{F}_{\tt Q-MHR}) \subsetneq \mathbb{F}_{\tt Q-reg}$ and $\mathbb{F}_{\tt Q-MHR} \subsetneq (\mathbb{F}_{\tt reg} \cup \mathbb{F}_{\tt Q-MHR}) \subsetneq \mathbb{F}_{\tt Q-reg}$. The significance of our new families is manifold. First, their defining conditions are immediate relaxations of the regularity/MHR conditions (i.e., monotonicity of the virtual value functions and/or the hazard rate functions), which reflect economic intuition. Second, they satisfy natural mathematical properties (about order statistics) that are violated by both original families $\mathbb{F}_{\tt reg}$ and $\mathbb{F}_{\tt MHR}$. Third but foremost, numerous results [BK96, HR09a, CD15, DRY15, HR14, AHN+19, JLTX20, JLQ+19b, FLR19, GHZ19b, JLX23, LM24] established before for regular/MHR distributions now can be generalized, with or even without quantitative losses.
This paper establishes the Price of Stability (PoS) for First Price Auctions, for all equilibrium concepts that have been studied in the literature: Bayesian Nash Equilibrium ⊊ Bayesian Correlated Equilibrium ⊊ Bayesian Coarse Correlated Equilibrium.
• Bayesian Nash Equilibrium: For independent valuations, the tight PoS is 1 − 1/e² ≈ 0.8647, matching the counterpart Price of Anarchy (PoA) bound [JL22]. For correlated valuations, the tight PoS is 1 − 1/e ≈ 0.6321, matching the counterpart PoA bound [ST13, Syr14]. This result indicates that, in the worst cases, efficiency degradation does not depend on the selection among Bayesian Nash Equilibria.
• Bayesian (Coarse) Correlated Equilibrium: For independent or correlated valuations, the tight PoS is always 1 = 100%, i.e., no efficiency degradation. This result indicates that First Price Auctions can be fully efficient under these more general equilibrium concepts.
* The full version of the paper can be accessed at https://arxiv.org/abs/2207.04455
The ability to resolve detail in the object being imaged, known as resolution, is a core parameter of an imaging system. Super-resolution is a class of techniques that can enhance the resolution of an imaging system and even transcend the diffraction limit. Despite huge success in applications, super-resolution is not well understood on the theoretical side, especially in any dimension d ≥ 2. In particular, in order to recover a k-sparse signal, all previous results suffer from poly(k) samples and/or running time. We design robust algorithms for any (constant) dimension under a strong noise model, based on new techniques in Sparse Fourier transform (Sparse FT), such as inverting a robust linear system, "eggshell" sampling schemes, and partition-and-voting methods in high dimension. These algorithms are the first to achieve running time and sample complexity (nearly) linear in the number of source points and logarithmic in bandwidth for any constant dimension, and we believe the techniques developed in this work can find further applications to the Super-resolution and Sparse FT problems.
* The full version of the paper can be accessed at https://arxiv.org/abs/2005.06156
This paper considers Bayesian revenue maximization in the k-unit setting, where a monopolist seller has k copies of an indivisible item and faces n unit-demand buyers (whose value distributions can be non-identical). Four basic mechanisms among others have been widely employed in practice and widely studied in the literature: Myerson Auction, Sequential Posted-Pricing, (k + 1)-th Price Auction with Anonymous Reserve, and Anonymous Pricing. Regarding a pair of mechanisms, we investigate the largest possible ratio between the two revenues (also known as the revenue gap), over all possible value distributions of the buyers. Divide these four mechanisms into two groups: (i) the discriminating mechanism group, Myerson Auction and Sequential Posted-Pricing, and (ii) the anonymous mechanism group, Anonymous Reserve and Anonymous Pricing. Within one group, the two mechanisms involved have an asymptotically tight revenue gap of 1 + Θ(1/√k). In contrast, any two mechanisms from different groups have an asymptotically tight revenue gap of Θ(log k).
Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), Average-Case Subset Balancing Problems, Xi Chen, Yaonan Jin, Tim Randolph, and Rocco A. Servedio, pp. 743–778. DOI: https://doi.org/10.1137/1.9781611977073.33. Published: 2022, eISBN: 978-1-61197-707-3.
Abstract: Given a set of n input integers, the Equal Subset Sum problem asks us to find two distinct subsets with the same sum. In this paper we present an algorithm that runs in time O*(3^{0.387n}) in the average case, significantly improving over the O*(3^{0.488n}) running time of the best known worst-case algorithm [MNPW19] and the Meet-in-the-Middle benchmark of O*(3^{0.5n}). Our algorithm generalizes to a number of related problems, such as the "Generalized Equal Subset Sum" problem, which asks us to assign a coefficient c_i from a set C to each input number x_i such that Σ_i c_i x_i = 0. Our algorithm for the average-case version of this problem runs in time |C|^{(0.5 - c_0/|C|)n} for some positive constant c_0, whenever C = {0, ±1, …, ±d} or {±1, …, ±d} for some positive integer d (with runtime O*(|C|^{0.45n}) when |C| < 10). Our results extend to the problem of finding "nearly balanced" solutions in which the target is a not-too-large nonzero offset τ. Our approach relies on new structural results that characterize the probability that Σ_i c_i x_i = τ has a solution c ∈ C^n when the x_i's are chosen randomly; these results may be of independent interest. Our algorithm is inspired by the "representation technique" introduced by Howgrave-Graham and Joux [HGJ10]. This requires several new ideas to overcome preprocessing hurdles that arise in the representation framework, as well as a novel application of dynamic programming in the solution recovery phase of the algorithm.
We prove that the {\sf PoA} of {\sf First Price Auctions} is $1 - 1/e^2 \approx 0.8647$, closing the gap between the best known bounds $[0.7430,\, 0.8689]$.
This paper establishes the Price of Stability (PoS) for First Price Auctions, for all equilibrium concepts that have been studied in the literature: Bayes Nash Equilibrium $\subsetneq$ Bayes Correlated Equilibrium $\subsetneq$ Bayes Coarse Correlated Equilibrium.
$\bullet$ Bayes Nash Equilibrium: For independent valuations, the tight PoS is $1 - 1/e^{2} \approx 0.8647$, matching the counterpart Price of Anarchy (PoA) bound \cite{JL22}. For correlated valuations, the tight PoS is $1 - 1/e \approx 0.6321$, matching the counterpart PoA bound \cite{ST13,S14}. This result indicates that, in the worst cases, efficiency degradation does not depend on the selection among Bayes Nash Equilibria.
$\bullet$ Bayes (Coarse) Correlated Equilibrium: For independent or correlated valuations, the tight PoS is always $1 = 100\%$, i.e., no efficiency degradation, different from the counterpart PoA bound $1 - 1/e \approx 0.6321$ \cite{ST13,S14}. This result indicates that First Price Auctions can be fully efficient under these more general equilibrium concepts.
We analyze the Fourier growth, i.e. the $L_1$ Fourier weight at level $k$ (denoted $L_{1,k}$), of various well-studied classes of structured $\mathbb{F}_2$-polynomials. This study is motivated by applications in pseudorandomness, in particular recent results and conjectures due to [CHHL19,CHLT19,CGLSS20] which show that upper bounds on Fourier growth (even at level $k=2$) give unconditional pseudorandom generators. Our main structural results on Fourier growth are as follows: - We show that any symmetric degree-$d$ $\mathbb{F}_2$-polynomial $p$ has $L_{1,k}(p) \le \Pr[p=1] \cdot O(d)^k$, and this is tight for any constant $k$. This quadratically strengthens an earlier bound that was implicit in [RSV13]. - We show that any read-$\Delta$ degree-$d$ $\mathbb{F}_2$-polynomial $p$ has $L_{1,k}(p) \le \Pr[p=1] \cdot (k \Delta d)^{O(k)}$. - We establish a composition theorem which gives $L_{1,k}$ bounds on disjoint compositions of functions that are closed under restrictions and admit $L_{1,k}$ bounds. Finally, we apply the above structural results to obtain new unconditional pseudorandom generators and new correlation bounds for various classes of $\mathbb{F}_2$-polynomials.
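To make the quantity $L_{1,k}$ concrete, here is a small brute-force illustration (our own, not from the paper) that computes the level-$k$ Fourier weight of a Boolean function by direct enumeration; the example polynomial $p(x) = x_1 x_2$ is a symmetric degree-2 $\mathbb{F}_2$-polynomial with $\Pr[p=1] = 1/4$:

```python
from itertools import product

def fourier_coeffs(f, n):
    # Fourier coefficients fhat(S) = E_x[f(x) * (-1)^(sum of x_i for i in S)],
    # with subsets S encoded as n-bit masks (S = 0 is the empty set, so
    # fhat(0) = Pr[f = 1] for a 0/1-valued f).
    points = list(product([0, 1], repeat=n))
    coeffs = {}
    for S in range(1 << n):
        total = sum(f(x) * (-1) ** sum(x[i] for i in range(n) if S >> i & 1)
                    for x in points)
        coeffs[S] = total / len(points)
    return coeffs

def level_weight(f, n, k):
    # L_{1,k}(f): total absolute Fourier mass at level exactly k.
    return sum(abs(c) for S, c in fourier_coeffs(f, n).items()
               if bin(S).count("1") == k)

# Example: the symmetric degree-2 polynomial p(x) = x1 * x2 (i.e. AND of two bits).
p = lambda x: x[0] * x[1]
```

For this $p$, the nonempty coefficients are $\pm 1/4$, so $L_{1,1}(p) = 1/2$ and $L_{1,2}(p) = 1/4$, both within the $\Pr[p=1] \cdot O(d)^k$ shape of the symmetric-polynomial bound above.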
This paper considers Bayesian revenue maximization in the k-unit setting, where a monopolist seller has k copies of an indivisible item and faces n unit-demand buyers (whose value distributions can be non-identical). Four basic mechanisms among others have been widely employed in practice and widely studied in the literature: Myerson Auction, Sequential Posted-Pricing, (k + 1)-th Price Auction with Anonymous Reserve, and Anonymous Pricing. Regarding a pair of mechanisms, we investigate the largest possible ratio between the two revenues (a.k.a. the revenue gap), over all possible value distributions of the buyers. Divide these four mechanisms into two groups: (i) the discriminating mechanism group, Myerson Auction and Sequential Posted-Pricing, and (ii) the anonymous mechanism group, Anonymous Reserve and Anonymous Pricing. Within one group, the involved two mechanisms have an asymptotically tight revenue gap of 1 + Θ(1 / √k). In contrast, any two mechanisms from the different groups have an asymptotically tight revenue gap of Θ(log k).
This paper considers Bayesian revenue maximization in the $k$-unit setting, where a monopolist seller has $k$ copies of an indivisible item and faces $n$ unit-demand buyers (whose value distributions can be non-identical). Four basic mechanisms among others have been widely employed in practice and widely studied in the literature: {\sf Myerson Auction}, {\sf Sequential Posted-Pricing}, {\sf $(k + 1)$-th Price Auction with Anonymous Reserve}, and {\sf Anonymous Pricing}. Regarding a pair of mechanisms, we investigate the largest possible ratio between the two revenues (a.k.a.\ the revenue gap), over all possible value distributions of the buyers. Divide these four mechanisms into two groups: (i)~the discriminating mechanism group, {\sf Myerson Auction} and {\sf Sequential Posted-Pricing}, and (ii)~the anonymous mechanism group, {\sf Anonymous Reserve} and {\sf Anonymous Pricing}. Within one group, the involved two mechanisms have an asymptotically tight revenue gap of $1 + \Theta(1 / \sqrt{k})$. In contrast, any two mechanisms from the different groups have an asymptotically tight revenue gap of $\Theta(\log k)$.
Given a set of $n$ input integers, the Equal Subset Sum problem asks us to find two distinct subsets with the same sum. In this paper we present an algorithm that runs in time $O^*(3^{0.387n})$ in the~average case, significantly improving over the $O^*(3^{0.488n})$ running time of the best known worst-case algorithm and the Meet-in-the-Middle benchmark of $O^*(3^{0.5n})$. Our algorithm generalizes to a number of related problems, such as the ``Generalized Equal Subset Sum'' problem, which asks us to assign a coefficient $c_i$ from a set $C$ to each input number $x_i$ such that $\sum_{i} c_i x_i = 0$. Our algorithm for the average-case version of this problem runs in~time $|C|^{(0.5-c_0/|C|)n}$ for some positive constant $c_0$, whenever $C=\{0, \pm 1, \dots, \pm d\}$ or $\{\pm 1, \dots, \pm d\}$ for some positive integer $d$ (with $O^*(|C|^{0.45n})$ when $|C|<10$). Our results extend to the~problem of finding ``nearly balanced'' solutions in which the target is a not-too-large nonzero offset $\tau$. Our approach relies on new structural results that characterize the probability that $\sum_{i} c_i x_i$ $=\tau$ has a solution $c \in C^n$ when $x_i$'s are chosen randomly; these results may be of independent interest. Our algorithm is inspired by the ``representation technique'' introduced by Howgrave-Graham and Joux. This requires several new ideas to overcome preprocessing hurdles that arise in the representation framework, as well as a novel application of dynamic programming in the solution recovery phase of the algorithm.
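As a toy illustration of the ``Generalized Equal Subset Sum'' formulation (an exhaustive $|C|^n$ search of our own devising, not the paper's algorithm), one can look for a coefficient vector $c \in C^n$ with $\sum_i c_i x_i = 0$; with $C = \{0, \pm 1\}$, a nonzero solution is exactly a pair of distinct subsets with equal sums ($c_i = +1$ picks the first subset, $c_i = -1$ the second):

```python
from itertools import product

def generalized_ess(xs, C, target=0):
    """Exhaustive search for coefficients c in C^n with sum(c_i * x_i) = target.
    Skips the all-zero assignment when target == 0, since that solution
    corresponds to the trivial pair of empty subsets."""
    n = len(xs)
    for cs in product(C, repeat=n):
        if target == 0 and all(c == 0 for c in cs):
            continue
        if sum(c * x for c, x in zip(cs, xs)) == target:
            return cs
    return None
```

For instance, on the input [1, 2, 3] with C = {0, +1, -1}, the search finds a vector such as (1, 1, -1), witnessing the equal-sum pair {1, 2} and {3}; the paper's contribution is doing this in time $|C|^{(0.5 - c_0/|C|)n}$ on average rather than $|C|^n$.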
In this article, we show a tight approximation guarantee for budget-feasible mechanisms with an additive buyer. We propose a new simple randomized mechanism with an approximation ratio of 2, improving the previous best known result of 3. Our bound is tight with respect to either the optimal offline benchmark or its fractional relaxation. We also present a simple deterministic mechanism with the tight approximation guarantee of 3 against the fractional optimum, improving the best known result of (2 + √2) for the weaker integral benchmark.
Sparse Fourier transform (Sparse FT) is the problem of learning an unknown signal, whose frequency spectrum is dominated by a small number $k$ of individual frequencies, through fast algorithms that use as few samples as possible in the time domain. The last two decades have seen an extensive study of such problems, either in the one-/multi-dimensional discrete setting [Hassanieh, Indyk, Katabi, and Price STOC'12; Kapralov STOC'16] or in the one-dimensional continuous setting [Price and Song FOCS'15]. Despite this rich literature, the most general multi-dimensional continuous case remains mysterious. This paper initiates the study of the Sparse FT problem in the multi-dimensional continuous setting. Our main result is a randomized non-adaptive algorithm that uses sublinear samples and runs in sublinear time. In particular, the sample duration bound required by our algorithm gives a non-trivial improvement over [Price and Song FOCS'15], which studies the same problem in the one-dimensional continuous setting. The dimensionality in the continuous setting, different from both the discrete cases and the one-dimensional continuous case, turns out to incur many new challenges. To overcome these issues, we develop a number of new techniques for constructing the filter functions, designing the permutation-then-hashing schemes, sampling the Fourier measurements, and locating the frequencies. We believe these techniques can find their applications in future studies of the Sparse FT problem.
We consider a fundamental problem in microeconomics: selling a single item to a number of potential buyers, whose values are drawn from known independent and regular (not necessarily identical) distributions. There are four widely used and widely studied mechanisms in the literature: Myerson Auction (OPT), Sequential Posted-Pricing (SPM), Second-Price Auction with Anonymous Reserve (AR), and Anonymous Pricing (AP). OPT is revenue-optimal but complicated and also experiences several issues in practice such as fairness; AP is the simplest mechanism but also generates the lowest revenue among these four mechanisms; SPM and AR are of intermediate complexity and revenue. We explore revenue gaps among these mechanisms, each of which is defined as the largest ratio between revenues from a pair of mechanisms. We establish two tight bounds and one tighter bound: 1. SPM vs. AP: this ratio studies the power of discrimination in pricing schemes. We obtain the tight ratio of constant ${\cal{C}}^* \approx {2.62}$, closing the gap between $[\frac{e}{e - 1}, e]$ left before. 2. AR vs. AP: this ratio measures the relative power of auction scheme vs. pricing scheme, when no discrimination is allowed. We attain the tight ratio of $\frac{\pi^2}{6} \approx 1.64$, closing the previously known bounds $[\frac{e}{e - 1}, e]$. 3. OPT vs. AR: this ratio quantifies the power of discrimination in auction schemes and is previously known to be somewhere between [2, e]. The lower bound of 2 was conjectured to be tight by Hartline and Roughgarden [Proceedings of the 10th ACM Conference on Electronic Commerce, 2009, pp. 225--234] and Alaei et al. [Games Econom. Behav., 118 (2019), pp. 494--510]. We acquire a better lower bound of 2.15 and thus disprove this conjecture.
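For intuition about these revenue gaps (a toy numeric check of our own, not an instance from the paper), consider two i.i.d. Uniform[0,1] buyers: Myerson Auction is then a second-price auction with reserve 1/2, and the AP-vs-OPT gap on this easy instance is far below the worst-case constant ${\cal{C}}^* \approx 2.62$:

```python
def ap_revenue_two_uniform():
    # Anonymous Pricing: post one price p to both buyers; expected revenue is
    # p * Pr[max(v1, v2) >= p] = p * (1 - p^2) for two iid Uniform[0,1] values.
    # Maximize over a fine grid (the exact maximizer is p = 1/sqrt(3)).
    grid = [i / 10000 for i in range(10001)]
    return max(p * (1 - p * p) for p in grid)

def myerson_revenue_two_uniform(r=0.5):
    # Myerson for two iid Uniform[0,1] buyers = second-price auction with
    # reserve r = 1/2.  If exactly one bid clears r, the winner pays r;
    # if both clear r, the winner pays the smaller bid, whose conditional
    # mean is r + (1 - r) / 3.
    one_above = 2 * r * (1 - r) * r
    both_above = (1 - r) ** 2 * (r + (1 - r) / 3)
    return one_above + both_above
```

Here AP earns $2/(3\sqrt{3}) \approx 0.385$ while OPT earns $5/12 \approx 0.417$, a gap of only about 1.08; the worst-case instances driving the tight bounds above are far more skewed than the uniform distribution.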
In this paper, we obtain the tight approximation guarantees for budget-feasible mechanisms with an additive buyer. We propose a new simple randomized mechanism with an approximation ratio of $2$, improving the previous best known result of $3$. Our bound is tight with respect to either the optimal offline benchmark or its fractional relaxation. We also present a simple deterministic mechanism with the tight approximation guarantee of $3$ against the fractional optimum, improving the best known result of $(\sqrt2 + 2)$ against the weaker integral benchmark.
We consider a fundamental problem in microeconomics: Selling a single item among a number of buyers whose values are drawn from known independent and regular distributions. There are four widely-used and widely-studied mechanisms in this literature: Anonymous Posted-Pricing (AP), Second-Price Auction with Anonymous Reserve (AR), Sequential Posted-Pricing (SPM), and Myerson Auction (OPT). Myerson Auction is optimal but complicated, and also suffers from a few issues in practice such as fairness; AP is the simplest mechanism, but its revenue is also the lowest among these four; AR and SPM are of intermediate complexity and revenue. We study the revenue gaps among these four mechanisms, each defined as the largest ratio between revenues from two mechanisms. We establish two tight ratios and one tighter bound:
1. SPM/AP. This ratio studies the power of discrimination in pricing schemes. We obtain the tight ratio of roughly 2.62, closing the previously known bounds [e/(e − 1), e].
2. AR/AP. This ratio studies the relative power of auction vs. pricing schemes, when no discrimination is allowed. We get the tight ratio of π²/6 ≈ 1.64, closing the previously known bounds [e/(e − 1), e].
3. OPT/AR. This ratio studies the power of discrimination in auctions. Previously, the revenue gap was known to be in the interval [2, e], and the lower bound of 2 was conjectured to be tight [38, 37, 4]. We disprove this conjecture by obtaining a better lower bound of 2.15.
In this paper, we show a tight approximation guarantee for budget-feasible mechanisms with an additive buyer. We propose a new simple randomized mechanism with approximation ratio of $2$, improving the previous best known result of $3$. Our bound is tight with respect to either the optimal offline benchmark, or its fractional relaxation. We also present a simple deterministic mechanism with the tight approximation guarantee of $3$ against the fractional optimum, improving the best known result of $(2+ \sqrt{2})$ for the weaker integral benchmark.
In this note we prove bounds on the upper and lower probability tails of sums of independent geometric or exponentially distributed random variables. We also prove negative results showing that our established tail bounds are asymptotically tight.
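For concreteness (our own illustration, not from the note), the exact tail of a sum of i.i.d. geometric variables can be computed from the negative binomial law, which is the quantity such tail bounds approximate:

```python
from math import comb

def neg_binom_tail(n, p, t):
    """P[S >= t] where S is the sum of n iid Geometric(p) variables on {1, 2, ...}.
    S follows a negative binomial law: P[S = k] = C(k-1, n-1) * p^n * (1-p)^(k-n),
    so the tail is one minus the head of that pmf."""
    if t <= n:
        return 1.0  # S >= n always, since each summand is at least 1
    head = sum(comb(k - 1, n - 1) * p**n * (1 - p) ** (k - n) for k in range(n, t))
    return 1.0 - head
```

For n = 1 this recovers the familiar geometric tail $(1-p)^{t-1}$; plotting `neg_binom_tail` against a candidate exponential-decay bound is a quick sanity check of the kind of tightness the note establishes.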
This paper proves the tight sample complexity of {\sf Second-Price Auction with Anonymous Reserve}, up to a logarithmic factor, for each of the value distribution families studied in the literature: $[0,\, 1]$-bounded, $[1,\, H]$-bounded, regular, and monotone hazard rate (MHR). Remarkably, the setting-specific tight sample complexity $\mathsf{poly}(\varepsilon^{-1})$ depends on the precision $\varepsilon \in (0, 1)$, but not on the number of bidders $n \geq 1$. Further, in the two bounded-support settings, our learning algorithm allows {\em correlated} value distributions. In contrast, the tight sample complexity $\tilde{\Theta}(n) \cdot \mathsf{poly}(\varepsilon^{-1})$ of {\sf Myerson Auction} proved by Guo, Huang and Zhang (STOC~2019) has a nearly-linear dependence on $n \geq 1$, and holds only for {\em independent} value distributions in every setting. We follow a framework similar to that of the Guo-Huang-Zhang work, but replace their information-theoretic arguments with a direct proof.
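A minimal sketch of the sample-based approach (our own illustration; the paper's actual learning algorithm and guarantees are more involved, and the function names and grid discretization here are our own assumptions): estimate the revenue of each candidate reserve on sampled value profiles and pick the empirical maximizer.

```python
import random

def ar_revenue(values, r):
    # Second-Price Auction with Anonymous Reserve r on one value profile:
    # no sale if every value is below r; otherwise the highest bidder wins
    # and pays the larger of r and the second-highest value.
    if max(values) < r:
        return 0.0
    second = sorted(values)[-2] if len(values) > 1 else 0.0
    return max(r, second)

def learn_reserve(samples, grid):
    # Empirical revenue maximization: pick the candidate reserve with the
    # highest average revenue over the sampled profiles.
    return max(grid, key=lambda r: sum(ar_revenue(v, r) for v in samples))

random.seed(0)
samples = [[random.random() for _ in range(3)] for _ in range(2000)]
grid = [i / 100 for i in range(101)]
r_hat = learn_reserve(samples, grid)
```

The paper's sample-complexity question is exactly how many such profiles are needed before the empirically chosen reserve is $\varepsilon$-close to optimal; the result above says the answer is $\mathsf{poly}(\varepsilon^{-1})$, independent of the number of bidders.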
How to maximize the revenue of a seller, who intends to sell a single indivisible item to a number of buyers, is a central problem in microeconomics. The Bayesian version of this problem was settled by Myerson. Despite its theoretical beauty, Myerson's auction rarely appears in real-world applications, mainly for three reasons: (1) it discriminates among buyers with different value priors, which may incur fairness issues and is not feasible in some markets; (2) it requires the buyers to report their private values, rather than to make take-it-or-leave-it decisions, which may raise privacy concerns for the buyers; (3) it is an auction scheme rather than a pricing scheme, and therefore incorporates far more communication between the seller and the buyers.
We consider a fundamental problem in microeconomics: selling a single item to a number of potential buyers, whose values are drawn from known independent and regular (not necessarily identical) distributions. There are four widely-used and widely-studied mechanisms in the literature: {\sf Myerson Auction}~({\sf OPT}), {\sf Sequential Posted-Pricing}~({\sf SPM}), {\sf Second-Price Auction with Anonymous Reserve}~({\sf AR}), and {\sf Anonymous Pricing}~({\sf AP}). {\sf OPT} is revenue-optimal but complicated, and it also suffers from practical issues such as fairness; {\sf AP} is the simplest mechanism, but also generates the lowest revenue among these four mechanisms; {\sf SPM} and {\sf AR} are of intermediate complexity and revenue. We explore the revenue gaps among these mechanisms, each defined as the largest ratio between the revenues of a pair of mechanisms. We establish two tight bounds and one improved bound: 1. {\sf SPM} vs.\ {\sf AP}: this ratio measures the power of discrimination in pricing schemes. We obtain the tight ratio of $\mathcal{C^*} \approx 2.62$, closing the previously known gap of $\big[\frac{e}{e - 1}, e\big]$. 2. {\sf AR} vs.\ {\sf AP}: this ratio measures the relative power of auctions vs.\ pricing schemes when no discrimination is allowed. We attain the tight ratio of $\frac{\pi^2}{6} \approx 1.64$, closing the previously known gap of $\big[\frac{e}{e - 1}, e\big]$. 3. {\sf OPT} vs.\ {\sf AR}: this ratio quantifies the power of discrimination in auction schemes, and was previously known to lie in $\big[2, e\big]$. The lower bound of $2$ was conjectured to be tight by Hartline and Roughgarden (2009) and Alaei et al.\ (2015). We obtain a better lower bound of $2.15$, and thus disprove this conjecture.
We consider two canonical Bayesian mechanism design settings. In the single-item setting, we prove a tight approximation ratio for anonymous pricing: compared with Myerson Auction, it extracts at least a $\frac{1}{2.62}$-fraction of the revenue, and there is a matching lower-bound example. In the unit-demand single-buyer setting, we prove a tight approximation ratio between the simplest and the optimal deterministic mechanisms: in terms of revenue, uniform pricing admits a $2.62$-approximation of item pricing; we further validate the tightness of this ratio. These results settle two open problems posed in~\cite{H13,CD15,AHNPY15,L17,JLTX18}. As an implication, in the single-item setting we improve the approximation ratio of the second-price auction with anonymous reserve to $2.62$, breaking the state-of-the-art upper bound of $e \approx 2.72$.
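The Anonymous Pricing benchmark in these single-item results is simple enough to evaluate numerically: the seller posts one price $p$ to all buyers, and the item sells iff some buyer's value reaches $p$. A minimal sketch of that computation (the uniform distributions and the price grid are illustrative assumptions, not from the paper):

```python
import math

def ap_revenue(cdfs, prices):
    """Anonymous Pricing revenue: post a single price p to every buyer.
    The item sells iff at least one value reaches p, which happens with
    probability 1 - prod_i F_i(p); maximize p * Pr[sale] over a grid."""
    return max(p * (1.0 - math.prod(F(p) for F in cdfs)) for p in prices)

# Illustrative example: two buyers with values uniform on [0, 1].
uniform_cdf = lambda v: min(max(v, 0.0), 1.0)
grid = [i / 1000 for i in range(1, 1001)]
rev = ap_revenue([uniform_cdf, uniform_cdf], grid)
# Optimum is at p = 1/sqrt(3), giving revenue 2/(3*sqrt(3)) ~ 0.385.
```

Myerson Auction would extract more revenue from the same instance; the paper's $2.62$ ratio bounds how large that gap can get over all regular instances.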
We present a general framework for approximately reducing the mechanism design problem for multiple agents to single agent subproblems in the context of Bayesian combinatorial auctions. Our framework can be applied to any setting which roughly satisfies the following assumptions: (i) agents' types are distributed independently (not necessarily identically), (ii) the objective function is additively separable over the agents, and (iii) there are no interagent constraints except for the supply constraints (i.e., that the total allocation of each item should not exceed the supply). Our framework is general in the sense that it makes no direct assumption about agents' valuations, type distributions, or single agent constraints (e.g., budget, incentive compatibility, etc.). We present two generic multiagent mechanisms which use single agent mechanisms as black boxes. If an $\alpha$-approximate single agent mechanism is available for each agent, and assuming no agent ever demands more than $\frac{1}{k}$ of all units of each item, our generic multiagent mechanisms are $\gamma_{k}\alpha$-approximations of the optimal multiagent mechanism, where $\gamma_{k}$ is a constant which is at least $1-\frac{1}{\sqrt{k+3}}$. As a byproduct of our construction, we present a generalization of prophet inequalities where both gambler and prophet are allowed to pick $k$ numbers each to receive a reward equal to their sum. Finally, we use our framework to obtain multiagent mechanisms with improved approximation factors for several settings from the literature.
We study a novel class of mechanism design problems in which the outcomes are constrained by the payments. This basic class of mechanism design problems captures many common economic situations, and yet it has not been studied, to our knowledge, in the past. We focus on the case of procurement auctions in which sellers have private costs, and the auctioneer aims to maximize a utility function on subsets of items, under the constraint that the sum of the payments provided by the mechanism does not exceed a given budget. Standard mechanism design ideas such as the VCG mechanism and its variants are not applicable here. We show that, for general functions, the budget constraint can render mechanisms arbitrarily bad in terms of the utility of the buyer. However, our main result shows that for the important class of submodular functions, a bounded approximation ratio is achievable. Better approximation results are obtained for subclasses of the submodular functions. We explore the space of budget feasible mechanisms in other domains and give a characterization under more restricted conditions.
In this paper we consider a mechanism design problem in the context of large-scale crowdsourcing markets such as Amazon's Mechanical Turk, ClickWorker, and CrowdFlower. In these markets, there is a requester who wants to hire workers to accomplish some tasks. Each worker is assumed to give some utility to the requester on getting hired. Moreover, each worker has a minimum cost that he wants to get paid for getting hired. This minimum cost is assumed to be private information of the workers. The question then is: if the requester has a limited budget, how does one design a direct revelation mechanism that picks the right set of workers to hire in order to maximize the requester's utility? We note that although previous work (Singer (2010), Chen et al. (2011)) has studied this problem, a crucial difference in which we deviate from earlier work is the notion of large-scale markets that we introduce in our model. Without the large market assumption, it is known that no mechanism can achieve a competitive ratio better than 0.414 for deterministic and 0.5 for randomized mechanisms (while the best known deterministic and randomized mechanisms achieve approximation ratios of 0.292 and 0.33, respectively). In this paper, we design a budget-feasible mechanism for large markets that achieves a competitive ratio of 1 - 1/e ≃ 0.63. Our mechanism can be seen as a generalization of an alternate way to look at the proportional share mechanism, which is used in all the previous works so far on this problem. Interestingly, we can also show that our mechanism is optimal by showing that no truthful mechanism can achieve a factor better than 1 - 1/e, thus fully resolving this setting.
Finally, we consider the more general case of submodular utility functions and give new and improved mechanisms for the case when the market is large.
This paper considers two canonical Bayesian mechanism design settings. In the single-item setting, the tight approximation ratio of Anonymous Pricing is obtained: (1) compared to Myerson Auction, Anonymous Pricing always generates at least a 1/2.62-fraction of the revenue; (2) there is a matching lower-bound instance.
We study the design of truthful auctions for selling identical items in unlimited supply (e.g., digital goods) to n unit demand buyers. This classic problem stands out from profit-maximizing auction design literature as it requires no probabilistic assumptions on buyers' valuations and employs the framework of competitive analysis. Our objective is to optimize the worst-case performance of an auction, measured by the ratio between a given benchmark and revenue generated by the auction.
We consider a fundamental problem in microeconomics: selling a single item to a number of potential buyers, whose values are drawn from known independent and regular (not necessarily identical) distributions. There are four widely used and widely studied mechanisms in the literature: Myerson Auction (OPT), Sequential Posted-Pricing (SPM), Second-Price Auction with Anonymous Reserve (AR), and Anonymous Pricing (AP). OPT is revenue-optimal but complicated and also suffers from practical issues such as fairness; AP is the simplest mechanism but also generates the lowest revenue among these four mechanisms; SPM and AR are of intermediate complexity and revenue. We explore the revenue gaps among these mechanisms, each defined as the largest ratio between the revenues of a pair of mechanisms. We establish two tight bounds and one tighter bound: 1. SPM vs. AP: this ratio measures the power of discrimination in pricing schemes. We obtain the tight ratio of the constant ${\cal{C}}^* \approx {2.62}$, closing the previously known gap of $[\frac{e}{e - 1}, e]$. 2. AR vs. AP: this ratio measures the relative power of auctions vs. pricing schemes when no discrimination is allowed. We attain the tight ratio of $\frac{\pi^2}{6} \approx 1.64$, closing the previously known gap of $[\frac{e}{e - 1}, e]$. 3. OPT vs. AR: this ratio quantifies the power of discrimination in auction schemes and was previously known to lie in [2, e]. The lower bound of 2 was conjectured to be tight by Hartline and Roughgarden [Proceedings of the 10th ACM Conference on Electronic Commerce, 2009, pp. 225--234] and Alaei et al. [Games Econom. Behav., 118 (2019), pp. 494--510]. We acquire a better lower bound of 2.15 and thus disprove this conjecture.
We study the fundamental problem of selling a single indivisible item to one of $n$ buyers with independent and potentially nonidentical value distributions. We focus on two simple and widely used selling mechanisms: the second price auction with \emph{eager} personalized reserve prices and the sequential posted price mechanism. Using a new approach, we improve the best-known performance guarantees for these mechanisms. We show that for every value of the number of buyers $n$, the eager second price (ESP) auction and sequential posted price mechanisms respectively earn at least $0.6620$ and $0.6543$ fractions of the optimal revenue. We also provide improved performance guarantees for these mechanisms when the number of buyers is small, which is the more relevant regime for many applications of interest. This in particular implies an improved bound of $0.6543$ for free-order prophet inequalities. Motivated by our improved revenue bounds, we further study the problem of optimizing reserve prices in the ESP auctions when the sorted order of personalized reserve prices among bidders is exogenous. We show that this problem can be solved polynomially. In addition, by analyzing a real auction dataset from Google's advertising exchange, we demonstrate the effectiveness of order-based pricing.
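The eager second-price rule above is easy to state operationally: a bidder is eligible only if her bid clears her own reserve, the highest eligible bid wins, and the winner pays the larger of the second-highest eligible bid and her own reserve. A minimal sketch under that standard description (the bids and reserves in the example are made-up numbers):

```python
def esp_auction(bids, reserves):
    """Eager second-price (ESP) auction with personalized reserves.
    Returns (winner index, payment), or (None, 0.0) if no bidder
    clears her own reserve."""
    eligible = [i for i, b in enumerate(bids) if b >= reserves[i]]
    if not eligible:
        return None, 0.0
    eligible.sort(key=lambda i: bids[i], reverse=True)
    winner = eligible[0]
    runner_up = bids[eligible[1]] if len(eligible) > 1 else 0.0
    # Winner pays the larger of the second-highest eligible bid
    # and her own personalized reserve.
    return winner, max(runner_up, reserves[winner])

# Bidder 1 bids 0.7 but her reserve is 0.8, so she is excluded
# ("eagerly"); bidder 0 wins and pays max(0.4, 0.5) = 0.5.
print(esp_auction([0.9, 0.7, 0.4], [0.5, 0.8, 0.3]))  # → (0, 0.5)
```

The eagerness matters: a lazy variant would first pick the overall highest bidder and only then apply her reserve, which can produce a different winner on the same input.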
Designing revenue optimal auctions for selling an item to $n$ symmetric bidders is a fundamental problem in mechanism design. Myerson (1981) shows that the second price auction with an appropriate reserve price is optimal when bidders' values are drawn i.i.d. from a known regular distribution. A cornerstone in the prior-independent revenue maximization literature is a result by Bulow and Klemperer (1996) showing that the second price auction without a reserve achieves (n-1)/n of the optimal revenue in the worst case. We construct a randomized mechanism that strictly outperforms the second price auction in this setting. Our mechanism inflates the second highest bid with a probability that varies with $n$. For two bidders we improve the performance guarantee from 0.5 to 0.512 of the optimal revenue. We also resolve a question in the design of revenue optimal mechanisms that have access to a single sample from an unknown distribution. We show that a randomized mechanism strictly outperforms all deterministic mechanisms in terms of worst case guarantee.
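The "inflate the second-highest bid" idea can be sketched schematically: run a second-price auction, but with some probability charge an inflated second-highest bid, serving the winner only if her bid clears it. The probability and inflation factor below are illustrative placeholders, not the tuned, $n$-dependent values from the paper:

```python
import random

def inflated_second_price(bids, inflate_prob=0.1, factor=1.5):
    """Schematic inflated second-price auction (assumes >= 2 bids).
    With probability inflate_prob the price becomes factor times the
    second-highest bid; if the winner's bid falls below the inflated
    price, there is no sale. Parameters are illustrative placeholders."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner, price = order[0], bids[order[1]]
    if random.random() < inflate_prob:
        price *= factor
        if bids[winner] < price:
            return None, 0.0  # no sale at the inflated price
    return winner, price
```

Intuitively, the occasional inflated price extracts more revenue when the top two bids are far apart, at the cost of sometimes losing the sale; the paper balances this trade-off to beat the Bulow-Klemperer (n-1)/n guarantee.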
We consider a monopolist seller with n heterogeneous items, facing a single buyer. The buyer has a value for each item drawn independently according to (non-identical) distributions, and his value for a set of items is additive. The seller aims to maximize his revenue. It is known that an optimal mechanism in this setting may be quite complex, requiring randomization [19] and menus of infinite size [15]. Hart and Nisan [17] have initiated a study of two very simple pricing schemes for this setting: item pricing, in which each item is priced at its monopoly reserve; and bundle pricing, in which the entire set of items is priced and sold as one bundle. Hart and Nisan [17] have shown that neither scheme can guarantee more than a vanishingly small fraction of the optimal revenue. In sharp contrast, we show that for any distributions, the better of item and bundle pricing is a constant-factor approximation to the optimal revenue. We further discuss extensions to multiple buyers and to valuations that are correlated across items.
We provide simple and approximately revenue-optimal mechanisms in the multi-item multi-bidder settings. We unify and improve all previous results, as well as generalize the results to broader cases. In particular, we prove that the better of the following two simple, deterministic and Dominant Strategy Incentive Compatible mechanisms, a sequential posted price mechanism or an anonymous sequential posted price mechanism with entry fee, achieves a constant fraction of the optimal revenue among all randomized, Bayesian Incentive Compatible mechanisms, when buyers' valuations are XOS over independent items. If the buyers' valuations are subadditive over independent items, the approximation factor degrades to $O(\log m)$, where $m$ is the number of items. We obtain our results by first extending the Cai-Devanur-Weinberg duality framework to derive an effective benchmark of the optimal revenue for subadditive bidders, and then analyzing this upper bound with new techniques.
We consider the problem of maximizing the expected revenue from selling $k$ homogeneous goods to $n$ unit-demand buyers who arrive sequentially with independent and identically distributed valuations. In this setting the optimal posted prices are dynamic in the sense that they depend on the remaining numbers of goods and buyers. We investigate how much revenue is lost when a single static price is used instead for all buyers and goods, and prove upper bounds on the ratio between the maximum revenue from dynamic prices and that from static prices. These bounds are tight for all values of $k$ and $n$ and vary depending on a regularity property of the underlying distribution. For general distributions we obtain a ratio of $2-k/n$, for regular distributions a ratio that increases in $n$ and is bounded from above by $1/(1-k^k/(e^{k}k!))$, which is roughly $1/(1-1/(\sqrt{2\pi k}))$. The lower bounds hold for the revenue gap between dynamic and static prices. The upper bounds are obtained via an ex-ante relaxation of the revenue maximization problem; as a consequence, the tight bounds of $2-k/n$ in the general case and of $1/(1-1/(\sqrt{2\pi k}))$ in the regular case apply also to the potentially larger revenue gap between the optimal incentive-compatible mechanism and the optimal static price. Our results imply that for regular distributions the benefit of dynamic prices vanishes while for non-regular distributions dynamic prices may achieve up to twice the revenue of static prices.
For revenue and welfare maximization in single-dimensional Bayesian settings, Chawla et al. (STOC10) recently showed that sequential posted-price mechanisms (SPMs), though simple in form, can perform surprisingly well compared to the optimal mechanisms. In this paper, we give a theoretical explanation of this fact, based on a connection to the notion of correlation gap. Loosely speaking, for auction environments with matroid constraints, we can relate the performance of a mechanism to the expectation of a monotone submodular function over a random set. This random set corresponds to the winner set for the optimal mechanism, which is highly correlated, and corresponds to certain demand set for SPMs, which is independent. The notion of correlation gap of Agrawal et al.\ (SODA10) quantifies how much we "lose" in the expectation of the function by ignoring correlation in the random set, and hence bounds our loss in using certain SPM instead of the optimal mechanism. Furthermore, the correlation gap of a monotone and submodular function is known to be small, and it follows that certain SPM can approximate the optimal mechanism by a good constant factor. Exploiting this connection, we give tight analysis of a greedy-based SPM of Chawla et al.\ for several environments. In particular, we show that it gives an $e/(e-1)$-approximation for matroid environments, gives asymptotically a $1/(1-1/\sqrt{2\pi k})$-approximation for the important sub-case of $k$-unit auctions, and gives a $(p+1)$-approximation for environments with $p$-independent set system constraints.
We consider the problem of budget feasible mechanism design proposed by Singer, but in a Bayesian setting. A principal has a public value for hiring a subset of the agents and a budget, while the agents have private costs for being hired. We consider both additive and submodular value functions of the principal. We show that there are simple, practical, ex post budget balanced posted pricing mechanisms that approximate the value obtained by the Bayesian optimal mechanism that is budget balanced only in expectation. A main motivating application for this work is crowdsourcing, e.g., on Mechanical Turk, where workers are drawn from a large population and posted pricing is standard. Our analysis methods relate to contention resolution schemes in submodular optimization of Vondrák et al. and the correlation gap analysis of Yan.
Budget feasible mechanisms, recently initiated by Singer (FOCS 2010), extend algorithmic mechanism design problems to a realistic setting with a budget constraint. We consider the problem of designing truthful budget feasible mechanisms for monotone submodular functions: We give a randomized mechanism with an approximation ratio of 7.91 (improving on the previous best-known result 233.83), and a deterministic mechanism with an approximation ratio of 8.34. We also study the knapsack problem, which is a special submodular function, give a (2 + √2)-approximation deterministic mechanism (improving on the previous best-known result 5), and a 3-approximation randomized mechanism. We provide similar results for an extended knapsack problem with heterogeneous items, where items are divided into groups and one can pick at most one item from each group. Finally we show a lower bound of 1 + √2 for the approximation ratio of deterministic mechanisms and 2 for randomized mechanisms for knapsack, as well as the general monotone submodular functions. Our lower bounds are unconditional, and do not rely on any computational or complexity assumptions.
The secretary and the prophet inequality problems are central to the field of Stopping Theory. Recently, there has been a lot of work in generalizing these models to multiple items because of their applications in mechanism design. The most important of these generalizations are to matroids and to combinatorial auctions (extends bipartite matching). Kleinberg-Weinberg [33] and Feldman et al. [17] show that for adversarial arrival order of random variables the optimal prophet inequalities give a 1/2-approximation. For many settings, however, it's conceivable that the arrival order is chosen uniformly at random, akin to the secretary problem. For such a random arrival model, we improve upon the 1/2-approximation and obtain (1 - 1/e)-approximation prophet inequalities for both matroids and combinatorial auctions. This also gives improvements to the results of Yan [45] and Esfandiari et al. [15] who worked in the special cases where we can fully control the arrival order or when there is only a single item. Our techniques are threshold based. We convert our discrete problem into a continuous setting and then give a generic template on how to dynamically adjust these thresholds to lower bound the expected total welfare.
The intuition that profit is optimized by maximizing marginal revenue is a guiding principle in microeconomics. In the classical auction theory for agents with quasi-linear utility and single-dimensional preferences, BR89 show that the optimal auction of M81 is in fact optimizing marginal revenue. In particular, Myerson's virtual values are exactly the derivative of an appropriate revenue curve. This paper considers mechanism design in environments where the agents have multi-dimensional and non-linear preferences. Understanding good auctions for these environments is considered to be the main challenge in Bayesian optimal mechanism design. In these environments maximizing marginal revenue may not be optimal, and furthermore, there is sometimes no direct way to implement the marginal revenue maximization mechanism. Our contributions are threefold: we characterize the settings for which marginal revenue maximization is optimal (by identifying an important condition that we call revenue linearity), we give simple procedures for implementing marginal revenue maximization in general, and we show that marginal revenue maximization is approximately optimal. Our approximation factor smoothly degrades in a term that quantifies how far the environment is from an ideal one (i.e., where marginal revenue maximization is optimal). Because the marginal revenue mechanism is optimal for well-studied single-dimensional agents, our generalization immediately extends many approximation results for single-dimensional agents to more general preferences.
Finally, one of the biggest open questions in Bayesian algorithmic mechanism design is developing methodologies that are not brute-force in the size of the agent type space (usually exponential in the dimension for multi-dimensional agents). Our methods identify a subproblem that, e.g., for unit-demand agents with values drawn from product distributions, enables approximation mechanisms that are polynomial in the dimension.
For selling a single item to agents with independent but non-identically distributed values, the revenue optimal auction is complex. With respect to it, Hartline and Roughgarden (2009) showed that the approximation factor of the second-price auction with an anonymous reserve is between two and four. We consider the more demanding problem of approximating the revenue of the ex ante relaxation of the auction problem by posting an anonymous price (while supplies last) and prove that their worst-case ratio is $e$. As a corollary, the upper bound of anonymous pricing or anonymous reserves versus the optimal auction improves from four to $e$. We conclude that, up to an $e$ factor, discrimination and simultaneity are unimportant for driving revenue in single-item auctions.
Hill and Kertz studied the prophet inequality on iid distributions [The Annals of Probability 1982]. They proved a theoretical bound of 1 - 1/e on the approximation factor of their algorithm. They conjectured that the best approximation factor for arbitrarily large n is 1/(1+1/e) ≃ 0.731. This conjecture had remained open for over 30 years prior to this paper. In this paper we present a threshold-based algorithm for the prophet inequality with n iid distributions. Using a nontrivial and novel approach, we show that our algorithm is a 0.738-approximation algorithm. By beating the bound of 1/(1+1/e), this refutes the conjecture of Hill and Kertz. Moreover, we generalize our results to non-uniform distributions and discuss applications in mechanism design.
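The threshold-based approach is easy to illustrate in its simplest form: stop at the first sample exceeding a fixed threshold. The paper's 0.738 guarantee uses a carefully chosen decreasing sequence of thresholds; the single quantile threshold and U[0,1] Monte Carlo below are only an illustrative simplification:

```python
import random

def threshold_stop(samples, tau):
    """Single-threshold stopping rule: accept the first sample that
    reaches tau; if none does, fall back to the last sample."""
    for x in samples:
        if x >= tau:
            return x
    return samples[-1]

# Monte Carlo comparison against the prophet (who always takes the
# max) for n iid U[0,1] samples; tau is the (1 - 1/n)-quantile.
random.seed(1)
n, trials, tau = 5, 4000, 1 - 1 / 5
gambler = prophet = 0.0
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    gambler += threshold_stop(xs, tau)
    prophet += max(xs)
ratio = gambler / prophet   # well above the worst-case guarantees
```

Uniform samples are a favorable instance, so the empirical ratio here lands well above 0.738; the worst-case guarantee is over all iid distributions.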
In the classic prophet inequality, a problem in optimal stopping theory, samples from independent random variables (possibly differently distributed) arrive online. A gambler that knows the distributions, but cannot see the future, must decide at each point in time whether to stop and pick the current sample or to continue and lose that sample forever. The goal of the gambler is to maximize the expected value of what she picks, and the performance measure is the worst case ratio between the expected value the gambler gets and what a prophet, who sees all the realizations in advance, gets. In the late seventies, Krengel and Sucheston, and Garling [16] established that this worst case ratio is a constant and that 1/2 is the best possible such constant. In the last decade the theory of prophet inequalities has resurged as an important problem due to its connections to posted price mechanisms, frequently used in online sales. A particularly interesting variant is the so-called Prophet Secretary problem, in which the only difference is that the samples arrive in a uniformly random order. For this variant several algorithms are known to achieve a constant of 1 - 1/e, and very recently this barrier was slightly improved by Azar et al. [3]. In this paper we derive a way of analyzing multi-threshold strategies that basically sets a nonincreasing sequence of thresholds to be applied at different times. The gambler will thus stop the first time a sample surpasses the corresponding threshold. Specifically we consider a class of very robust strategies that we call blind quantile strategies.
These constitute a clever generalization of single threshold strategies and consist in fixing a function which is used to define a sequence of thresholds once the instance is revealed. Our main result shows that these strategies can achieve a constant of 0.669 in the Prophet Secretary problem, improving upon the best known result of Azar et al. [3], and even upon that of Beyhaghi et al. [4], which works in the case where the gambler can select the order of the samples. The crux of the analysis is a very precise analysis of the underlying stopping time distribution for the gambler's strategy that is inspired by the theory of Schur-convex functions. We further prove that our family of blind strategies cannot lead to a constant better than 0.675. Finally, we prove that no nonadaptive algorithm for the gambler can achieve a constant better than 0.732, which also improves upon a recent result of Azar et al. [3]. Here, a nonadaptive algorithm is an algorithm whose decision to stop can depend on the index of the random variable being sampled, on the value sampled, and on the time, but not on the history that has been observed.
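The multi-threshold idea behind these strategies can be sketched directly: the gambler applies a nonincreasing sequence of thresholds, becoming less picky as fewer samples remain. The linearly decreasing thresholds below are an illustrative placeholder, not the paper's optimized blind quantile function:

```python
import random

def multi_threshold_stop(samples, thresholds):
    """Multi-threshold stopping rule: at step t, accept the current
    sample iff it reaches thresholds[t]; with a nonincreasing
    sequence the gambler relaxes over time. Returns 0.0 if every
    sample is rejected."""
    for x, tau in zip(samples, thresholds):
        if x >= tau:
            return x
    return 0.0

# Illustrative run: 5 iid U[0,1] samples in uniformly random order,
# with a simple linearly decreasing threshold sequence.
random.seed(2)
n, trials = 5, 4000
taus = [0.8 - 0.15 * t for t in range(n)]   # 0.80, 0.65, ..., 0.20
total = 0.0
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    random.shuffle(xs)   # random arrival order, as in Prophet Secretary
    total += multi_threshold_stop(xs, taus)
avg = total / trials
```

The paper's contribution is precisely in how the threshold function is chosen and analyzed (via the stopping time distribution); this sketch only shows the mechanics of applying such a sequence.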
We provide algorithms that learn simple auctions whose revenue is approximately optimal in multi-item multi-bidder settings, for a wide range of bidder valuations including unit-demand, additive, constrained additive, XOS, and subadditive. We obtain our learning results in two settings. The first is the commonly studied setting where sample access to the bidders' distributions over valuations is given, for both regular distributions and arbitrary distributions with bounded support. Here, our algorithms require polynomially many samples in the number of items and bidders. The second is a more general max-min learning setting that we introduce, where we are given "approximate distributions," and we seek to compute a mechanism whose revenue is approximately optimal simultaneously for all "true distributions" that are close to the ones we were given. These results are more general in that they imply the sample-based results, and are also applicable in settings where we have no sample access to the underlying distributions but have estimated them indirectly via market research or by observation of bidder behavior in previously run, potentially non-truthful auctions. All our results hold for valuation distributions satisfying the standard (and necessary) independence-across-items property. They also generalize and improve upon recent works of Goldner and Karlin [25] and Morgenstern and Roughgarden [32], which have provided algorithms that learn approximately optimal multi-item mechanisms in more restricted settings with additive, subadditive and unit-demand valuations using sample access to distributions. We generalize these results to the complete unit-demand, additive, and XOS setting, to i.i.d. subadditive bidders, and to the max-min setting.
Our results are enabled by new uniform convergence bounds for hypotheses classes under product measures. Our bounds result in exponential savings in sample complexity compared to bounds derived by bounding the VC dimension and are of independent interest.
We consider a fundamental problem in microeconomics: selling a single item to a number of potential buyers, whose values are drawn from known independent and regular (not necessarily identical) distributions. There are four widely-used and widely-studied mechanisms in the literature: {\sf Myerson Auction}~({\sf OPT}), {\sf Sequential Posted-Pricing}~({\sf SPM}), {\sf Second-Price Auction with Anonymous Reserve}~({\sf AR}), and {\sf Anonymous Pricing}~({\sf AP}). {\sf OPT} is revenue-optimal but complicated, and it also suffers from practical issues such as fairness; {\sf AP} is the simplest mechanism, but also generates the lowest revenue among these four mechanisms; {\sf SPM} and {\sf AR} are of intermediate complexity and revenue. We explore the revenue gaps among these mechanisms, each of which is defined as the largest ratio between the revenues of a pair of mechanisms. We establish two tight bounds and one improved bound: 1. {\sf SPM} vs.\ {\sf AP}: this ratio studies the power of discrimination in pricing schemes. We obtain the tight ratio of $\mathcal{C^*} \approx 2.62$, closing the previously known gap of $\big[\frac{e}{e - 1}, e\big]$. 2. {\sf AR} vs.\ {\sf AP}: this ratio measures the relative power of auction vs.\ pricing schemes when no discrimination is allowed. We attain the tight ratio of $\frac{\pi^2}{6} \approx 1.64$, again closing the previously known gap of $\big[\frac{e}{e - 1}, e\big]$. 3. {\sf OPT} vs.\ {\sf AR}: this ratio quantifies the power of discrimination in auction schemes, and was previously known to lie in the interval $\big[2, e\big]$. The lower bound of $2$ was conjectured to be tight by Hartline and Roughgarden (2009) and Alaei et al.\ (2015). We obtain an improved lower bound of $2.15$, and thus disprove this conjecture.
We study truthful mechanisms for hiring a team of agents in three classes of set systems: Vertex Cover auctions, k-flow auctions, and cut auctions. For Vertex Cover auctions, the vertices are owned by selfish and rational agents, and the auctioneer wants to purchase a vertex cover from them. For k-flow auctions, the edges are owned by the agents, and the auctioneer wants to purchase k edge-disjoint s-t paths, for given s and t. In the same setting, for cut auctions, the auctioneer wants to purchase an s-t cut. Only the agents know their costs, and the auctioneer needs to select a feasible set and payments based on bids made by the agents. We present constant-competitive truthful mechanisms for all three set systems. That is, the maximum overpayment of the mechanism is within a constant factor of the maximum overpayment of any truthful mechanism, for every set system in the class. The mechanism for Vertex Cover is based on scaling each bid by a multiplier derived from the dominant eigenvector of a certain matrix. The mechanism for k-flows prunes the graph to be minimally (k + 1)-connected, and then applies the Vertex Cover mechanism. Similarly, the mechanism for cuts contracts the graph until all s-t paths have length exactly 2, and then applies the Vertex Cover mechanism.
We propose a uniform approach for the design and analysis of prior-free competitive auctions and online auctions. Our philosophy is to view the benchmark function as a variable parameter of the model and study a broad class of functions instead of an individual target benchmark. We consider a multitude of well-studied auction settings, and improve upon a few previous results. Multi-unit auctions: given a β-competitive unlimited-supply auction, the best previously known multi-unit auction is 2β-competitive. We design a (1 + β)-competitive auction, reducing the ratio from 4.84 to 3.24. These results carry over to matroid and position auctions. General downward-closed environments: we design a 6.5-competitive auction, improving upon the previous ratio of 7.5. Our auction is noticeably simpler than the previous best one. Unlimited-supply online auctions: our analysis yields an auction with a competitive ratio of 4.12, which significantly narrows the margin of [4, 4.84] previously known for this problem.
Algorithmic pricing is the computational problem that sellers (e.g., in supermarkets) face when trying to set prices for their items to maximize their profit in the presence of a known demand. Guruswami et al. (SODA, 2005) proposed this problem and gave logarithmic approximations (in the number of consumers) for the unit-demand and single-parameter cases, where there is a specific set of consumers and their valuations for bundles are known precisely. Subsequently, several versions of the problem have been shown to have poly-logarithmic inapproximability. This problem has direct ties to the important open question of better understanding the Bayesian optimal mechanism in multi-parameter agent settings; however, for this purpose approximation factors logarithmic in the number of agents are inadequate. It is therefore of vital interest to consider special cases where constant approximations are possible. We consider the unit-demand variant of this pricing problem. Here a consumer has a valuation for each different item, and their value for a set of items is simply the maximum value they have for any item in the set. Instead of considering a set of consumers with precisely known preferences, like the prior algorithmic pricing literature, we assume that the preferences of the consumers are drawn from a distribution. This is the standard assumption in economics; furthermore, the setting of a specific set of customers with specific preferences, which is employed in all of the prior work in algorithmic pricing, is a special case of this general Bayesian pricing problem, where there is a discrete Bayesian distribution for preferences specified by picking one consumer uniformly from the given set of consumers.
Notice that the distribution over the valuations for the individual items that this generates is obviously correlated. Our work complements these existing works by considering the case where the consumer's valuations for the different items are independent random variables. Our main result is a constant approximation algorithm for this problem that makes use of an interesting connection between this problem and the concept of virtual valuations from the single-parameter Bayesian optimal mechanism design literature.
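The virtual-valuation concept this abstract invokes (Myerson 1981) can be made concrete with a small worked example; this is a hedged illustration of the underlying definition, not the paper's constant-approximation algorithm. For a regular distribution $F$ with density $f$, the virtual value is $\phi(v) = v - (1 - F(v))/f(v)$, and the revenue-optimal single price is the value where $\phi$ crosses zero.

```python
from math import exp

def virtual_value(v, F, f):
    """Myerson's virtual valuation phi(v) = v - (1 - F(v)) / f(v)."""
    return v - (1.0 - F(v)) / f(v)

def monopoly_price(F, f, lo=0.0, hi=100.0, iters=100):
    """Bisect for the root of phi, assuming phi is increasing (regularity)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if virtual_value(mid, F, f) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Exponential(1): F(v) = 1 - e^{-v}, f(v) = e^{-v}, so phi(v) = v - 1,
# and the monopoly (revenue-optimal) price is exactly 1.
p = monopoly_price(lambda v: 1 - exp(-v), lambda v: exp(-v))
print(round(p, 6))   # -> 1.0
```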
We study the design of truthful mechanisms for set systems, i.e., scenarios where a customer needs to hire a team of agents to perform a complex task. In this setting, frugality [Archer & Tardos '02] provides a measure to evaluate the "cost of truthfulness", that is, the overpayment of a truthful mechanism relative to the "fair" payment. We propose a uniform scheme for designing frugal truthful mechanisms for general set systems. Our scheme is based on scaling the agents' bids using the eigenvector of a matrix that encodes the interdependencies between the agents. We demonstrate that the r-out-of-k-system mechanism and the √-mechanism for buying a path in a graph [Karlin et al. '05] can be viewed as instantiations of our scheme. We then apply our scheme to two other classes of set systems, namely, vertex cover systems and k-path systems, in which a customer needs to purchase k edge-disjoint source-sink paths. For both settings, we bound the frugality of our mechanism in terms of the largest eigenvalue of the respective interdependency matrix. We show that our mechanism is optimal for a large subclass of vertex cover systems satisfying a simple local sparsity condition. For k-path systems, while our mechanism is within a factor of k + 1 from optimal, we show that it is, in fact, optimal when one uses a modified definition of frugality proposed in [Elkind et al. '07]. Our lower-bound argument combines spectral techniques and Young's inequality, and is applicable to all set systems. As both r-out-of-k systems and single-path systems can be viewed as special cases of k-path systems, our result improves the lower bounds of [Karlin et al. '05] and answers several open questions posed in that paper.
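Both this abstract and the vertex-cover auction above rely on the same computational step: multiply each agent's bid by the corresponding entry of the dominant eigenvector of an interdependency matrix. The 3×3 matrix below is purely hypothetical (the papers derive the matrix from the set system itself); power iteration is a standard way to compute the Perron eigenvector of such a symmetric nonnegative matrix.

```python
def dominant_eigenvector(M, iters=1000):
    """Power iteration, normalized so the largest entry is 1."""
    n = len(M)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    return v

# Hypothetical interdependency matrix for three agents (symmetric,
# nonnegative, with unique dominant eigenvalue 2 + sqrt(2)).
M = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 1.0],
     [0.0, 1.0, 2.0]]

multipliers = dominant_eigenvector(M)
print([round(x, 4) for x in multipliers])   # -> [0.7071, 1.0, 0.7071]
```

The eigenvector entries, not the raw bids, determine how much each agent's bid is scaled before the mechanism selects the cheapest feasible set.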
In the partition problem we seek to partition a list of numbers into two sublists to minimize the difference between the sums of the two sublists. For this and the related subset sum problem, under suitable assumptions on the probability distributions of the input, it is known that the median of the optimum difference is exponentially small. In this paper we show (again, under suitable assumptions on the distribution) that the expectation of the difference is also exponentially small. © 1998 John Wiley & Sons, Inc. Random Struct. Alg., 12, 51–62, 1998
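The exponentially small optimum difference is easy to observe empirically. The sketch below (i.i.d. Uniform[0,1] inputs, an assumption chosen here for illustration) computes the exact optimum discrepancy by enumerating all achievable subset sums.

```python
import random

random.seed(1)

def optimal_difference(xs):
    """Exact optimum |sum(A) - sum(B)| over all 2-partitions of xs."""
    total = sum(xs)
    sums = {0.0}
    for x in xs:
        sums |= {s + x for s in sums}     # all achievable subset sums
    return min(abs(total - 2 * s) for s in sums)

avg = {}
for n in (8, 12, 16):
    diffs = [optimal_difference([random.random() for _ in range(n)])
             for _ in range(20)]
    avg[n] = sum(diffs) / len(diffs)
    print(n, avg[n])   # the average optimum shrinks rapidly with n
```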
Consider a gambler who observes a sequence of independent, non-negative random numbers and is allowed to stop the sequence at any time, claiming a reward equal to the most recent observation. The famous prophet inequality of Krengel, Sucheston, and Garling asserts that a gambler who knows the distribution of each random variable can achieve at least half as much reward, in expectation, as a "prophet" who knows the sampled values of each random variable and can choose the largest one. We generalize this result to the setting in which the gambler and the prophet are allowed to make more than one selection, subject to a matroid constraint. We show that the gambler can still achieve at least half as much reward as the prophet; this result is the best possible, since it is known that the ratio cannot be improved even in the original prophet inequality, which corresponds to the special case of rank-one matroids. Generalizing the result still further, we show that under an intersection of $p$ matroid constraints, the prophet's reward exceeds the gambler's by a factor of at most $O(p)$, and this factor is also tight.
Contention resolution schemes have proven to be an incredibly powerful concept which allows tackling a broad class of problems. The framework was initially designed to handle submodular optimization under various types of constraints, that is, intersections of exchange systems (including matroids), knapsacks, and unsplittable flows on trees. Later on, it turned out that this framework extends perfectly to optimization under uncertainty, such as stochastic probing and online selection problems, which in turn can be applied to mechanism design. We add to this line of work by showing how to create contention resolution schemes for intersections of matroids and knapsacks in the random-order setting. More precisely, we know the whole universe of elements in advance, but they appear in an order given by a random permutation. Upon arrival we need to irrevocably decide whether to take an element or not. We introduce a novel technique for analyzing procedures in the random-order setting that is based on martingale theory. This unified approach makes it easier to combine constraints, and we do not need to rely on the monotonicity of contention resolution schemes, as was the case before. Our paper fills gaps, extends, and creates connections between many previous results and techniques. The main application of our framework is a k + 4 + ε approximation ratio for Bayesian multi-parameter unit-demand mechanism design under the constraint of an intersection of k matroids, which improves upon the previous bounds of 4k − 2 and e(k + 1).
Other results include improved approximation ratios for stochastic k-set packing and submodular stochastic probing over arbitrary nonnegative submodular objective function, whereas previous results required the objective to be monotone.
Implicitly defined (and easily approximated) universal constants $1.1 < a_n < 1.6, n = 2,3, \cdots$, are found so that if $X_1, X_2, \cdots$ are i.i.d. non-negative random variables and … Implicitly defined (and easily approximated) universal constants $1.1 < a_n < 1.6, n = 2,3, \cdots$, are found so that if $X_1, X_2, \cdots$ are i.i.d. non-negative random variables and if $T_n$ is the set of stop rules for $X_1, \cdots, X_n$, then $E(\max\{X_1, \cdots, X_n\}) \leq a_n \sup\{EX_t: t \in T_n\}$, and the bound $a_n$ is best possible. Similar universal constants $0 < b_n < \frac{1}{4}$ are found so that if the $\{X_i\}$ are i.i.d. random variables taking values only in $\lbrack a, b\rbrack$, then $E(\max\{X_1, \cdots, X_n\}) \leq \sup\{EX_t: t \in T_n\} + b_n(b - a)$, where again the bound $b_n$ is best possible. In both situations, extremal distributions for which equality is attained (or nearly attained) are given in implicit form.
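The inequality $E(\max\{X_1,\cdots,X_n\}) \leq a_n \sup\{EX_t : t \in T_n\}$ can be checked concretely. The sketch below uses i.i.d. Uniform[0,1] variables (an illustrative choice only; the extremal distributions attaining $a_n$ are given implicitly in the paper): the optimal-stopping value comes from backward induction, and $E[\max]$ of $n$ uniforms is $n/(n+1)$.

```python
def stop_value(n):
    """Optimal-stopping value by backward induction: with continuation
    value v, E[max(X, v)] = (1 + v^2) / 2 for X ~ Uniform[0,1]."""
    v = 0.0
    for _ in range(n):
        v = (1 + v * v) / 2
    return v

ratios = []
for n in (2, 5, 20):
    e_max = n / (n + 1)               # E[max of n i.i.d. Uniform[0,1]]
    ratios.append(e_max / stop_value(n))
    print(n, round(ratios[-1], 4))    # each ratio stays inside (1, 1.6)
```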
On the Approximability of Budget Feasible Mechanisms. Ning Chen, Nick Gravin, and Pinyan Lu. Proceedings of the 2011 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 685–699. DOI: https://doi.org/10.1137/1.9781611973082.54. Abstract: Budget feasible mechanisms, recently initiated by Singer (FOCS 2010), extend algorithmic mechanism design problems to a realistic setting with a budget constraint. We consider the problem of designing truthful budget feasible mechanisms for monotone submodular functions: we give a randomized mechanism with an approximation ratio of 7.91 (improving on the previous best-known result of 233.83), and a deterministic mechanism with an approximation ratio of 8.34. We also study the knapsack problem, which is a special submodular function, giving a 2 + √2-approximation deterministic mechanism (improving on the previous best-known result of 5) and a 3-approximation randomized mechanism. We provide similar results for an extended knapsack problem with heterogeneous items, where items are divided into groups and one can pick at most one item from each group. Finally, we show a lower bound of 1 + √2 on the approximation ratio of deterministic mechanisms and 2 for randomized mechanisms for knapsack, as well as for general monotone submodular functions. Our lower bounds are unconditional and do not rely on any computational or complexity assumptions.
For revenue and welfare maximization in single-dimensional Bayesian settings, Chawla et al. (STOC 2010) recently showed that sequential posted-price mechanisms (SPMs), though simple in form, can perform surprisingly well compared to the optimal mechanisms. In this paper, we give a theoretical explanation of this fact, based on a connection to the notion of correlation gap.
Designing revenue optimal auctions for selling an item to $n$ symmetric bidders is a fundamental problem in mechanism design. Myerson (1981) shows that the second price auction with an appropriate reserve price is optimal when bidders' values are drawn i.i.d. from a known regular distribution. A cornerstone in the prior-independent revenue maximization literature is a result by Bulow and Klemperer (1996) showing that the second price auction without a reserve achieves $(n-1)/n$ of the optimal revenue in the worst case. We construct a randomized mechanism that strictly outperforms the second price auction in this setting. Our mechanism inflates the second highest bid with a probability that varies with $n$. For two bidders we improve the performance guarantee from $0.5$ to $0.512$ of the optimal revenue. We also resolve a question in the design of revenue optimal mechanisms that have access to a single sample from an unknown distribution. We show that a randomized mechanism strictly outperforms all deterministic mechanisms in terms of worst case guarantee.
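A numeric sanity check of the two classic results the paper builds on, for Uniform[0,1] values; this is a hedged sketch of the baselines only (the paper's randomized mechanism, which inflates the second-highest bid, is not reproduced here).

```python
import random

random.seed(2)
TRIALS = 200_000

def spa_revenue(n, reserve=0.0):
    """Expected revenue of a second-price auction with n >= 2 bidders:
    the winner pays max(reserve, second-highest bid), and there is
    no sale if every value is below the reserve."""
    total = 0.0
    for _ in range(TRIALS):
        vals = sorted(random.random() for _ in range(n))
        if vals[-1] >= reserve:
            total += max(reserve, vals[-2])
    return total / TRIALS

opt2 = spa_revenue(2, reserve=0.5)  # Myerson-optimal for 2 uniform bidders: 5/12
spa2 = spa_revenue(2)               # no reserve: E[min of 2] = 1/3
spa3 = spa_revenue(3)               # one extra bidder: E[2nd-highest of 3] = 1/2
print(round(spa2 / opt2, 2))        # (n-1)/n = 1/2 is the worst case; uniform gives ~0.8
print(spa3 >= opt2)                 # Bulow-Klemperer: an extra bidder beats the reserve
```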
We consider a fundamental problem in microeconomics: selling a single item among a number of buyers whose values are drawn from known independent and regular distributions. There are four widely-used and widely-studied mechanisms in this literature: Anonymous Posted-Pricing (AP), Second-Price Auction with Anonymous Reserve (AR), Sequential Posted-Pricing (SPM), and Myerson Auction (OPT). Myerson Auction is optimal but complicated, and it also suffers from a few practical issues such as fairness; AP is the simplest mechanism, but its revenue is also the lowest among these four; AR and SPM are of intermediate complexity and revenue. We study the revenue gaps among these four mechanisms, where a gap is defined as the largest ratio between the revenues of two mechanisms. We establish two tight ratios and one tighter bound: 1. SPM/AP: this ratio studies the power of discrimination in pricing schemes. We obtain the tight ratio of roughly 2.62, closing the previously known bounds [e/(e − 1), e]. 2. AR/AP: this ratio studies the relative power of auction vs. pricing schemes when no discrimination is allowed. We get the tight ratio of π²/6 ≈ 1.64, closing the previously known bounds [e/(e − 1), e]. 3. OPT/AR: this ratio studies the power of discrimination in auctions. Previously, the revenue gap was known to lie in the interval [2, e], and the lower bound of 2 was conjectured to be tight [38, 37, 4]. We disprove this conjecture by obtaining a better lower bound of 2.15.
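The SPM/AP gap can be computed exactly on a toy instance. The sketch below uses two independent bidders with Uniform[0,1] and Uniform[0,2] values (an illustrative instance of our choosing, not the paper's worst case, which approaches the tight constant ≈ 2.62) and one fixed visiting order for SPM.

```python
def F1(v): return min(max(v, 0.0), 1.0)      # Uniform[0,1] CDF
def F2(v): return min(max(v / 2, 0.0), 1.0)  # Uniform[0,2] CDF

grid = [i / 100 for i in range(201)]         # candidate prices in [0, 2]

# Anonymous Pricing: one price p for everyone; sell iff max value >= p.
ap = max(p * (1 - F1(p) * F2(p)) for p in grid)

# Sequential Posted-Pricing: visit bidder 2 (the stronger one) at price q,
# then bidder 1 at price p if bidder 2 declines.
spm = max(q * (1 - F2(q)) + F2(q) * p * (1 - F1(p))
          for q in grid for p in grid)

print(round(spm / ap, 3))   # the gap is modest here; the worst case is ~2.62
```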
Myerson's classic result provides a full description of how a seller can maximize revenue when selling a single item. We address the question of revenue maximization in the simplest possible multi-item setting: two items and a single buyer who has independently distributed values for the items, and an additive valuation. In general, the revenue achievable from selling two independent items may be strictly higher than the sum of the revenues obtainable by selling each of them separately. In fact, the structure of optimal (i.e., revenue-maximizing) mechanisms for two items even in this simple setting is not understood.
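The claim that two independent items can yield strictly more than the sum of their separate revenues has a textbook instance (not taken from the paper): an additive buyer with two independent Uniform[0,1] values, where the grand bundle beats selling separately.

```python
grid = [i / 10000 for i in range(10001)]   # candidate prices in [0, 1]

# Selling separately: each item's optimal price is 1/2, revenue 1/4 each.
separate = 2 * max(p * (1 - p) for p in grid)

# Grand bundle at price p in [0, 1]: P(v1 + v2 >= p) = 1 - p^2 / 2,
# so revenue is p * (1 - p^2 / 2), maximized at p = sqrt(2/3).
bundle = max(p * (1 - p * p / 2) for p in grid)

print(round(separate, 3), round(bundle, 3))   # -> 0.5 0.544
```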
In set-system auctions, there are several overlapping teams of agents, and a task that can be completed by any of these teams. The auctioneer's goal is to hire a team and pay as little as possible. Examples of this setting include shortest-path auctions and vertex-cover auctions. Recently, Karlin, Kempe and Tamir introduced a new definition of frugality ratio for this problem. Informally, the frugality ratio is the ratio of the total payment of a mechanism to a desired payment bound. It captures the extent to which the mechanism overpays, relative to the perceived fair cost, in a truthful auction. In this paper, we propose a new truthful polynomial-time auction for the vertex cover problem and bound its frugality ratio. We show that the solution quality is within a constant factor of optimal and the frugality ratio is within a constant factor of the best possible worst-case bound; this is the first auction for this problem to have these properties. Moreover, we show how to transform any truthful auction into a frugal one while preserving the approximation ratio. Also, we consider two natural modifications of the definition of Karlin et al., and we analyse the properties of the resulting payment bounds, such as monotonicity, computational hardness, and robustness with respect to the draw-resolution rule. We study the relationships between the different payment bounds, both for general set systems and for specific set-system auctions, such as path auctions and vertex-cover auctions. We use these new definitions in the proof of our main result for vertex-cover auctions via a bootstrapping technique, which may be of independent interest.
We present randomized algorithms that solve subset sum and knapsack instances with $n$ items in $O^*(2^{0.86n})$ time, where the $O^*(\cdot)$ notation suppresses factors polynomial in the input size, and polynomial … We present randomized algorithms that solve subset sum and knapsack instances with $n$ items in $O^*(2^{0.86n})$ time, where the $O^*(\cdot)$ notation suppresses factors polynomial in the input size, and polynomial space, assuming random read-only access to exponentially many random bits. These results can be extended to solve binary integer programming on $n$ variables with few constraints in a similar running time. We also show that for any constant $k\geq 2$, random instances of $k$-sum can be solved using $O(n^{k-0.5}\mathrm{polylog}(n))$ time and $O(\log n)$ space, without the assumption of random access to random bits. Underlying these results is an algorithm that determines whether two given lists of length $n$ with integers bounded by a polynomial in $n$ share a common value. Assuming random read-only access to random bits, we show that this problem can be solved using $O(\log n)$ space significantly faster than the trivial $O(n^2)$ time algorithm if no value occurs too often in the same list.
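For context on the element-distinctness subroutine above: the standard meet-in-the-middle algorithm solves Subset Sum in $O^*(2^{n/2})$ time but also $O^*(2^{n/2})$ space; the paper's contribution is approaching that time bound with only polynomial space. Below is a sketch of the classic space-hungry baseline, not of the paper's algorithm.

```python
def subset_sum(items, target):
    """Meet-in-the-middle: split the items, enumerate each half's subset
    sums, and look for a complementary pair summing to the target."""
    half = len(items) // 2
    left, right = items[:half], items[half:]

    def all_sums(xs):
        sums = {0}
        for x in xs:
            sums |= {s + x for s in sums}
        return sums

    right_sums = all_sums(right)            # the 2^{n/2}-sized table
    return any(target - s in right_sums for s in all_sums(left))

print(subset_sum([3, 34, 4, 12, 5, 2], 9))    # -> True  (4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))   # -> False
```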
We consider the problem of partitioning n randomly chosen integers between 1 and 2^m into two subsets such that the discrepancy, the absolute value of the difference of their sums, is minimized. A partition is called perfect if the optimum discrepancy is 0 when the sum of all n integers in the original set is even, or 1 when the sum is odd. Parameterizing the random problem in terms of κ = m/n, we prove that the problem has a phase transition at κ = 1, in the sense that for κ < 1, there are many perfect partitions with probability tending to 1 as n → ∞, whereas for κ > 1, there are no perfect partitions with probability tending to 1. Moreover, we show that this transition is first-order in the sense that the derivative of the so-called entropy is discontinuous at κ = 1. We also determine the finite-size scaling window about the transition point: κ_n = 1 − (2n)^{−1} log_2 n + λ_n/n, by showing that the probability of a perfect partition tends to 1, 0, or some explicitly computable p(λ) ∈ (0, 1), depending on whether λ_n tends to −∞, ∞, or λ ∈ (−∞, ∞), respectively. For λ_n → −∞ fast enough, we show that the number of perfect partitions is Gaussian in the limit. For λ_n → ∞, we prove that with high probability the optimum partition is unique, and that the optimum discrepancy is Θ(2 ). Within the window, i.e., if |λ_n| is bounded, we prove that the optimum discrepancy is bounded. Both for λ_n → ∞ and within the window, we find the limiting distribution of the (scaled) discrepancy. Finally, both for the integer partitioning problem and for the continuous partitioning problem, we find the joint distribution of the k smallest discrepancies above the scaling window. © 2001 John Wiley & Sons, Inc. Random Struct. Alg., 19: 247–288, 2001
We study the exact time complexity of the Subset Sum problem. Our focus is on instances that lack additive structure in the sense that the sums one can form from … We study the exact time complexity of the Subset Sum problem. Our focus is on instances that lack additive structure in the sense that the sums one can form from the subsets of the given integers a ...
We consider the problem of selling a single item to one of $n$ bidders who arrive sequentially with values drawn independently from identical distributions, and ask how much more revenue can be obtained by posting discriminatory prices to individual bidders rather than the same anonymous price to all of them. The ratio between the maximum revenue from discriminatory pricing and that from anonymous pricing is at most $2-1/n$ for arbitrary distributions and at most $1/(1-(1-1/n)^n)\leq e/(e-1)\approx 1.582$ for regular distributions, and these bounds can in fact be obtained by using one of the discriminatory prices as an anonymous one. The bounds are shown via a relaxation of the discriminatory pricing problem rather than virtual values and thus apply to distributions without a density, and they are tight for all values of $n$. For a class of distributions that includes the uniform and the exponential distribution we show the maximization of revenue to be equivalent to the maximization of welfare with an additional bidder, in the sense that both use the same discriminatory prices. The problem of welfare maximization is the well-known Cayley-Moser problem, and this connection can be used to establish that the revenue gap between discriminatory and anonymous pricing is approximately $1.037$ for the uniform distribution and approximately $1.073$ for the exponential distribution.
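The discriminatory prices in this setting come from a standard backward induction (a sketch under the Uniform[0,1] assumption, not taken from the paper): with $k$ bidders left and continuation value $R_{k-1}$, post the price maximizing $(1 - F(p))\,p + F(p)\,R_{k-1}$, which for the uniform distribution gives $p_k = (1 + R_{k-1})/2$.

```python
def discriminatory_revenue(n):
    """Optimal sequential posted prices for n i.i.d. Uniform[0,1] bidders."""
    r = 0.0
    prices = []
    for _ in range(n):
        p = (1 + r) / 2              # optimal price given continuation value r
        r = (1 - p) * p + p * r      # sell at p, or move on to the next bidder
        prices.append(p)
    return r, prices[::-1]           # prices in the order bidders are visited

rev, prices = discriminatory_revenue(5)
print(round(rev, 4))                 # discriminatory revenue for 5 bidders
print([round(p, 3) for p in prices]) # prices decrease as bidders run out
```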