
All published works (98)

The coordinate-wise median is a classic and well-studied strategy-proof mechanism in social choice and facility location scenarios. Surprisingly, there is no systematic study of its approximation ratio in $d$-dimensional spaces. The best known approximation guarantee in $d$-dimensional Euclidean space $\mathbb{L}_2(\mathbb{R}^d)$ is $\sqrt{d}$, obtained via the embedding of the $\mathbb{L}_1(\mathbb{R}^d)$ metric into $\mathbb{L}_2(\mathbb{R}^d)$, and it only appeared in the appendix of [Meir 2019]. This upper bound is known to be tight in dimension $d=2$, but no super-constant lower bounds are known. Still, the community's belief about the coordinate-wise median seems to lean toward $\Theta(\sqrt{d})$: e.g., a few recent papers on mechanism design with predictions [Agrawal, Balkanski, Gkatzelis, Ou, Tan 2022], [Christodoulou, Sgouritsa, Vlachos 2024], and [Barak, Gupta, Talgam-Cohen 2024] directly rely on the $\sqrt{d}$-approximation result. In this paper, we systematically study the approximate efficiency of the coordinate-wise median in $\mathbb{L}_{q}(\mathbb{R}^d)$ spaces for any $\mathbb{L}_q$ norm with $q\in[1,\infty]$ and any dimension $d$. We derive a series of constant upper bounds $UB(q)$ independent of the dimension $d$. The series $UB(q)$ grows with the parameter $q$ but never exceeds the constant $UB(\infty)=3$. Our bound $UB(2)=\sqrt{6\sqrt{3}-8}<1.55$ for the $\mathbb{L}_2$ norm is only slightly worse than the tight approximation guarantee of $\sqrt{2}>1.41$ in dimension $d=2$. Furthermore, we show that our upper bounds are essentially tight by giving almost matching lower bounds $LB(q,d)=UB(q)\cdot(1-O(1/d))$ for any dimension $d$, with $LB(q,d)\to UB(q)$ as $d\to\infty$. We also extend our analysis of the generalized median mechanism of [Agrawal, Balkanski, Gkatzelis, Ou, Tan 2022] from $\mathbb{L}_2(\mathbb{R}^2)$ to arbitrary dimensions $d$, with similar results.
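Illustration (not from the paper): a minimal Python sketch of the coordinate-wise median mechanism and a crude Monte-Carlo estimate of its $\mathbb{L}_q$ approximation ratio. The instance size, the norm $q=2$, and the random candidate benchmark are arbitrary choices for the example.

```python
import numpy as np

def coordinate_wise_median(points):
    # The mechanism under study: take the median independently
    # in each coordinate of the agents' reported locations.
    return np.median(points, axis=0)

def social_cost(facility, points, q=2):
    # Total distance of all agents to the facility in the L_q norm.
    return sum(np.linalg.norm(p - facility, ord=q) for p in points)

rng = np.random.default_rng(0)
pts = rng.normal(size=(7, 3))                    # 7 agents in R^3 (arbitrary)
med = coordinate_wise_median(pts)
# Crude benchmark: best of many random candidate facilities; this only
# approximates the optimum, so the printed ratio is a rough estimate.
cands = rng.normal(size=(20000, 3))
opt = min(min(social_cost(c, pts) for c in cands), social_cost(med, pts))
print("approximation ratio (estimate):", social_cost(med, pts) / opt)
```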
We investigate prophet inequalities with competitive ratios approaching $1$, seeking to generalize $k$-uniform matroids. We first show that large girth does not suffice: for all $k$, there exists a matroid of girth $\geq k$ and a prophet inequality instance on that matroid whose optimal competitive ratio is $\frac{1}{2}$. Next, we show that $k$-fold matroid unions do suffice: we provide a prophet inequality with competitive ratio $1-O\left(\sqrt{\frac{\log k}{k}}\right)$ for any $k$-fold matroid union. Our prophet inequality follows from an online contention resolution scheme. The key technical ingredient in our online contention resolution scheme is a novel bicriterion concentration inequality for arbitrary monotone $1$-Lipschitz functions over independent items, which may be of independent interest. Applied to our particular setting, our bicriterion concentration inequality yields "Chernoff-strength" concentration for a $1$-Lipschitz function that is not (approximately) self-bounding.
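For context, the $k$-uniform baseline that the paper generalizes admits a classic single-threshold rule. The sketch below is that textbook baseline, not the paper's scheme; the calibration target $k-\sqrt{2k\ln k}$ is the standard slack choice, and the exponential prior is an arbitrary example.

```python
import numpy as np

def calibrate_threshold(sample_values, k):
    # Choose t so that the expected number of values above t is about
    # k - sqrt(2 k ln k): slack keeps the capacity k from binding too often.
    target = max(k - np.sqrt(2 * k * np.log(max(k, 2))), 1.0)
    lo, hi = sample_values.min(), sample_values.max()
    for _ in range(60):                          # binary search on t
        t = (lo + hi) / 2
        if (sample_values > t).sum(axis=1).mean() > target:
            lo = t
        else:
            hi = t
    return t

def run_once(row, t, k):
    # Greedy online acceptance: take values above t until capacity k.
    return sum([v for v in row if v > t][:k])

rng = np.random.default_rng(0)
n, k = 200, 50
t = calibrate_threshold(rng.exponential(size=(100_000, n)), k)
test = rng.exponential(size=(2000, n))
alg = np.mean([run_once(row, t, k) for row in test])
prophet = np.mean([np.sort(row)[-k:].sum() for row in test])
print("competitive ratio (estimate):", alg / prophet)
```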
The problem of identifying the satisfiability threshold of random $3$-SAT formulas has received a lot of attention during the last decades and has inspired the study of other threshold phenomena in random combinatorial structures. The classical assumption in this line of research is that, for a given set of $n$ Boolean variables, each clause is drawn uniformly at random among all sets of three literals from these variables, independently of the other clauses. Here, we keep the uniform distribution of each clause, but deviate significantly from the independence assumption and consider richer families of probability distributions. For integer parameters $n$, $m$, and $k$, we denote by $\mathcal{D}_k(n,m)$ the family of probability distributions that produce formulas with $m$ clauses, each selected uniformly at random from all sets of three literals from the $n$ variables, so that the clauses are $k$-wise independent. Our aim is to make general statements about the satisfiability or unsatisfiability of formulas produced by distributions in $\mathcal{D}_k(n,m)$ for different values of the parameters $n$, $m$, and $k$.
This paper reexamines the classic problem of revenue maximization in single-item auctions with $n$ buyers through the lens of the robust optimization framework. Myerson's celebrated mechanism maximizes the seller's revenue when the prior distribution is mutually independent across all $n$ buyers. As argued in a recent line of work (Caragiannis et al. 22), (Dughmi et al. 24), mutual independence is a strong assumption that is extremely hard to verify statistically, so it is important to relax it. We find that, while optimal under a mutually independent prior, Myerson's mechanism may lose almost all of its revenue when the independence assumption is relaxed to pairwise independence, i.e., Myerson's mechanism is not pairwise-robust. The mechanism regains robustness when the prior is assumed to be 3-wise independent. In contrast, we show that second-price auctions with anonymous reserve, including optimal auctions under i.i.d. priors, lose at most a constant fraction of their revenue on any regular pairwise independent prior. Our findings draw a comprehensive picture of robustness to $k$-wise independence in single-item auction settings.
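To make the robust format concrete, here is a minimal Python sketch of a second-price auction with anonymous reserve plus a Monte-Carlo reserve search. The sampler interface, the grid of reserves, and the uniform example prior are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def spa_ar_revenue(bids, r):
    # SPA with anonymous reserve r: the item sells to the highest bidder
    # if she clears r, at the larger of r and the runner-up bid.
    b = np.sort(bids)[::-1]
    if b[0] < r:
        return 0.0
    return max(r, b[1]) if len(b) > 1 else r

def best_reserve(sampler, reserves, trials=20_000):
    # Monte-Carlo search for the revenue-maximizing anonymous reserve
    # under a joint prior given as a profile sampler.
    profiles = [sampler() for _ in range(trials)]
    return max(reserves,
               key=lambda r: np.mean([spa_ar_revenue(p, r) for p in profiles]))

rng = np.random.default_rng(0)                   # i.i.d. uniform example prior
print("estimated best reserve:",
      best_reserve(lambda: rng.random(5), np.linspace(0, 1, 21)))
```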
In the Bidder Selection Problem (BSP) there is a large pool of n potential advertisers competing for ad slots on the user's web page. Due to strict computational restrictions, the advertising platform can run a proper auction only for a fraction k<n of the advertisers. We consider the basic optimization problem underlying BSP: given n independent prior distributions, how to efficiently find a subset of k bidders with the objective of maximizing either the expected social welfare or the revenue of the platform. We study BSP in the classic multi-winner model of position auctions for the welfare and revenue objectives, using the optimal format for the selected set of bidders (respectively, the VCG mechanism or Myerson's auction). This is a natural generalization of the fundamental problem of selecting k out of n random variables so that the expected highest value is maximized. Previous PTAS results ([Chen, Hu, Li, Li, Liu, Lu, NIPS 2016], [Mehta, Nadav, Psomas, Rubinstein, NIPS 2020], [Segev and Singla, EC 2021]) for BSP optimization were only known for single-item auctions and, in the case of [Segev and Singla 2021], for l-unit auctions. More importantly, all of these PTASes were computational complexity results with impractically large running times, which defeats the purpose of using these algorithms under severe computational constraints.
We consider ordinal online problems, i.e., tasks that only require pairwise comparisons between elements of the input. Classic examples are the secretary problem and the game of googol, as well as their multiple combinatorial extensions such as $(J,K)$-secretary, the 2-sided game of googol, and the ordinal-competitive matroid secretary problem. A natural approach to these tasks is to use ordinal online algorithms that at each step only consider the relative ranking among the arrived elements, without looking at the numerical values of the input. We formally study the question of how cardinal algorithms (that can use the numerical values of the input) can improve upon ordinal algorithms. We first give a universal construction of the input distribution for any ordinal online problem, such that the advantage of any cardinal algorithm over the ordinal algorithms is at most $1+\varepsilon$ for arbitrarily small $\varepsilon>0$. This implies that lower bounds from [Buchbinder, Jain, Singh, MOR 2014], [Nuti and Vondrák, SODA 2023] hold not only against any ordinal algorithm, but also against any online algorithm. Another immediate corollary is that cardinal algorithms are no better than ordinal algorithms in the matroid secretary problem with the ordinal-competitive objective of [Soto, Turkieltaub, Verdugo, MOR 2021]. However, the value range of the input elements in our construction is huge: $N=O\left(\frac{n^{3}\cdot n!\cdot n!}{\varepsilon}\right)\uparrow\uparrow(n-1)$ (a tower of exponents) for an input sequence of length $n$. As a second result, we identify a class of natural ordinal problems and find a cardinal algorithm with a matching advantage of $1+\Omega\left(\frac{1}{\log^{(c)} N}\right)$, where $\log^{(c)} N=\log\log\ldots\log N$ with $c$ iterated logs and $c$ is an arbitrary constant $c\leq n-2$. This suggests that for relatively small input numerical values $N$, cardinal algorithms may be significantly better than ordinal algorithms on ordinal tasks, whereas the two classes were typically assumed to be almost indistinguishable prior to our work. This observation leads to a natural complexity measure (we dub it cardinal complexity) for any given ordinal online task: the minimum size $N(\varepsilon)$ of different numerical values in the input such that the advantage of cardinal over ordinal algorithms is at most $1+\varepsilon$ for any given $\varepsilon>0$. As a third result, we show that the game of googol has a much lower cardinal complexity of $N=O\left(\left(\frac{n}{\varepsilon}\right)^{n}\right)$.
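To fix the terminology, the classic secretary rule below is a canonical ordinal algorithm: it only ever compares arrived elements and never uses their numeric magnitudes. A minimal Python sketch for illustration; the input of 100 distinct values is arbitrary.

```python
import random

def ordinal_secretary(stream):
    # Observe the first ~n/e elements, then stop at the first element
    # beating everything seen so far -- pairwise comparisons only.
    n = len(stream)
    cutoff = round(n / 2.718281828)
    best_seen = max(stream[:cutoff], default=float("-inf"))
    for v in stream[cutoff:]:
        if v > best_seen:
            return v
    return stream[-1]

vals = list(range(1, 101))
random.shuffle(vals)                     # uniformly random arrival order
print(ordinal_secretary(vals))           # picks 100 with probability ~1/e
```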
We introduce a new measure for the performance of online algorithms in Bayesian settings, where the input is drawn from a known prior, but the realizations are revealed one-by-one in an online fashion. Our new measure is called the order-competitive ratio. It is defined as the worst-case (over all distribution sequences) ratio between the performance of the best order-unaware and order-aware algorithms, and quantifies the loss that is incurred due to lack of knowledge of the arrival order. Despite the growing interest in the role of the arrival order on the performance of online algorithms, this loss has been overlooked thus far. We study the order-competitive ratio in the paradigmatic prophet inequality problem, for the two common objective functions of (i) maximizing the expected value, and (ii) maximizing the probability of obtaining the largest value; and with respect to two families of algorithms, namely (i) adaptive algorithms, and (ii) single-threshold algorithms. We provide tight bounds for all four combinations, with respect to deterministic algorithms. Our analysis requires new ideas and departs from standard techniques. In particular, our adaptive algorithms inevitably go beyond single-threshold algorithms. The results with respect to the order-competitive ratio measure capture the intuition that adaptive algorithms are stronger than single-threshold ones, and may lead to better algorithmic advice than the classical competitive ratio measure.
Motivated by practical concerns in the online advertising industry, we study a bidder subset selection problem in single-item auctions. In this problem, a large pool of candidate bidders have independent values sampled from known prior distributions. The seller needs to pick a subset of bidders and run a given auction format on the selected subset to maximize her expected revenue. We propose two frameworks for the subset restrictions: (i) a capacity constraint on the set of selected bidders; and (ii) incurred costs for the bidders invited to the auction. For the second-price auction with anonymous reserve (SPA-AR), we give constant approximation polynomial time algorithms in both frameworks (in the latter framework under mild assumptions about the market). Our results are in stark contrast to the previous work of Mehta, Nadav, Psomas, Rubinstein [NeurIPS 2020], who showed hardness of approximation for the SPA without a reserve price. We also give complementary approximation results for other well-studied auction formats such as anonymous posted pricing and sequential posted pricing. On a technical level, we find that the revenue of SPA-AR as a set function f(S) of its bidders S is fractionally-subadditive but not submodular. Our bidder selection problem with invitation costs is a natural question about (approximately) answering a demand oracle for f(·) under a given vector of costs, a common computational assumption in the literature on combinatorial auctions.
In the Bidder Selection Problem (BSP) there is a large pool of $n$ potential advertisers competing for ad slots on the user's web page. Due to strict computational restrictions, the advertising platform can run a proper auction only for a fraction $k<n$ of the advertisers. We consider the basic optimization problem underlying BSP: given $n$ independent prior distributions, how to efficiently find a subset of $k$ bidders with the objective of maximizing either the expected social welfare or the revenue of the platform. We study BSP in the classic multi-winner model of position auctions for the welfare and revenue objectives, using the optimal format for the selected set of bidders (respectively, the VCG mechanism or Myerson's auction). Previous PTAS results for BSP optimization were only known for single-item auctions and, in the case of [Segev and Singla 2021], for $l$-unit auctions. More importantly, all of these PTASes were computational complexity results with impractically large running times, which defeats the purpose of using these algorithms under severe computational constraints. We propose a novel Poisson relaxation of BSP for position auctions that immediately implies that 1) BSP is polynomial-time solvable up to a vanishingly small error as the problem size $k$ grows; and 2) there is a PTAS for position auctions obtained by combining our relaxation with the trivial brute-force algorithm. Unlike all previous PTASes, we implemented our algorithm and conducted extensive numerical experiments on practically relevant input sizes. First, our experiments corroborate the previous experimental findings of Mehta et al. that a few simple heuristics used in practice perform surprisingly well in terms of approximation factor. Furthermore, our algorithm outperforms Greedy both in running time and approximation on medium and large-sized instances.
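Illustration of the BSP objective (not the paper's Poisson relaxation): a Monte-Carlo estimate of the expected welfare of a position auction on a selected subset, together with one plausible reading of the "Greedy" baseline mentioned in the experiments. The sampler interface and the greedy variant are assumptions for the sketch.

```python
import numpy as np

def position_welfare(samplers, S, w, trials=5_000):
    # Expected VCG welfare on bidder subset S: slots with (decreasing)
    # click weights w go to the highest realized values among S.
    w = np.sort(w)[::-1]
    total = 0.0
    for _ in range(trials):
        vals = np.sort([samplers[i]() for i in S])[::-1]
        m = min(len(w), len(vals))
        total += float(np.dot(w[:m], vals[:m]))
    return total / trials

def greedy_select(samplers, k, w):
    # One natural "Greedy" baseline: repeatedly add the bidder whose
    # addition most increases the estimated welfare of the selected set.
    S = []
    for _ in range(k):
        rest = (i for i in range(len(samplers)) if i not in S)
        S.append(max(rest, key=lambda i: position_welfare(samplers, S + [i], w)))
    return S
```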
In a single-parameter mechanism design problem, a provider is looking to sell some service to a group of potential buyers. Each buyer $i$ has a private value $v_i$ for receiving this service, and some feasibility constraint restricts which subsets of buyers can be served simultaneously. Recent work in economics introduced (deferred-acceptance) clock auctions as a superior class of auctions for this problem, due to their transparency, simplicity, and very strong incentive guarantees. Subsequent work in computer science focused on evaluating these auctions with respect to their social welfare approximation guarantees, leading to strong impossibility results: in the absence of prior information regarding the buyers' values, no deterministic clock auction can achieve a bounded approximation, even for simple feasibility constraints with only two maximal feasible sets.
We consider the Bayesian online selection problem of a matching in bipartite graphs, that is, the weighted online matching problem where the edges arrive online and edge weights are generated from a known distribution. This setting corresponds to the intersection of two matroids in the work of Kleinberg and Weinberg [40] and Feldman et al. [27]. We study a simple class of nonadaptive policies that we call vertex-additive policies. A vertex-additive policy assigns static prices to every vertex in the graph and accepts only those edges whose weight exceeds the sum of the prices on the edge endpoints. We show that there exists a vertex-additive policy with an expected payoff of at least one-third of the prophet's payoff and present a gradient descent algorithm that quickly converges to the desired vector of vertex prices. Our results improve on the adaptive online policies of Kleinberg and Weinberg and Feldman et al. for the intersection of two matroids in two ways: our policy is nonadaptive and has a better approximation guarantee of 3 instead of the previous guarantees of 5.82 in Kleinberg and Weinberg and 5.43 in Feldman et al. We give a complementary lower bound of 2.25 for any online algorithm in the bipartite matching setting.
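The vertex-additive policy itself is simple enough to state in a few lines. A minimal sketch following the abstract's description (the toy prices and arrival sequence are arbitrary; the price-setting gradient descent is not shown):

```python
def vertex_additive_payoff(prices, edges):
    # Vertex-additive policy: accept an arriving edge (u, v, w) iff both
    # endpoints are unmatched and w exceeds p_u + p_v; prices are static.
    matched, payoff = set(), 0.0
    for u, v, w in edges:                        # edges arrive online
        if u not in matched and v not in matched and w > prices[u] + prices[v]:
            matched.update((u, v))
            payoff += w
    return payoff

# Toy example: three vertices, three arriving edges.
print(vertex_additive_payoff({"a": 1.0, "b": 0.5, "x": 0.7},
                             [("a", "x", 1.5), ("b", "x", 2.0), ("a", "b", 3.0)]))
```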
We introduce a new measure for the performance of online algorithms in Bayesian settings, where the input is drawn from a known prior, but the realizations are revealed one-by-one in an online fashion. Our new measure is called the order-competitive ratio. It is defined as the worst-case (over all distribution sequences) ratio between the performance of the best order-unaware and order-aware algorithms, and quantifies the loss that is incurred due to lack of knowledge of the arrival order. Despite the growing interest in the role of the arrival order on the performance of online algorithms, this loss has been overlooked thus far. We study the order-competitive ratio in the paradigmatic prophet inequality problem, for the two common objective functions of (i) maximizing the expected value, and (ii) maximizing the probability of obtaining the largest value; and with respect to two families of algorithms, namely (i) adaptive algorithms, and (ii) single-threshold algorithms. We provide tight bounds for all four combinations, with respect to deterministic algorithms. Our analysis requires new ideas and departs from standard techniques. In particular, our adaptive algorithms inevitably go beyond single-threshold algorithms. The results with respect to the order-competitive ratio measure capture the intuition that adaptive algorithms are stronger than single-threshold ones, and may lead to better algorithmic advice than the classical competitive ratio measure.
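For readers unfamiliar with the single-threshold family referenced above, the textbook median-of-the-maximum rule is the standard representative; it is order-unaware by construction. A minimal sketch, assuming the prior is given as a list of per-step samplers:

```python
import numpy as np

def median_of_max_threshold(samplers, trials=100_000):
    # Classic single-threshold rule (Samuel-Cahn): set T to the median of
    # the maximum value; accepting the first value above T guarantees half
    # of the prophet's expected value, regardless of the arrival order.
    maxima = [max(s() for s in samplers) for _ in range(trials)]
    return float(np.median(maxima))

def run_single_threshold(samplers, T):
    for s in samplers:                           # values revealed one by one
        v = s()
        if v >= T:
            return v                             # stop at the first value >= T
    return 0.0
```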
We consider ordinal online problems, i.e., tasks that only require pairwise comparisons between elements of the input. Classic examples are the secretary problem and the game of googol, as well as their multiple combinatorial extensions such as $(J,K)$-secretary, the $2$-sided game of googol, and the ordinal-competitive matroid secretary problem. A natural approach to these tasks is to use ordinal algorithms that at each step only consider the relative ranking among the arrived elements, without looking at the numerical values of the input. We formally study the question of how cardinal algorithms can improve upon ordinal algorithms. We first give a universal construction of the input distribution for any ordinal online problem, such that the advantage of any cardinal algorithm over the ordinal algorithms is at most $1+\varepsilon$ for arbitrarily small $\varepsilon>0$. As an implication, previous lower bounds for the aforementioned variants of secretary problems hold not only against ordinal algorithms, but also against any online algorithm. However, the value range of the input elements in our construction is huge: $N=O\left(\frac{n^3\cdot n!\cdot n!}{\varepsilon}\right)\uparrow\uparrow(n-1)$ (a tower of exponents) for an input sequence of length $n$. As a second result, we identify a class of natural ordinal problems and find a cardinal algorithm with a matching advantage of $1+\Omega\left(\frac{1}{\log^{(c)}N}\right)$, where $\log^{(c)}N=\log\ldots\log N$ with $c$ iterated logs and $c$ is an arbitrary constant. Further, we introduce the cardinal complexity of any given ordinal online task: the minimum size $N(\varepsilon)$ of different numerical values in the input such that the advantage of cardinal over ordinal algorithms is at most $1+\varepsilon$. As a third result, we show that the game of googol has a much lower cardinal complexity of $N=O\left(\left(\frac{n}{\varepsilon}\right)^n\right)$.
In a single-parameter mechanism design problem, a provider is looking to sell a service to a group of potential buyers. Each buyer $i$ has a private value $v_i$ for receiving the service and a feasibility constraint restricts which sets of buyers can be served simultaneously. Recent work in economics introduced clock auctions as a superior class of auctions for this problem, due to their transparency, simplicity, and strong incentive guarantees. Subsequent work focused on evaluating the social welfare approximation guarantees of these auctions, leading to strong impossibility results: in the absence of prior information regarding the buyers' values, no deterministic clock auction can achieve a bounded approximation, even for simple feasibility constraints with only two maximal feasible sets. We show that these negative results can be circumvented by using prior information or by leveraging randomization. We provide clock auctions that give an $O(\log\log k)$ approximation for general downward-closed feasibility constraints with $k$ maximal feasible sets for three different information models, ranging from full access to the value distributions to complete absence of information. The more information the seller has, the simpler these auctions are. Under full access, we use a particularly simple deterministic clock auction, called a single-price clock auction, which is only slightly more complex than posted price mechanisms. In this auction, each buyer is offered a single price and a feasible set is selected among those who accept their offers. In the other extreme, where no prior information is available, this approximation guarantee is obtained using a complex randomized clock auction. In addition to our main results, we propose a parameterization that interpolates between single-price clock auctions and general clock auctions, paving the way for an exciting line of future research.
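A minimal sketch of the single-price format described above: each buyer receives one price, and a feasible set is served among the accepters. The selection rule among accepters (here, the maximal feasible set covering the most accepting buyers) and the toy instance are illustrative assumptions, not the paper's price-setting procedure.

```python
def single_price_clock_auction(offers, accepts, feasible_sets):
    # offers[i]: the single price shown to buyer i.
    # accepts(i, p): whether buyer i accepts price p (truthful: v_i >= p).
    A = {i for i in offers if accepts(i, offers[i])}
    best = max(feasible_sets, key=lambda F: len(A & set(F)))
    winners = A & set(best)                      # serve a feasible set of accepters
    return winners, sum(offers[i] for i in winners)

# Two maximal feasible sets; acceptance driven by toy private values.
vals = {1: 5.0, 2: 1.0, 3: 4.0}
print(single_price_clock_auction({1: 3.0, 2: 3.0, 3: 2.0},
                                 lambda i, p: vals[i] >= p,
                                 [{1, 2}, {3}]))
```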
Motivated by practical concerns in the online advertising industry, we study a bidder subset selection problem in single-item auctions. In this problem, a large pool of candidate bidders have independent values sampled from known prior distributions. The seller needs to pick a subset of bidders and run a given auction format on the selected subset to maximize her expected revenue. We propose two frameworks for the subset restrictions: (i) a capacity constraint on the set of selected bidders; and (ii) incurred costs for the bidders invited to the auction. For the second-price auction with anonymous reserve (SPA-AR), we give constant approximation polynomial time algorithms in both frameworks (in the latter framework under mild assumptions about the market). Our results are in stark contrast to the previous work of Mehta, Nadav, Psomas, Rubinstein [NeurIPS 2020], who showed hardness of approximation for the SPA without a reserve price. We also give complementary approximation results for other well-studied auction formats such as anonymous posted pricing and sequential posted pricing. On a technical level, we find that the revenue of SPA-AR as a set function $f(S)$ of its bidders $S$ is fractionally-subadditive but not submodular. Our bidder selection problem with invitation costs is a natural question about (approximately) answering a demand oracle for $f(\cdot)$ under a given vector of costs, a common computational assumption in the literature on combinatorial auctions.
We study a classic Bayesian mechanism design setting of the monopoly problem for an additive buyer in the presence of budgets. In this setting, a monopolist seller with m heterogeneous items faces a single buyer and seeks to maximize her revenue. The buyer has a budget and additive valuations drawn independently for each item from (non-identical) distributions. We show that when the buyer's budget is publicly known, the better of selling each item separately and selling the grand bundle extracts a constant fraction of the optimal revenue. When the budget is private, we consider a standard Bayesian setting where the buyer's budget b is drawn from a known distribution B. We show that if b is independent of the valuations (which is necessary) and the distribution B satisfies the monotone hazard rate condition, then selling items separately or in a grand bundle is still approximately optimal.
In this article, we show a tight approximation guarantee for budget-feasible mechanisms with an additive buyer. We propose a new simple randomized mechanism with an approximation ratio of 2, improving the previous best known result of 3. Our bound is tight with respect to either the optimal offline benchmark or its fractional relaxation. We also present a simple deterministic mechanism with the tight approximation guarantee of 3 against the fractional optimum, improving the best known result of (2+√2) for the weaker integral benchmark.
We provide prophet inequality algorithms for online weighted matching in general (non-bipartite) graphs, under two well-studied arrival models, namely edge arrival and vertex arrival. The weight of each edge is drawn independently from an a-priori known probability distribution. Under edge arrival, the weight of each edge is revealed upon arrival, and the algorithm decides whether to include it in the matching or not. Under vertex arrival, the weights of all edges from the newly arriving vertex to all previously arrived vertices are revealed, and the algorithm decides which of these edges, if any, to include in the matching. To study these settings, we introduce a novel unified framework of batched prophet inequalities that captures online settings where elements arrive in batches; in particular it captures matching under the two aforementioned arrival models. Our algorithms rely on the construction of suitable online contention resolution schemes (OCRS). We first extend the framework of OCRS to batched-OCRS, we then establish a reduction from batched prophet inequalities to batched OCRS, and finally we construct batched OCRSs with selectable ratios of 0.337 and 0.5 for the edge and vertex arrival models, respectively. Both results improve the state of the art for the corresponding settings. For vertex arrival, our result is tight. Interestingly, pricing-based prophet inequalities with comparable competitive ratios are unknown.
We provide prophet inequality algorithms for online weighted matching in general (non-bipartite) graphs, under two well-studied arrival models, namely edge arrival and vertex arrival. The weight of each edge is drawn independently from an a-priori known probability distribution. Under edge arrival, the weight of each edge is revealed upon arrival, and the algorithm decides whether to include it in the matching or not. Under vertex arrival, the weights of all edges from the newly arriving vertex to all previously arrived vertices are revealed, and the algorithm decides which of these edges, if any, to include in the matching. To study these settings, we introduce a novel unified framework of batched prophet inequalities that captures online settings where elements arrive in batches; in particular it captures matching under the two aforementioned arrival models. Our algorithms rely on the construction of suitable online contention resolution schemes (OCRS). We first extend the framework of OCRS to batched-OCRS, we then establish a reduction from batched prophet inequalities to batched OCRS, and finally we construct batched OCRSs with selectable ratios of 0.337 and 0.5 for the edge and vertex arrival models, respectively. Both results improve the state of the art for the corresponding settings. For vertex arrival, our result is tight. Interestingly, pricing-based prophet inequalities with comparable competitive ratios are unknown.
We provide online algorithms for secretary matching in general weighted graphs, under the well-studied models of vertex and edge arrivals. In both models, edges are associated with arbitrary weights that are unknown from the outset and are revealed online. Under vertex arrival, vertices arrive online in a uniformly random order; upon the arrival of a vertex $v$, the weights of edges from $v$ to all previously arriving vertices are revealed, and the algorithm decides which of these edges, if any, to include in the matching. Under edge arrival, edges arrive online in a uniformly random order; upon the arrival of an edge $e$, its weight is revealed, and the algorithm decides whether to include it in the matching or not. We provide a $5/12$-competitive algorithm for vertex arrival and show that it is tight. For edge arrival, we provide a $1/4$-competitive algorithm. Both results improve upon the state-of-the-art bounds for the corresponding settings. Interestingly, for vertex arrival, secretary matching in general graphs outperforms secretary matching in bipartite graphs with 1-sided arrival, where $1/e$ is the best possible guarantee.
In this paper, we obtain tight approximation guarantees for budget-feasible mechanisms with an additive buyer. We propose a new simple randomized mechanism with an approximation ratio of $2$, improving the previous best known result of $3$. Our bound is tight with respect to either the optimal offline benchmark or its fractional relaxation. We also present a simple deterministic mechanism with the tight approximation guarantee of $3$ against the fractional optimum, improving the best known result of $(\sqrt{2}+2)$ against the weaker integral benchmark.
Several fairness concepts have been proposed recently in attempts to approximate envy-freeness in settings with indivisible goods. Among them, the concept of envy-freeness up to any item (EFX) is arguably the closest to envy-freeness. Unfortunately, EFX allocations are not known to exist except in a few special cases. We make significant progress in this direction. We show that for every instance with additive valuations, there is an EFX allocation of a subset of items with a Nash welfare that is at least half of the maximum possible Nash welfare for the original set of items. That is, after donating some items to a charity, one can distribute the remaining items in a fair way with high efficiency. This bound is proved to be best possible. Our proof is constructive and highlights the importance of the maximum Nash welfare allocation. Starting with such an allocation, our algorithm decides which items to donate and redistributes the initial bundles to the agents, eventually obtaining an allocation with the claimed efficiency guarantee. The application of our algorithm to large markets, where the valuations of an agent for every item are relatively small, yields EFX with almost optimal Nash welfare. To the best of our knowledge, this is the first use of large market assumptions in the fair division literature. We also show that our algorithm can be modified to compute, in polynomial time, EFX allocations that approximate the optimal Nash welfare within a factor of at most $2\rho$, using a $\rho$-approximate allocation as input instead of the maximum Nash welfare one.
In this paper, we show a tight approximation guarantee for budget-feasible mechanisms with an additive buyer. We propose a new simple randomized mechanism with an approximation ratio of $2$, improving the previous best known result of $3$. Our bound is tight with respect to either the optimal offline benchmark or its fractional relaxation. We also present a simple deterministic mechanism with the tight approximation guarantee of $3$ against the fractional optimum, improving the best known result of $(2+\sqrt{2})$ for the weaker integral benchmark.
Online bipartite matching with edge arrivals remained a major open question for a long time until a recent negative result by [Gamlath et al. FOCS 2019], who showed that no online policy is better than the straightforward greedy algorithm, i.e., no online algorithm has a worst-case competitive ratio better than $0.5$. In this work, we consider the bipartite matching problem with edge arrivals in a natural stochastic framework, i.e., a Bayesian setting where each edge of the graph is independently realized according to a known probability distribution. We focus on a natural class of prune & greedy online policies motivated by practical considerations from a multitude of online matching platforms. Any prune & greedy algorithm consists of two stages: first, it decreases the probabilities of some edges in the stochastic instance, and then it runs the greedy algorithm on the pruned graph. We propose prune & greedy algorithms that are $0.552$-competitive on instances that can be pruned to a $2$-regular stochastic bipartite graph, and $0.503$-competitive on arbitrary bipartite graphs. The algorithms and our analysis significantly deviate from the prior work. We first obtain an analytically manageable lower bound on the size of the matching, which leads to a nonlinear optimization problem. We further reduce this problem to a continuous optimization with a constant number of parameters that can be solved using standard software tools.
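The two-stage structure is easy to state in code. A minimal sketch, assuming pruning is given as a per-edge keep-probability (so an edge with probability $p$ is effectively lowered to $p\cdot\mathrm{keep}$); the toy instance and the keep-probabilities are arbitrary, and choosing them well is the substance of the paper.

```python
import random

def prune_and_greedy(edge_probs, keep, rng=random.random):
    # Stage 1 (prune): edge e, realized with probability p_e, is also kept
    # only with probability keep[e], lowering it to p_e * keep[e].
    # Stage 2 (greedy): add every surviving edge whose endpoints are free.
    matched, matching = set(), []
    for (u, v), p in edge_probs.items():         # edges arrive online
        if rng() < p and rng() < keep.get((u, v), 1.0):
            if u not in matched and v not in matched:
                matched.update((u, v))
                matching.append((u, v))
    return matching

print(prune_and_greedy({("a", "x"): 0.9, ("a", "y"): 0.9, ("b", "x"): 0.9},
                       {("a", "x"): 0.5}))       # hand-picked keep-probability
```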
We consider the Bayesian online selection problem of a matching in bipartite graphs, i.e., the online weighted matching problem with edge arrivals where the online algorithm knows the distributions of the weights, which corresponds to the intersection of two matroids in the model of [Kleinberg and Weinberg STOC 12]. We consider a simple class of nonadaptive vertex-additive policies that assign static prices to all vertices in the graph and accept each edge only if its weight exceeds the sum of the prices of the edge's endpoints. We show the existence of a vertex-additive policy with an expected payoff of at least one third of the prophet's payoff and present a gradient descent type algorithm that quickly converges to the desired vector of vertex prices. This improves the adaptive online policy of [Kleinberg and Weinberg STOC 12] for the intersection of two matroids in two ways: our policy is nonadaptive and has a better approximation guarantee of $3$ instead of the previous guarantee of $5.82$ against the prophet. We give a complementary lower bound of $2.25$ for any online algorithm in the bipartite matching setting.
We show that the multivariate generating function of appropriately normalized moments of a measure with homogeneous polynomial density supported on a compact polytope $\mathcal{P}\subset\mathbb{R}^d$ is a rational function. Its denominator is the product of linear forms dual to the vertices of $\mathcal{P}$ raised to the power equal to the degree of the density function. Using this, we solve the inverse moment problem for the set of, not necessarily convex, polytopes having a given set $S$ of vertices. Under a weak non-degeneracy assumption we also show that the uniform measure supported on any such polytope is a linear combination of uniform measures supported on simplices with vertices in $S$.
In many shopping scenarios, e.g., in online shopping, customers have a large menu of options to choose from. However, most buyers do not browse all the options and make a decision after considering only a small part of the menu. To study such buyer behavior, we consider the standard Bayesian monopoly problem for a unit-demand buyer, where the monopolist displays the menu dynamically, page after page, to the buyer. The seller aims to maximize the expected revenue over the distribution of the buyer's values, which we assume are i.i.d. The buyer incurs a fixed cost for browsing through one menu page and would stop if that cost exceeds the increase in her utility. We observe that the optimal posted price mechanism in our dynamic setting may have a quite different structure than in the classic static scenario. We find a (relatively) simple and approximately optimal mechanism that uses part of the items as a "bait" to keep the buyer interested for multiple rounds with low prices, while at the same time showing many other expensive items.
We study a classic Bayesian mechanism design setting of the monopoly problem for an additive buyer in the presence of budgets. In this setting, a monopolist seller with $m$ heterogeneous items faces a single buyer and seeks to maximize her revenue. The buyer has a budget and additive valuations drawn independently for each item from (non-identical) distributions. We show that when the buyer's budget is publicly known, the better of selling each item separately and selling the grand bundle extracts a constant fraction of the optimal revenue. When the budget is private, we consider a standard Bayesian setting where the buyer's budget $b$ is drawn from a known distribution $B$. We show that if $b$ is independent of the valuations and the distribution $B$ satisfies the monotone hazard rate condition, then selling items separately or in a grand bundle is still approximately optimal. We give a complementary example showing that no simple mechanism can achieve a constant approximation when the budget $b$ can be interdependent with the valuations.
Classical distribution testing assumes access to i.i.d. samples from the distribution that is being tested. We initiate the study of Markov chain testing, assuming access to a single trajectory of a Markov chain. In particular, we observe a single trajectory $X_0,\ldots,X_t,\ldots$ of an unknown, symmetric, and finite-state Markov chain $M$. We do not control the starting state $X_0$, and we cannot restart the chain. Given our single trajectory, the goal is to test whether $M$ is identical to a model Markov chain $M_0$, or far from it under an appropriate notion of difference. We propose a measure of difference between two Markov chains, motivated by the early work of Kazakos [Kaz78], which captures the scaling behavior of the total variation distance between trajectories sampled from the Markov chains as the length of these trajectories grows. We provide efficient testers and information-theoretic lower bounds for testing identity of symmetric Markov chains under our proposed measure of difference, which are tight up to logarithmic factors if the hitting times of the model chain $M_0$ are $O(n)$ in the size of the state space $n$.
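To make the access model concrete, the sketch below extracts the only statistic a single trajectory offers: empirical transition frequencies. The naive plug-in distance at the end is purely illustrative; it is not the paper's tester or its trajectory-based difference measure.

```python
import numpy as np

def empirical_transitions(trajectory, n):
    # Count transitions along a single trajectory X_0, X_1, ...
    # (no restarts, no control of X_0) and normalize rows.
    C = np.zeros((n, n))
    for a, b in zip(trajectory[:-1], trajectory[1:]):
        C[a, b] += 1
    rows = C.sum(axis=1, keepdims=True)
    return np.divide(C, rows, out=np.zeros_like(C), where=rows > 0)

def plug_in_stat(trajectory, M0):
    # Naive plug-in distance to the model chain M0 (illustration only).
    return np.abs(empirical_transitions(trajectory, M0.shape[0]) - M0).sum()
```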
Individuals working towards a goal often exhibit time-inconsistent behavior, making plans and then failing to follow through. One well-known model of such behavioral anomalies is present-bias discounting: individuals over-weight present costs by a bias factor. This model explains many time-inconsistent behaviors, but can make stark predictions in many settings: individuals either follow the most efficient plan for reaching their goal or procrastinate indefinitely. We propose a modification in which the present-bias parameter can vary over time, drawn independently each step from a fixed distribution. Following Kleinberg and Oren (2014), we use a weighted task graph to model task planning, and measure the cost of procrastination as the relative expected cost of the chosen path versus the optimal path. We use a novel connection to optimal pricing theory to describe the structure of the worst-case task graph for any present-bias distribution. We then leverage this structure to derive conditions on the bias distribution under which the worst-case ratio is exponential (in time) or constant. We also examine conditions on the task graph that lead to improved procrastination ratios: graphs with a uniformly bounded distance to the goal, and graphs in which the distance to the goal monotonically decreases on any path.
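A minimal sketch of the traversal dynamic in the Kleinberg-Oren task-graph model with the time-varying bias described above: at each node the agent draws a fresh bias, inflates only the immediate edge cost by it, plans the rest at face value, takes one step, and replans. The three-node graph and the two-point bias distribution are arbitrary examples.

```python
import random

def present_bias_walk(graph, cost, dist_to_t, source, target, sample_bias):
    # dist_to_t[u] must hold the true shortest-path distance from u to target.
    u, total = source, 0.0
    while u != target:
        b = sample_bias()                        # fresh bias each step
        nxt = min(graph[u], key=lambda v: b * cost[(u, v)] + dist_to_t[v])
        total += cost[(u, nxt)]
        u = nxt
    return total                                 # procrastination ratio: total / dist_to_t[source]

graph = {"s": ["m", "t"], "m": ["t"], "t": []}
cost = {("s", "m"): 1.0, ("m", "t"): 1.0, ("s", "t"): 3.0}
dist = {"s": 2.0, "m": 1.0, "t": 0.0}
print(present_bias_walk(graph, cost, dist, "s", "t",
                        lambda: random.choice([1.0, 4.0])))
```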
We study a combinatorial market design problem, where a collection of indivisible objects is to be priced and sold to potential buyers subject to equilibrium constraints. The classic solution concept for such problems is Walrasian equilibrium (WE), which provides a simple and transparent pricing structure that achieves optimal social welfare. The main weakness of the WE notion is that it exists only in very restrictive cases. To overcome this limitation, we introduce the notion of a combinatorial Walrasian equilibrium (CWE), a natural relaxation of WE. The difference between a CWE and a (non-combinatorial) WE is that the seller can package the items into indivisible bundles prior to sale, and the market does not necessarily clear. We show that every valuation profile admits a CWE that obtains at least half of the optimal (unconstrained) social welfare. Moreover, we devise a polynomial-time algorithm that, given an arbitrary allocation, computes a CWE that achieves at least half of its welfare. Thus, the economic problem of finding a CWE with high social welfare reduces to the algorithmic problem of social-welfare approximation. In addition, we show that every valuation profile admits a CWE that extracts a logarithmic fraction of the optimal welfare as revenue. Finally, to motivate the use of bundles, we establish strong lower bounds when the seller is restricted to using item prices only. The strength of our results derives partly from their generality: our results hold for arbitrary valuations that may exhibit complex combinations of substitutes and complements.
We study the fundamental problem of prediction with expert advice and develop regret lower bounds for a large family of algorithms for this problem. We develop simple adversarial primitives that lend themselves to various combinations leading to sharp lower bounds for many algorithmic families. We use these primitives to show that the classic Multiplicative Weights Algorithm (MWA) has a regret of $\sqrt{\frac{T\ln k}{2}}$, thereby completely closing the gap between upper and lower bounds. We further show a regret lower bound of $\frac{2}{3}\sqrt{\frac{T\ln k}{2}}$ for a much more general family of algorithms than MWA, where the learning rate can be arbitrarily varied over time, or even picked from arbitrary distributions over time. We also use our primitives to construct adversaries in the geometric horizon setting for MWA to precisely characterize the regret at $\frac{0.391}{\sqrt{\delta}}$ for the case of $2$ experts, and a lower bound of $\frac{1}{2}\sqrt{\frac{\ln k}{2\delta}}$ for the case of an arbitrary number of experts $k$.
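For reference, the MWA whose regret is pinned down above is the standard exponentially weighted forecaster. A minimal simulation sketch; the random loss matrix is a stand-in for the paper's adversarial constructions, so the printed regret will typically sit well below the worst-case bound.

```python
import numpy as np

def mwa_regret(losses, eta):
    # Multiplicative Weights: play the weighted average of the experts,
    # then multiply each weight by exp(-eta * loss). With
    # eta = sqrt(8 ln k / T) the regret is at most sqrt(T ln k / 2).
    T, k = losses.shape
    w = np.ones(k)
    alg = 0.0
    for t in range(T):
        p = w / w.sum()
        alg += float(p @ losses[t])
        w *= np.exp(-eta * losses[t])
    return alg - losses.sum(axis=0).min()        # regret vs. best expert

T, k = 10_000, 8
losses = np.random.default_rng(1).random((T, k)) # stub adversary: [0,1] losses
print(mwa_regret(losses, np.sqrt(8 * np.log(k) / T)))
```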
We study the classical problem of prediction with expert advice in the adversarial setting with a geometric stopping time. In 1965, Cover gave the optimal algorithm for the case of 2 experts. In this paper, we design the optimal algorithm, adversary and regret for the case of 3 experts. Further, we show that the optimal algorithm for 2 and 3 experts is a probability matching algorithm (analogous to Thompson sampling) against a particular randomized adversary. Remarkably, our proof shows that the probability matching algorithm is not only optimal against this particular randomized adversary, but also minimax optimal. Our analysis develops upper and lower bounds simultaneously, analogous to the primal-dual method. Our analysis of the optimal adversary goes through delicate asymptotics of the random walk of a particle between multiple walls. We use the connection we develop to random walks to derive an improved algorithm and regret bound for the case of 4 experts, and provide a general framework for designing the optimal algorithm and adversary for an arbitrary number of experts.
We study the power of item-pricing as a tool for approximately optimizing social welfare in a combinatorial market. We consider markets with $m$ indivisible items and $n$ buyers. The goal is to set prices to the items so that, when agents purchase their most demanded sets simultaneously, no conflicts arise and the obtained allocation has nearly optimal welfare. For gross substitutes valuations, it is well known that it is possible to achieve optimal welfare in this manner. We ask: can one achieve approximately efficient outcomes for valuations beyond gross substitutes? We show that even for submodular valuations, and even with only two buyers, one cannot guarantee an approximation better than $\Omega(\sqrt{m})$. The same lower bound holds for the class of single-minded buyers as well. Beyond the negative results on welfare approximation, our results have daunting implications on revenue approximation for these valuation classes: in order to obtain good approximation to the collected revenue, one would necessarily need to abandon the common approach of comparing the revenue to the optimal welfare; a fundamentally new approach would be required.
Incorporating budget constraints into the analysis of auctions has become increasingly important, as they model practical settings more accurately. The social welfare function, which is the standard measure of efficiency in auctions, is inadequate for settings with budgets, since there may be a large disconnect between the value a bidder derives from obtaining an item and what can be liquidated from her. The Liquid Welfare objective function has been suggested as a natural alternative for settings with budgets. Simple auctions, like simultaneous item auctions, are evaluated by their performance at equilibrium using the Price of Anarchy (PoA) measure -- the ratio of the objective function value of the optimal outcome to the worst equilibrium. Accordingly, we evaluate the performance of simultaneous item auctions in budgeted settings by the Liquid Price of Anarchy (LPoA) measure -- the ratio of the optimal Liquid Welfare to the Liquid Welfare obtained in the worst equilibrium. Our main result is that the LPoA for mixed Nash equilibria is bounded by a constant when bidders are additive and items can be divided into sufficiently many discrete parts. Our proofs are robust, and can be extended to achieve similar bounds for simultaneous second price auctions as well as Bayesian Nash equilibria. For pure Nash equilibria, we establish tight bounds on the LPoA for the larger class of fractionally-subadditive valuations. To derive our results, we develop a new technique in which some bidders deviate (surprisingly) toward a non-optimal solution. In particular, this technique does not fit into the smoothness framework.
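For reference, with bidder values $v_i$ and budgets $B_i$, the Liquid Welfare of an allocation $x$ (as introduced by Dobzinski and Paes Leme) caps each bidder's contribution at her budget: $\bar{W}(x) = \sum_i \min\bigl(B_i,\, v_i(x_i)\bigr)$. Unlike social welfare, no bidder is credited with more value than she could actually pay.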
We propose a uniform approach for the design and analysis of prior-free competitive auctions and online auctions. Our philosophy is to view the benchmark function as a variable parameter of the model and study a broad class of functions instead of an individual target benchmark. We consider a multitude of well-studied auction settings, and improve upon a few previous results. Multi-unit auctions: given a β-competitive unlimited supply auction, the best previously known multi-unit auction is 2β-competitive. We design a (1+β)-competitive auction, reducing the ratio from 4.84 to 3.24. These results carry over to matroid and position auctions. General downward-closed environments: we design a 6.5-competitive auction, improving upon the ratio of 7.5. Our auction is noticeably simpler than the previous best one. Unlimited supply online auctions: our analysis yields an auction with a competitive ratio of 4.12, which significantly narrows the margin of [4, 4.84] previously known for this problem.
We study anonymous posted price mechanisms for combinatorial auctions in a Bayesian framework. In a posted price mechanism, item prices are posted, then the consumers approach the seller sequentially in an arbitrary order, each purchasing her favorite bundle from among the unsold items at the posted prices. These mechanisms are simple, transparent and trivially dominant strategy incentive compatible (DSIC). We show that when agent preferences are fractionally subadditive (which includes all submodular functions), there always exist prices that, in expectation, obtain at least half of the optimal welfare. Our result is constructive: given black-box access to a combinatorial auction algorithm A, sample access to the prior distribution, and appropriate query access to the sampled valuations, one can compute, in polytime, prices that guarantee at least half of the expected welfare of A. As a corollary, we obtain the first polytime (in n and m) constant-factor DSIC mechanism for Bayesian submodular combinatorial auctions, given access to demand query oracles. Our results also extend to valuations with complements, where the approximation factor degrades linearly with the level of complementarity.
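The mechanism's dynamics are simple enough to simulate; in the sketch below the brute-force loop over bundles stands in for the demand queries assumed in the paper, and all names are illustrative.

```python
from itertools import combinations

def bundles(items):
    """All subsets of a set of items, as frozensets."""
    s = list(items)
    return (frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r))

def posted_price_mechanism(prices, buyers):
    """Anonymous posted prices: buyers arrive in order and each takes a
    utility-maximizing bundle of the items still unsold.

    prices: dict item -> price; buyers: list of set-valuation functions.
    """
    available = set(prices)
    allocation = []
    for v in buyers:
        best, best_utility = frozenset(), 0.0
        for bundle in bundles(available):  # brute-force demand computation
            utility = v(bundle) - sum(prices[j] for j in bundle)
            if utility > best_utility:
                best, best_utility = bundle, utility
        allocation.append(best)
        available -= best
    return allocation
```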
We study the design of truthful auctions for selling identical items in unlimited supply (e.g., digital goods) to n unit demand buyers. This classic problem stands out from profit-maximizing auction design literature as it requires no probabilistic assumptions on buyers' valuations and employs the framework of competitive analysis. Our objective is to optimize the worst-case performance of an auction, measured by the ratio between a given benchmark and revenue generated by the auction.
We present an algorithm that computes approximate pure Nash equilibria in a broad class of constraint satisfaction games that generalize the well-known cut and party affiliation games. Our results improve previous ones by Bhalgat et al. (EC 10) in terms of the obtained approximation guarantee. More importantly, our algorithm identifies a polynomially-long sequence of improvement moves from any initial state to an approximate equilibrium in these games. The existence of such short sequences is an interesting structural property which, to the best of our knowledge, was not known before. Our techniques adapt and extend our previous work for congestion games (FOCS 11) but the current analysis is considerably simpler.

Commonly Cited References

We study a novel class of mechanism design problems in which the outcomes are constrained by the payments. This basic class of mechanism design problems captures many common economic situations, and yet it has not been studied, to our knowledge, in the past. We focus on the case of procurement auctions in which sellers have private costs, and the auctioneer aims to maximize a utility function on subsets of items, under the constraint that the sum of the payments provided by the mechanism does not exceed a given budget. Standard mechanism design ideas such as the VCG mechanism and its variants are not applicable here. We show that, for general functions, the budget constraint can render mechanisms arbitrarily bad in terms of the utility of the buyer. However, our main result shows that for the important class of submodular functions, a bounded approximation ratio is achievable. Better approximation results are obtained for subclasses of the submodular functions. We explore the space of budget feasible mechanisms in other domains and give a characterization under more restricted conditions.
Budget feasible mechanisms, recently initiated by Singer (FOCS 2010), extend algorithmic mechanism design problems to a realistic setting with a budget constraint. We consider the problem of designing truthful budget feasible mechanisms for monotone submodular functions: We give a randomized mechanism with an approximation ratio of 7.91 (improving on the previous best-known result 233.83), and a deterministic mechanism with an approximation ratio of 8.34. We also study the knapsack problem, which is a special submodular function, and give a (2+√2)-approximation deterministic mechanism (improving on the previous best-known result 5), and a 3-approximation randomized mechanism. We provide similar results for an extended knapsack problem with heterogeneous items, where items are divided into groups and one can pick at most one item from each group. Finally we show a lower bound of 1+√2 for the approximation ratio of deterministic mechanisms and 2 for randomized mechanisms for knapsack, as well as the general monotone submodular functions. Our lower bounds are unconditional, and do not rely on any computational or complexity assumptions.
The principal problem in algorithmic mechanism design is in merging the incentive constraints imposed by selfish behavior with the algorithmic constraints imposed by computational intractability. This field is motivated by the observation that the preeminent approach for designing incentive compatible mechanisms, namely that of Vickrey, Clarke, and Groves, and the central approach for circumventing computational obstacles, that of approximation algorithms, are fundamentally incompatible: natural applications of the VCG approach to an approximation algorithm fail to yield an incentive compatible mechanism. We consider relaxing the desideratum of (ex post) incentive compatibility (IC) to Bayesian incentive compatibility (BIC), where truthtelling is a Bayes-Nash equilibrium (the standard notion of incentive compatibility in economics). For welfare maximization in single-parameter agent settings, we give a general black-box reduction that turns any approximation algorithm into a Bayesian incentive compatible mechanism with essentially the same approximation factor.
In this paper, we present the first approximation algorithms for the problem of designing revenue optimal Bayesian incentive compatible auctions when there are multiple (heterogeneous) items and when bidders have arbitrary demand and budget constraints (and additive valuations). Our mechanisms are surprisingly simple: We show that a sequential all-pay mechanism is a 4 approximation to the revenue of the optimal ex-interim truthful mechanism with a discrete type space for each bidder, where her valuations for different items can be correlated. We also show that a sequential posted price mechanism is an O(1) approximation to the revenue of the optimal ex-post truthful mechanism when the type space of each bidder is a product distribution that satisfies the standard hazard rate condition. We further show a logarithmic approximation when the hazard rate condition is removed, and complete the picture by showing that achieving a sub-logarithmic approximation, even for regular distributions and one bidder, requires pricing bundles of items. Our results are based on formulating novel LP relaxations for these problems, and developing generic rounding schemes from first principles.
We study truthful mechanisms for hiring a team of agents in three classes of set systems: vertex cover auctions, k-flow auctions, and cut auctions. For vertex cover auctions, the vertices are owned by selfish and rational agents, and the auctioneer wants to purchase a vertex cover from them. For k-flow auctions, the edges are owned by the agents, and the auctioneer wants to purchase k edge-disjoint s-t paths, for given s and t. In the same setting, for cut auctions, the auctioneer wants to purchase an s-t cut. Only the agents know their costs, and the auctioneer needs to select a feasible set and payments based on bids made by the agents. We present constant-competitive truthful mechanisms for all three set systems. That is, the maximum overpayment of the mechanism is within a constant factor of the maximum overpayment of any truthful mechanism, for every set system in the class. The mechanism for vertex cover is based on scaling each bid by a multiplier derived from the dominant eigenvector of a certain matrix. The mechanism for k-flows prunes the graph to be minimally (k+1)-connected, and then applies the vertex cover mechanism. Similarly, the mechanism for cuts contracts the graph until all s-t paths have length exactly 2, and then applies the vertex cover mechanism.
We study Bayesian mechanism design problems in settings where agents have budgets. Specifically, an agent's utility for an outcome is given by his value for the outcome minus any payment he makes to the mechanism, as long as the payment is below his budget, and is negative infinity otherwise. This discontinuity in the utility function presents a significant challenge in the design of good mechanisms, and classical mechanisms fail to work in settings with budgets. The goal of this paper is to develop general reductions from budget-constrained Bayesian MD to unconstrained Bayesian MD with small loss in performance. We consider this question in the context of the two most well-studied objectives in mechanism design---social welfare and revenue---and present constant factor approximations in a number of settings. Some of our results extend to settings where budgets are private and agents need to be incentivized to reveal them truthfully.
In the context of auctions for digital goods, an interesting Random Sampling Optimal Price auction (RSOP) has been proposed by Goldberg, Hartline and Wright; this leads to a truthful mechanism. Since random sampling is a popular approach for auctions that aims to maximize the seller's revenue, this method has been analyzed further by Feige, Flaxman, Hartline and Kleinberg, who have shown that it is 15-competitive in the worst case -- which is substantially better than the previously proved bounds but still far from the conjectured competitive ratio of 4. In this paper, we prove that RSOP is indeed 4-competitive for a large class of instances in which the number λ of bidders receiving the item at the optimal uniform price is at least 6. We also show that it is 4.68-competitive for the small class of remaining instances, thus leaving a negligible gap between the lower and upper bound. Furthermore, we develop a robust version of RSOP -- one in which the seller's revenue is, with high probability, not much below its mean -- when the above parameter λ grows large. We employ a mix of probabilistic techniques and dynamic programming to compute these bounds.
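A minimal sketch of RSOP (our own illustrative code): split the bids into two halves uniformly at random, and offer each half the revenue-maximizing uniform price of the other half. Truthfulness follows because each bidder faces a price computed without her own bid.

```python
import random

def best_uniform_price(bids):
    """A posted price among the bids maximizing single-price revenue."""
    if not bids:
        return float("inf")  # empty sample: sell to nobody
    return max(bids, key=lambda p: p * sum(1 for b in bids if b >= p))

def rsop(bids):
    """Random Sampling Optimal Price auction for digital goods."""
    left, right = [], []
    for b in bids:
        (left if random.random() < 0.5 else right).append(b)
    price_for_left = best_uniform_price(right)   # price from the other half
    price_for_right = best_uniform_price(left)
    revenue = sum(price_for_left for b in left if b >= price_for_left)
    revenue += sum(price_for_right for b in right if b >= price_for_right)
    return revenue
```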
Congestion games constitute an important class of games in which computing an exact or even approximate pure Nash equilibrium is in general PLS-complete. We present a surprisingly simple polynomial-time algorithm that computes O(1)-approximate Nash equilibria in these games. In particular, for congestion games with linear latency functions, our algorithm computes (2+ε)-approximate pure Nash equilibria in time polynomial in the number of players, the number of resources and 1/ε. It also applies to games with polynomial latency functions with constant maximum degree d: there, the approximation guarantee is $d^{o(d)}$. The algorithm essentially identifies a polynomially long sequence of best-response moves that lead to an approximate equilibrium; the existence of such short sequences is interesting in itself. These are the first positive algorithmic results for approximate equilibria in non-symmetric congestion games. We strengthen them further by proving that, for congestion games that deviate from our mild assumptions, computing ρ-approximate equilibria is PLS-complete for any polynomial-time computable ρ.
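For intuition, plain best-response dynamics on a congestion game with linear latencies looks as follows (illustrative code; unrestricted best-response sequences can be exponentially long in general, and the point of the paper is a move ordering that reaches an approximate equilibrium in polynomially many steps).

```python
def player_cost(profile, i, coeff):
    """Cost of player i under linear latencies coeff[e] * load(e)."""
    load = {}
    for strategy in profile:
        for e in strategy:
            load[e] = load.get(e, 0) + 1
    return sum(coeff[e] * load[e] for e in profile[i])

def best_response_dynamics(profile, strategy_sets, coeff, max_steps=100_000):
    """Let players deviate to best responses until no one can improve."""
    for _ in range(max_steps):
        improved = False
        for i, options in enumerate(strategy_sets):
            current = player_cost(profile, i, coeff)
            best = min(options, key=lambda s: player_cost(
                profile[:i] + [s] + profile[i + 1:], i, coeff))
            if player_cost(profile[:i] + [best] + profile[i + 1:], i, coeff) < current:
                profile = profile[:i] + [best] + profile[i + 1:]
                improved = True
        if not improved:
            return profile  # a pure Nash equilibrium of this instance
    return profile

# Two players, each choosing resource "x" or "y" with unit latency slope.
strategy_sets = [[frozenset("x"), frozenset("y")]] * 2
print(best_response_dynamics([frozenset("x"), frozenset("x")],
                             strategy_sets, {"x": 1.0, "y": 1.0}))
```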
Algorithmic pricing is the computational problem that sellers (e.g., in supermarkets) face when trying to set prices for their items to maximize their profit in the presence of a known demand. Guruswami et al. (SODA, 2005) proposed this problem and gave logarithmic approximations (in the number of consumers) for the unit-demand and single-parameter cases where there is a specific set of consumers and their valuations for bundles are known precisely. Subsequently several versions of the problem have been shown to have poly-logarithmic inapproximability. This problem has direct ties to the important open question of better understanding the Bayesian optimal mechanism in multi-parameter agent settings; however, for this purpose approximation factors logarithmic in the number of agents are inadequate. It is therefore of vital interest to consider special cases where constant approximations are possible. We consider the unit-demand variant of this pricing problem. Here a consumer has a valuation for each different item and their value for a set of items is simply the maximum value they have for any item in the set. Instead of considering a set of consumers with precisely known preferences, like the prior algorithmic pricing literature, we assume that the preferences of the consumers are drawn from a distribution. This is the standard assumption in economics; furthermore, the setting of a specific set of customers with specific preferences, which is employed in all of the prior work in algorithmic pricing, is a special case of this general Bayesian pricing problem, where there is a discrete Bayesian distribution for preferences specified by picking one consumer uniformly from the given set of consumers. Notice that the distribution over the valuations for the individual items that this generates is obviously correlated. Our work complements these existing works by considering the case where the consumer's valuations for the different items are independent random variables. Our main result is a constant approximation algorithm for this problem that makes use of an interesting connection between this problem and the concept of virtual valuations from the single-parameter Bayesian optimal mechanism design literature.
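For background, the virtual valuation of a value $v$ drawn from a distribution with cdf $F$ and density $f$ is $\varphi(v) = v - \frac{1-F(v)}{f(v)}$; in the single-parameter theory referenced above, expected revenue equals expected virtual welfare, and for a single item and a regular distribution the optimal mechanism is simply a posted price at the monopoly reserve $\varphi^{-1}(0)$.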
It is shown that a convex body K tiles $E^d$ by translation if, and only if, K is a centrally symmetric d-polytope with centrally symmetric facets, such that every belt of K (consisting of those of its facets which contain a translate of a given $(d-2)$-face) has four or six facets. One consequence of the proof of this result is that, if K tiles $E^d$ by translation, then K admits a face-to-face, and hence a lattice tiling.
We consider prior-free auctions for revenue and welfare maximization when agents have a common budget. The abstract environments we consider are ones where there is a downward-closed and symmetric feasibility constraint on the probabilities of service of the agents. These environments include position auctions where slots with decreasing click-through rates are auctioned to advertisers. We generalize and characterize the envy-free benchmark from Hartline and Yan [2011] to settings with budgets and characterize the optimal envy-free outcomes for both welfare and revenue. We give prior-free mechanisms that approximate these benchmarks. A building block in our mechanism is a clinching auction for position auction environments. This auction is a generalization of the multi-unit clinching auction of Dobzinski et al. [2008] and a special case of the polyhedral clinching auction of Goel et al. [2012]. For welfare maximization, we show that this clinching auction is a good approximation to the envy-free optimal welfare for position auction environments. For profit maximization, we generalize the random sampling profit extraction auction from Fiat et al. [2002] for digital goods to give a 10.0-approximation to the envy-free optimal revenue in symmetric, downward-closed environments. Even without budgets this revenue maximization question is of interest and we obtain an improved approximation bound of 7.5 (from 30.4 by Ha and Hartline [2012]).
We study generalizations of the ``Prophet Inequality'' and ``Secretary Problem'', where the algorithm is restricted to an arbitrary downward-closed set system. For 0,1 values, we give O(n)-competitive algorithms for both problems. This is close to the Omega(n/log n) lower bound due to Babaioff, Immorlica, and Kleinberg. For general values, our results translate to O(log(n) log(r))-competitive algorithms, where r is the cardinality of the largest feasible set. This resolves (up to the O(loglog(n) log(r)) factor) an open question posed to us by Bobby Kleinberg.
We study the design of truthful mechanisms for set systems, i.e., scenarios where a customer needs to hire a team of agents to perform a complex task. In this setting, frugality [Archer & Tardos '02] provides a measure to evaluate the "cost of truthfulness", that is, the overpayment of a truthful mechanism relative to the "fair" payment. We propose a uniform scheme for designing frugal truthful mechanisms for general set systems. Our scheme is based on scaling the agents' bids using the eigenvector of a matrix that encodes the interdependencies between the agents. We demonstrate that the r-out-of-k-system mechanism and the √-mechanism for buying a path in a graph [Karlin et al. '05] can be viewed as instantiations of our scheme. We then apply our scheme to two other classes of set systems, namely, vertex cover systems and k-path systems, in which a customer needs to purchase k edge-disjoint source-sink paths. For both settings, we bound the frugality of our mechanism in terms of the largest eigenvalue of the respective interdependency matrix. We show that our mechanism is optimal for a large subclass of vertex cover systems satisfying a simple local sparsity condition. For k-path systems, while our mechanism is within a factor of k+1 from optimal, we show that it is, in fact, optimal, when one uses a modified definition of frugality proposed in [Elkind et al. '07]. Our lower bound argument combines spectral techniques and Young's inequality, and is applicable to all set systems. As both r-out-of-k systems and single path systems can be viewed as special cases of k-path systems, our result improves the lower bounds of [Karlin et al. '05] and answers several open questions proposed in that paper.
Huang et al. (STOC 2018) introduced the fully online matching problem, a generalization of the classic online bipartite matching problem in that it allows all vertices to arrive online and considers general graphs. They showed that the ranking algorithm by Karp et al. (STOC 1990) is strictly better than 0.5-competitive and the problem is strictly harder than the online bipartite matching problem in that no algorithms can be $(1-1/e)$-competitive. This paper pins down two tight competitive ratios of classic algorithms for the fully online matching problem. For the fractional version of the problem, we show that a natural instantiation of the water-filling algorithm is [MATH HERE]-competitive, together with a matching hardness result. Interestingly, our hardness result applies to arbitrary algorithms in the edge-arrival models of the online matching problem, improving the state-of-the-art [MATH HERE] upper bound. For integral algorithms, we show a tight competitive ratio of ≈ 0.567 for the ranking algorithm on bipartite graphs, matching a hardness result by Huang et al. (STOC 2018).
Budget feasible mechanism design studies procurement combinatorial auctions in which the sellers have private costs to produce items, and the buyer (auctioneer) aims to maximize a social valuation function on subsets of items, under the budget constraint on the total payment. One of the most important questions in the field is which valuation domains admit truthful budget feasible mechanisms with "small" approximations (compared to the social optimum). Singer [35] showed that additive and submodular functions have a constant approximation mechanism. Recently, Dobzinski, Papadimitriou, and Singer [20] gave an $O(\log^2 n)$ approximation mechanism for subadditive functions; further, they remarked that: "A fundamental question is whether, regardless of computational constraints, a constant-factor budget feasible mechanism exists for subadditive functions." In this paper, we address this question from two viewpoints: prior-free worst case analysis and Bayesian analysis, which are two standard approaches from computer science and economics, respectively. For the prior-free framework, we use a linear program (LP) that describes the fractional cover of the valuation function; the LP is also connected to the concept of approximate core in cooperative game theory. We provide a mechanism for subadditive functions whose approximation is O(I), via the worst case integrality gap I of this LP. This implies an O(log n)-approximation for subadditive valuations, O(1)-approximation for XOS valuations, as well as for valuations having a constant integrality gap. XOS valuations are an important class of functions and lie between the submodular and the subadditive classes of valuations. We further give another polynomial time O(log n/(log log n)) sub-logarithmic approximation mechanism for subadditive functions. Both of our mechanisms improve the best known approximation ratio $O(\log^2 n)$. For the Bayesian framework, we provide a constant approximation mechanism for all subadditive functions, using the above prior-free mechanism for XOS valuations as a subroutine. Our mechanism allows correlations in the distribution of private information and is universally truthful.
A central object in optimal stopping theory is the single-choice prophet inequality for independent, identically distributed random variables: given a sequence of random variables X1, ..., Xn drawn independently from a distribution F, the goal is to choose a stopping time τ so as to maximize α such that for all distributions F we have E[Xτ]≥α•E[maxt Xt]. What makes this problem challenging is that the decision whether τ=t may only depend on the values of the random variables X1, ..., Xt and on the distribution F. For a long time the best known bound for the problem had been α≥1-1/e≅0.632, but quite recently a tight bound of α≅0.745 was obtained. The case where F is unknown, such that the decision whether τ=t may depend only on the values of the random variables X1, ..., Xt, is equally well motivated but has received much less attention. A straightforward guarantee for this case of α≥1/e≅0.368 can be derived from the solution to the secretary problem, where an arbitrary set of values arrive in random order and the goal is to maximize the probability of selecting the largest value. We show that this bound is in fact tight. We then investigate the case where the stopping time may additionally depend on a limited number of samples from F, and show that even with o(n) samples α≤1/e. On the other hand, n samples allow for a significant improvement, while O(n²) samples are equivalent to knowledge of the distribution: specifically, with n samples α≥1-1/e≅0.632 and α≤ln(2)≅0.693, and with O(n²) samples α≥0.745-ε for any ε>0.
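The distribution-free 1/e guarantee comes from the classic secretary stopping rule; a small sketch with illustrative naming:

```python
import math

def secretary_rule(values):
    """Observe the first n/e values, then accept the first one that beats
    everything seen so far; fall back to the last value if none does."""
    n = len(values)
    cutoff = int(n / math.e)
    benchmark = max(values[:cutoff], default=float("-inf"))
    for x in values[cutoff:]:
        if x > benchmark:
            return x
    return values[-1]
```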
Consider a gambler who observes a sequence of independent, non-negative random numbers and is allowed to stop the sequence at any time, claiming a reward equal to the most recent observation. The famous prophet inequality of Krengel, Sucheston, and Garling asserts that a gambler who knows the distribution of each random variable can achieve at least half as much reward, in expectation, as a "prophet" who knows the sampled values of each random variable and can choose the largest one. We generalize this result to the setting in which the gambler and the prophet are allowed to make more than one selection, subject to a matroid constraint. We show that the gambler can still achieve at least half as much reward as the prophet; this result is the best possible, since it is known that the ratio cannot be improved even in the original prophet inequality, which corresponds to the special case of rank-one matroids. Generalizing the result still further, we show that under an intersection of $p$ matroid constraints, the prophet's reward exceeds the gambler's by a factor of at most $O(p)$, and this factor is also tight.
Let $X_i \geq 0$ be independent, $i = 1, \cdots, n$, and $X^\ast_n = \max(X_1, \cdots, X_n)$. Let $t(c)$ ($s(c)$) be the threshold stopping rule for $X_1, \cdots, X_n$, defined by $t(c) = \text{smallest } i$ for which $X_i \geq c$ ($s(c) = \text{smallest } i$ for which $X_i > c$), $= n$ otherwise. Let $m$ be a median of the distribution of $X^\ast_n$. It is shown that for every $n$ and $\underline{X}$ either $EX^\ast_n \leq 2EX_{t(m)}$ or $EX^\ast_n \leq 2EX_{s(m)}$. This improves previously known results, [1], [4]. Some results for i.i.d. $X_i$ are also included.
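A sketch of the median-threshold rule above (the Monte Carlo estimate of $m$ is our own shortcut for illustration; for continuous distributions $t(m)$ and $s(m)$ coincide almost surely, so the single rule below already achieves the factor-2 guarantee):

```python
import random
import statistics

def median_of_max(samplers, trials=100_000):
    """Estimate a median of max(X_1, ..., X_n) by Monte Carlo."""
    return statistics.median(max(f() for f in samplers) for _ in range(trials))

def threshold_rule(values, m):
    """The rule t(m): stop at the first value >= m, else take the last."""
    for x in values[:-1]:
        if x >= m:
            return x
    return values[-1]

samplers = [lambda rate=r: random.expovariate(rate) for r in (1.0, 2.0, 3.0)]
m = median_of_max(samplers)
rewards = [threshold_rule([f() for f in samplers], m) for _ in range(100_000)]
prophet = sum(max(f() for f in samplers) for _ in range(100_000)) / 100_000
print(sum(rewards) / len(rewards), ">=", prophet / 2)
```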
In this paper we consider a mechanism design problem in the context of large-scale crowdsourcing markets such as Amazon's Mechanical Turk, ClickWorker, and CrowdFlower. In these markets, there is a requester who wants to hire workers to accomplish some tasks. Each worker is assumed to give some utility to the requester on getting hired. Moreover each worker has a minimum cost that he wants to get paid for getting hired. This minimum cost is assumed to be private information of the workers. The question then is -- if the requester has a limited budget, how to design a direct revelation mechanism that picks the right set of workers to hire in order to maximize the requester's utility? We note that although the previous work (Singer (2010), Chen et al. (2011)) has studied this problem, a crucial difference in which we deviate from earlier work is the notion of large-scale markets that we introduce in our model. Without the large market assumption, it is known that no mechanism can achieve a competitive ratio better than 0.414 and 0.5 for deterministic and randomized mechanisms respectively (while the best known deterministic and randomized mechanisms achieve an approximation ratio of 0.292 and 0.33 respectively). In this paper, we design a budget-feasible mechanism for large markets that achieves a competitive ratio of 1 - 1/e ≃ 0.63. Our mechanism can be seen as a generalization of an alternate way to look at the proportional share mechanism, which is used in all the previous works so far on this problem. Interestingly, we can also show that our mechanism is optimal by showing that no truthful mechanism can achieve a factor better than 1 - 1/e, thus fully resolving this setting. Finally we consider the more general case of submodular utility functions and give new and improved mechanisms for the case when the market is large.
The secretary and the prophet inequality problems are central to the field of Stopping Theory. Recently, there has been a lot of work in generalizing these models to multiple items because of their applications in mechanism design. The most important of these generalizations are to matroids and to combinatorial auctions (extends bipartite matching). Kleinberg-Weinberg [33] and Feldman et al. [17] show that for adversarial arrival order of random variables the optimal prophet inequalities give a 1/2-approximation. For many settings, however, it's conceivable that the arrival order is chosen uniformly at random, akin to the secretary problem. For such a random arrival model, we improve upon the 1/2-approximation and obtain (1 − 1/e)-approximation prophet inequalities for both matroids and combinatorial auctions. This also gives improvements to the results of Yan [45] and Esfandiari et al. [15] who worked in the special cases where we can fully control the arrival order or when there is only a single item. Our techniques are threshold-based. We convert our discrete problem into a continuous setting and then give a generic template on how to dynamically adjust these thresholds to lower bound the expected total welfare.
The prophet and secretary problems demonstrate online scenarios involving optimal stopping theory. In a typical prophet or secretary problem, selection decisions are assumed to be immediate and irrevocable. However, many online settings accommodate some degree of revocability. To study such scenarios, we introduce the l-out-of-k setting, where the decision maker can select up to k elements immediately and irrevocably, but her performance is measured by the top l elements in the selected set. Equivalently, the decision maker can hold up to l elements at any given point in time, but can make up to k-l returns as new elements arrive. We give upper and lower bounds on the competitive ratio of l-out-of-k prophet and secretary scenarios. For l-out-of-k prophet scenarios we provide a single-sample algorithm with competitive ratio $1 - l\cdot e^{-\Theta((k-l)^2/k)}$. The algorithm is a single-threshold algorithm, which sets a threshold that equals the (l+k/2)-th highest sample, and accepts all values exceeding this threshold, up to reaching capacity k. On the other hand, we show that this result is tight if the number of possible returns is linear in l (i.e., k-l = Θ(l)). In particular, we show that no single-sample algorithm obtains a competitive ratio better than $1 - \frac{2^{-(2k+1)}}{k+1}$. We also present a deterministic single-threshold algorithm for the 1-out-of-k prophet setting which obtains a competitive ratio of $1 - \frac{3}{2}\cdot e^{-k/6}$, knowing only the distribution of the maximum value. This result improves the result of [Assaf & Samuel-Cahn, J. of App. Prob., 2000].
We consider a monopolist seller with n heterogeneous items, facing a single buyer. The buyer has a value for each item drawn independently according to (non-identical) distributions, and his value for a set of items is additive. The seller aims to maximize his revenue. It is known that an optimal mechanism in this setting may be quite complex, requiring randomization [19] and menus of infinite size [15]. Hart and Nisan [17] have initiated a study of two very simple pricing schemes for this setting: item pricing, in which each item is priced at its monopoly reserve; and bundle pricing, in which the entire set of items is priced and sold as one bundle. Hart and Nisan [17] have shown that neither scheme can guarantee more than a vanishingly small fraction of the optimal revenue. In sharp contrast, we show that for any distributions, the better of item and bundle pricing is a constant-factor approximation to the optimal revenue. We further discuss extensions to multiple buyers and to valuations that are correlated across items.
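Both simple schemes are easy to evaluate by simulation for an additive buyer with independent item values; the sketch below (with assumed names) scores a fixed price vector, and computing the better of the two schemes still requires optimizing over prices.

```python
def srev(samplers, prices, trials=100_000):
    """Monte Carlo revenue of pricing each item separately.

    samplers: one zero-argument value sampler per item; prices: item prices.
    """
    total = 0.0
    for _ in range(trials):
        # the buyer purchases exactly the items she values above their price
        total += sum(p for draw, p in zip(samplers, prices) if draw() >= p)
    return total / trials

def brev(samplers, bundle_price, trials=100_000):
    """Monte Carlo revenue of selling the grand bundle at a single price."""
    sales = sum(1 for _ in range(trials)
                if sum(draw() for draw in samplers) >= bundle_price)
    return bundle_price * sales / trials
```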
We present prior robust algorithms for a large class of resource allocation problems where requests arrive one-by-one (online), drawn independently from an unknown distribution at every step. We design a single algorithm that, for every possible underlying distribution, obtains a 1−ϵ fraction of the profit obtained by an algorithm that knows the entire request sequence ahead of time. The factor ϵ approaches 0 when no single request consumes/contributes a significant fraction of the global consumption/contribution by all requests together. We show that the tradeoff we obtain here, which determines how fast ϵ approaches 0, is near optimal: we give a nearly matching lower bound showing that the tradeoff cannot be improved much beyond what we obtain. Going beyond the model of a static underlying distribution, we introduce the adversarial stochastic input model, where an adversary, possibly in an adaptive manner, controls the distributions from which the requests are drawn at each step. Placing no restriction on the adversary, we design an algorithm that obtains a 1−ϵ fraction of the optimal profit obtainable w.r.t. the worst distribution in the adversarial sequence. Further, if the algorithm is given one number per distribution, namely the optimal profit possible for each of the adversary's distributions, then we design an algorithm that achieves a 1−ϵ fraction of the weighted average of the optimal profit of each distribution the adversary picks. In the offline setting we give a fast algorithm to solve very large linear programs (LPs) with both packing and covering constraints. We give algorithms to approximately solve (within a factor of 1+ϵ) the mixed packing-covering problem with $O(\gamma m \log(n/\delta)/\epsilon^2)$ oracle calls, where the constraint matrix of this LP has dimension n × m, the success probability of the algorithm is 1−δ, and γ quantifies how significant a single request is when compared to the sum total of all requests. We discuss implications of our results to several special cases including online combinatorial auctions, network routing, and the adwords problem.
A central issue in applying auction theory in practice is the problem of dealing with budget-constrained agents. A desirable goal in practice is to design incentive compatible, individually rational, and Pareto optimal auctions while respecting the budget constraints. Achieving this goal is particularly challenging in the presence of nontrivial combinatorial constraints over the set of feasible allocations. Toward this goal and motivated by AdWords auctions, we present an auction for polymatroidal environments satisfying these properties. Our auction employs a novel clinching technique with a clean geometric description and only needs an oracle access to the submodular function defining the polymatroid. As a result, this auction not only simplifies and generalizes all previous results, it applies to several new applications including AdWords Auctions, bandwidth markets, and video on demand. In particular, our characterization of the AdWords auction as polymatroidal constraints might be of independent interest. This allows us to design the first mechanism for Ad Auctions taking into account simultaneously budgets, multiple keywords and multiple slots. We show that it is impossible to extend this result to generic polyhedral constraints. This also implies an impossibility result for multiunit auctions with decreasing marginal utilities in the presence of budget constraints.
We introduce a fully online model of maximum cardinality matching in which all vertices arrive online. On the arrival of a vertex, its incident edges to previously-arrived vertices are revealed. Each vertex has a deadline that is after all its neighbors' arrivals. If a vertex remains unmatched until its deadline, the algorithm must then irrevocably either match it to an unmatched neighbor, or leave it unmatched. The model generalizes the existing one-sided online model and is motivated by applications including ride-sharing platforms, real-estate agency, etc. We show that the Ranking algorithm by Karp et al. (STOC 1990) is 0.5211-competitive in our fully online model for general graphs. Our analysis brings a novel charging mechanic into the randomized primal dual technique by Devanur et al. (SODA 2013), allowing a vertex other than the two endpoints of a matched edge to share the gain. To our knowledge, this is the first analysis of Ranking that beats 0.5 on general graphs in an online matching problem, a first step towards solving the open problem by Karp et al. (STOC 1990) about the optimality of Ranking on general graphs. If the graph is bipartite, we show that the competitive ratio of Ranking is between 0.5541 and 0.5671. Finally, we prove that the fully online model is strictly harder than the previous model, as no online algorithm can be 0.6317-competitive (note that 0.6317 < 1−1/e) in our model even for bipartite graphs.
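For context, the baseline Ranking algorithm of Karp et al. for one-sided online bipartite matching fits in a few lines (an illustrative sketch; the fully online variant analyzed above must additionally handle general graphs and vertex deadlines):

```python
import random

def ranking(offline_vertices, arrivals):
    """Classic Ranking: fix a random rank per offline vertex, and match each
    arriving vertex to its best-ranked unmatched neighbor, if any.

    arrivals: iterable of (online_vertex, list_of_offline_neighbors).
    """
    rank = {v: random.random() for v in offline_vertices}
    matched = set()
    matching = []
    for u, neighbors in arrivals:
        free = [v for v in neighbors if v not in matched]
        if free:
            v = min(free, key=rank.__getitem__)  # best-ranked free neighbor
            matched.add(v)
            matching.append((u, v))
    return matching
```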
We investigate the power of randomness in the context of a fundamental Bayesian optimal mechanism design problem - a single seller aims to maximize expected revenue by allocating multiple kinds of resources to "unit-demand" agents with preferences drawn from a known distribution. When the agents' preferences are single-dimensional, Myerson's seminal work [14] shows that randomness offers no benefit - the optimal mechanism is always deterministic. In the multi-dimensional case, where each agent's preferences are given by different values for each of the available services, Briest et al. [6] recently showed that the gap between the expected revenue obtained by an optimal randomized mechanism and an optimal deterministic mechanism can be unbounded even when a single agent is offered only 4 services. However, this large gap is attained through unnatural instances where values of the agent for different services are correlated in a specific way. We show that when the agent's values involve no correlation or a specific kind of positive correlation, the benefit of randomness is only a small constant factor (4 and 8 respectively). Our model of positively correlated values (that we call the common base value model) is a natural model for unit-demand agents and items that are substitutes. Our results extend to multiple agent settings as well.
Optimal mechanisms have been provided in quite general multi-item settings [Cai et al. 2012b], as long as each bidder's type distribution is given explicitly by listing every type in the support along with its associated probability. In the implicit setting, e.g. when the bidders have additive valuations with independent and/or continuous values for the items, these results do not apply, and it was recently shown that exact revenue optimization is intractable, even when there is only one bidder [Daskalakis et al. 2013]. Even for item distributions with special structure, optimal mechanisms have been surprisingly rare [Manelli and Vincent 2006] and the problem is challenging even in the two-item case [Hart and Nisan 2012]. In this paper, we provide a framework for designing optimal mechanisms using optimal transport theory and duality theory. We instantiate our framework to obtain conditions under which only pricing the grand bundle is optimal in multi-item settings (complementing the work of [Manelli and Vincent 2006]), as well as to characterize optimal two-item mechanisms. We use our results to derive closed-form descriptions of the optimal mechanism in several two-item settings, exhibiting also a setting where a continuum of lotteries is necessary for revenue optimization but a closed-form representation of the mechanism can still be found efficiently using our framework.
We consider a monopolist that is selling n items to a single additive buyer, where the buyer's values for the items are drawn according to independent distributions F1,F2,…,Fn that possibly have unbounded support. It is well known that - unlike in the single item case - the revenue-optimal auction (a pricing scheme) may be complex, sometimes requiring a continuum of menu entries. It is also known that simple auctions with a finite bounded number of menu entries can extract a constant fraction of the optimal revenue. Nonetheless, the question of the possibility of extracting an arbitrarily high fraction of the optimal revenue via a finite menu size remained open.
We provide a computationally efficient black-box reduction from mechanism design to algorithm design in very general settings. Specifically, we give an approximation-preserving reduction from truthfully maximizing any objective under arbitrary feasibility constraints with arbitrary bidder types to (not necessarily truthfully) maximizing the same objective plus virtual welfare (under the same feasibility constraints). Our reduction is based on a fundamentally new approach: we describe a mechanism's behavior indirectly only in terms of the expected value it awards bidders for certain behavior, and never directly access the allocation rule at all. Applying our new approach to revenue, we exhibit settings where our reduction holds both ways. That is, we also provide an approximation-sensitive reduction from (non-truthfully) maximizing virtual welfare to (truthfully) maximizing revenue, and therefore the two problems are computationally equivalent. With this equivalence in hand, we show that both problems are NP-hard to approximate within any polynomial factor, even for a single monotone submodular bidder. We further demonstrate the applicability of our reduction by providing a truthful mechanism maximizing fractional max-min fairness.
We study an abstract optimal auction problem for selecting a subset of self-interested agents to whom to provide a service. A feasibility constraint governs which subsets can be simultaneously served; however, the mechanism may additionally choose to bundle unconstrained attributes such as payments or add-ons with the service. An agent's preference over service and attributes is given by her private type and may be multi-dimensional and non-linear. A single-agent problem is to optimize a menu to offer an agent, subject to constraints on the probabilities with which each of the agent's types is served. We give computationally tractable reductions from multi-agent auction problems to these single-agent problems. Our discussion focuses on maximizing revenue, but our results can be applied to other objectives (e.g., welfare).
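A minimal sketch of one such single-agent problem, under simplifying assumptions the paper does not impose (a finite type space and linear values): choose a service probability and payment per type to maximize expected revenue subject to incentive compatibility, individual rationality, and caps on each type's service probability. All numbers here are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Minimal single-agent sketch (hypothetical numbers, linear values only):
# pick a service probability x_t and payment p_t per type, maximizing
# expected revenue subject to incentive compatibility (IC), individual
# rationality (IR), and caps q_t on each type's service probability --
# the kind of constraint a multi-agent reduction would impose.
v = np.array([1.0, 2.0, 4.0])   # type values for being served
f = np.array([0.5, 0.3, 0.2])   # type probabilities
q = np.array([0.4, 0.7, 1.0])   # per-type service-probability caps

n = len(v)
# variables: x_0..x_{n-1}, then p_0..p_{n-1}
c = np.concatenate([np.zeros(n), -f])         # maximize f . p
A, b = [], []
for t in range(n):
    for s in range(n):
        if s == t:
            continue
        row = np.zeros(2 * n)                 # IC(t,s): v_t x_t - p_t >= v_t x_s - p_s
        row[t], row[s] = -v[t], v[t]
        row[n + t], row[n + s] = 1.0, -1.0
        A.append(row); b.append(0.0)
    row = np.zeros(2 * n)                     # IR(t): v_t x_t - p_t >= 0
    row[t], row[n + t] = -v[t], 1.0
    A.append(row); b.append(0.0)

bounds = [(0.0, q[t]) for t in range(n)] + [(None, None)] * n
res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=bounds)
x, p = res.x[:n], res.x[n:]
print("service probs:", x.round(3), "payments:", p.round(3), "revenue:", -res.fun)
```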
O. H. Keller conjectured in 1930 that in any tiling of $\mathbb{R}^n$ by unit $n$-cubes there exist two of them having a complete facet in common. O. Perron proved this conjecture for $n \leq 6$. We show that for all $n \geq 10$ there exists a tiling of $\mathbb{R}^n$ by unit $n$-cubes such that no two $n$-cubes have a complete facet in common.
We study the revenue maximization problem of a seller with n heterogeneous items for sale to a single buyer whose valuation function for sets of items is unknown and drawn from some distribution D. We show that if D is a distribution over subadditive valuations with independent items, then the better of pricing each item separately or pricing only the grand bundle achieves a constant-factor approximation to the revenue of the optimal mechanism. This includes buyers who are k-demand, additive up to a matroid constraint, or additive up to constraints of any downwards-closed set system (and whose values for the individual items are sampled independently), as well as buyers who are fractionally subadditive with item multipliers drawn independently. Our proof makes use of the core-tail decomposition framework developed in prior work showing similar results for the significantly simpler class of additive buyers [Li and Yao 2013; Babaioff et al. 2014].
In set-system auctions, there are several overlapping teams of agents and a task that can be completed by any of these teams. The auctioneer's goal is to hire a team and pay as little as possible. Examples of this setting include shortest-path auctions and vertex-cover auctions. Recently, Karlin, Kempe and Tamir introduced a new definition of frugality ratio for this problem. Informally, the frugality ratio is the ratio of the total payment of a mechanism to a desired payment bound; it captures the extent to which the mechanism overpays, relative to the perceived fair cost in a truthful auction. In this paper, we propose a new truthful polynomial-time auction for the vertex cover problem and bound its frugality ratio. We show that the solution quality is within a constant factor of optimal and the frugality ratio is within a constant factor of the best possible worst-case bound; this is the first auction for this problem to have these properties. Moreover, we show how to transform any truthful auction into a frugal one while preserving the approximation ratio. Also, we consider two natural modifications of the definition of Karlin et al., and we analyse the properties of the resulting payment bounds, such as monotonicity, computational hardness, and robustness with respect to the draw-resolution rule. We study the relationships between the different payment bounds, both for general set systems and for specific set-system auctions, such as path auctions and vertex-cover auctions. We use these new definitions in the proof of our main result for vertex-cover auctions via a bootstrapping technique, which may be of independent interest.
This paper presents, in a form convenient for applications, a number of bounds for distributions with monotone hazard rate. These bounds, together with their proofs and various generalizations, have been obtained by Barlow and Marshall (1964). However, many of them can be characterized only through solutions of transcendental equations, and machine calculation has been necessary to make them accessible. Section 2 gives a listing of the results, and those without explicit forms are tabulated. Section 3 lists various related bounds which are of interest for purposes of comparison, and for use when the hypothesis of monotone hazard rate is not satisfied. Applications of the bounds are discussed in Section 4, together with some numerical examples.
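As a flavor of the bounds in question, the following sketch numerically checks one well-known bound for increasing-hazard-rate (IHR) distributions, stated here from the Barlow-Marshall line of work rather than copied from the paper's tables: if $F$ is IHR with mean $\mu$, then $1 - F(t) \geq e^{-t/\mu}$ for $t < \mu$.

```python
import numpy as np
from math import exp, gamma

# Numerical sketch of one classic monotone-hazard-rate bound (a standard
# Barlow-Marshall-style result, not the paper's tabulated bounds): if F
# has increasing hazard rate and mean mu, then 1 - F(t) >= exp(-t/mu)
# for all t < mu. Checked here on a Weibull with shape k > 1 (IHR).
k = 2.0                      # Weibull shape > 1  =>  increasing hazard rate
mu = gamma(1.0 + 1.0 / k)    # mean of Weibull(shape=k, scale=1)

for t in np.linspace(0.01, 0.99 * mu, 5):
    survival = exp(-t ** k)  # 1 - F(t) for this Weibull
    lower = exp(-t / mu)     # the IHR lower bound
    print(f"t={t:.3f}  1-F(t)={survival:.4f}  bound={lower:.4f}  ok={survival >= lower}")
```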
We consider revenue-optimal mechanism design for the case with one buyer and two items. The buyer's valuations for the two items are independent and additive. In this setting, the optimal mechanism is unknown for general valuation distributions. We obtain two categories of structural results that shed light on the optimal mechanisms. These results can be summarized into one conclusion: under certain conditions, the optimal mechanisms have simple menus.
We consider the problem of budget feasible mechanism design proposed by Singer, but in a Bayesian setting. A principal has a public value for hiring a subset of the agents and a budget, while the agents have private costs for being hired. We consider both additive and submodular value functions of the principal. We show that there are simple, practical, ex post budget balanced posted pricing mechanisms that approximate the value obtained by the Bayesian optimal mechanism that is budget balanced only in expectation. A main motivating application for this work is crowdsourcing, e.g., on Mechanical Turk, where workers are drawn from a large population and posted pricing is standard. Our analysis methods relate to contention resolution schemes in submodular optimization of Vondrák et al. and the correlation gap analysis of Yan.
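A minimal sketch of the posted-pricing idea in this crowdsourcing setting (the prices, costs, and values below are hypothetical; this is not the paper's mechanism): workers arrive in sequence, each is offered a fixed price, accepts iff her private cost is at most that price, and hiring stops when the budget cannot cover another payment, so the mechanism is ex post budget feasible by construction.

```python
import random

# Minimal sketch of sequential posted pricing (hypothetical numbers, not
# the paper's mechanism): each arriving worker is offered a fixed price
# and accepts iff her private cost is at most the price; hiring stops
# once the remaining budget cannot cover another payment.
random.seed(0)

def posted_price_hire(num_workers, price, budget, value_per_worker=1.0):
    """Hire sequentially at a posted price; ex post budget feasible."""
    spent, value = 0.0, 0.0
    for _ in range(num_workers):
        if spent + price > budget:
            break                       # cannot afford another hire
        cost = random.uniform(0.0, 2.0)  # worker's private cost
        if cost <= price:               # worker accepts the offer
            spent += price
            value += value_per_worker   # additive principal value
    return value, spent

value, spent = posted_price_hire(num_workers=100, price=0.5, budget=10.0)
print(f"value obtained: {value}, budget spent: {spent} (<= 10)")
```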
A combinatorial form of Gram's relation for convex polytopes can be adapted for use in computing polytope volume. We present an algorithm for volume computation based on this observation. This algorithm is useful in finding the volume of a polytope given as the solution set of a system of linear inequalities, $P = \{x \in \mathbb{R}^n : Ax \leq b\}$. As an illustration we compute a formula for the volume of a projective image of the $n$-cube. From this formula we deduce that, when $A$ and $b$ have rational entries (so that the volume of $P$ is also a rational number), the number of binary digits in the denominator of the volume cannot be bounded by a polynomial in the total number of digits in the numerators and denominators of the entries of $A$ and $b$. This settles a question posed by Dyer and Frieze.
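For a quick sanity check of what is being computed (a brute-force Monte Carlo baseline, emphatically not the Gram-relation algorithm of the paper), one can estimate $\mathrm{vol}(P)$ for $P = \{x : Ax \leq b\}$ by rejection sampling; the standard simplex in $\mathbb{R}^3$ has exact volume $1/6$:

```python
import numpy as np

# Brute-force Monte Carlo baseline for vol(P), P = {x : Ax <= b} --
# not the Gram-relation algorithm of the paper. Example: the standard
# simplex in R^3 (x_i >= 0, x_1 + x_2 + x_3 <= 1), exact volume 1/6.
A = np.array([[-1.,  0.,  0.],
              [ 0., -1.,  0.],
              [ 0.,  0., -1.],
              [ 1.,  1.,  1.]])
b = np.array([0., 0., 0., 1.])

rng = np.random.default_rng(42)
pts = rng.uniform(0.0, 1.0, size=(2_000_000, 3))  # bounding box [0,1]^3
inside = np.all(pts @ A.T <= b, axis=1)           # membership test Ax <= b
print("estimated volume:", inside.mean())         # ~0.1667 = 1/6
```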