Internet search companies sell advertisement slots based on users' search queries via an auction. While there has been previous work on the auction process and its game-theoretic aspects, most of it focuses on the Internet company. In this work, we focus on the advertisers, who must solve a complex optimization problem to decide how to place bids on keywords to maximize their return (the number of user clicks on their ads) for a given budget. We model the entire process and study this budget optimization problem. While most variants are NP-hard, we show, perhaps surprisingly, that simply randomizing between two uniform strategies that bid equally on all the keywords works well. More precisely, this strategy gets at least a 1-1/e fraction of the maximum clicks possible. As our preliminary experiments show, such uniform strategies are likely to be practical. We also present inapproximability results and optimal algorithms for variants of the budget optimization problem.
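The two-bid randomization lends itself to a compact illustration. Below is a minimal sketch in a stylized landscape where each keyword has a fixed price per click and a fixed click volume, and the mixing probability is chosen so the expected spend exactly meets the budget; the `(price, clicks)` inputs and the `uniform_bid_outcome` helper are illustrative assumptions, not the paper's general bid-landscape model.

```python
import random

def uniform_bid_outcome(keywords, bid):
    """Clicks and spend when bidding `bid` uniformly on every keyword.

    keywords: list of (price_per_click, clicks) pairs; bidding at or above
    a keyword's price is assumed to win all of its clicks at that price.
    """
    clicks = sum(c for p, c in keywords if p <= bid)
    spend = sum(p * c for p, c in keywords if p <= bid)
    return clicks, spend

def two_bid_uniform_strategy(keywords, budget):
    """Mix two uniform bids so that the expected spend equals the budget."""
    levels = sorted({p for p, _ in keywords})
    lo = 0.0                                   # bidding 0 buys nothing
    for b in levels:
        _, spend = uniform_bid_outcome(keywords, b)
        if spend > budget:
            hi = b
            break
        lo = b
    else:
        return lo                              # the whole market is affordable
    _, s_lo = uniform_bid_outcome(keywords, lo)
    _, s_hi = uniform_bid_outcome(keywords, hi)
    q = (budget - s_lo) / (s_hi - s_lo)        # chance of the aggressive bid
    return hi if random.random() < q else lo
```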
We study the question of setting and testing reserve prices in single item auctions when the bidders are not identical. At a high level, there are two generalizations of the standard second price auction: in the lazy version we first determine the winner, and then apply reserve prices; in the eager version we first discard the bidders not meeting their reserves, and then determine the winner among the rest. We show that the two versions have dramatically different properties: lazy reserves are easy to optimize and A/B test in production, whereas eager reserves always lead to higher welfare, but their optimization is NP-complete, and naive A/B testing will lead to incorrect conclusions. Despite their different characteristics, we show that the overall revenue for the two scenarios is always within a factor of 2 of each other, even in the presence of correlated bids. Moreover, we prove that the eager auction dominates the lazy auction on revenue whenever the bidders are independent or symmetric. We complement our theoretical results with simulations on real world data that show that even suboptimally set eager reserve prices are preferred from a revenue standpoint.
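The lazy/eager distinction is mechanical enough to pin down in a few lines. A minimal sketch, with bids and reserves passed as dicts (the interface is an illustrative assumption):

```python
def lazy_second_price(bids, reserves):
    """Lazy: determine the winner first, then apply only that bidder's reserve."""
    winner = max(bids, key=bids.get)
    if bids[winner] < reserves[winner]:
        return None, 0.0                       # winner misses the reserve
    runner_up = max((v for b, v in bids.items() if b != winner), default=0.0)
    return winner, max(reserves[winner], runner_up)

def eager_second_price(bids, reserves):
    """Eager: discard bidders below their reserves, then run second price."""
    alive = {b: v for b, v in bids.items() if v >= reserves[b]}
    if not alive:
        return None, 0.0
    winner = max(alive, key=alive.get)
    runner_up = max((v for b, v in alive.items() if b != winner), default=0.0)
    return winner, max(reserves[winner], runner_up)

# With bids {a: 10, b: 8} and reserves {a: 3, b: 9}, lazy sells to a at 8,
# while eager drops b and sells to a at 3: same reserves, different outcomes.
```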
How to optimize posted price mechanisms? The sequential posted-price (SPP) mechanism is one of the most widely used selling mechanisms in practice. In this mechanism, the seller presents each buyer with a price sequentially and the buyer can either accept or reject the mechanism's offer. Despite the widespread use of the SPP mechanism, the problem of optimizing prices in this mechanism has not been fully addressed. In a paper entitled “Improved Revenue Bounds for Posted-Price and Second-Price Mechanisms,” H. Beyhaghi, N. Golrezaei, R. Paes Leme, M. Pal, and B. Sivan construct SPP mechanisms by considering the best of two simple pricing rules: one that imitates the optimal mechanism and the other that posts a uniform price (same price for every buyer). Their simple pricing rules can be easily generalized to the setting with multiple units and yield the first improvement over long-established approximation factors.
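For concreteness, here is what one run of a single-item SPP mechanism looks like; the list-based interface is an illustrative assumption.

```python
def sequential_posted_price(values, prices):
    """One run of a single-item SPP mechanism.

    values[i] is buyer i's private value and prices[i] the posted price;
    the first buyer whose value meets the price takes the item.
    """
    for buyer, (v, p) in enumerate(zip(values, prices)):
        if v >= p:
            return buyer, p                    # sale: winner and revenue
    return None, 0.0                           # every buyer rejected
```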
We study the fundamental problem of selling a single indivisible item to one of $n$ buyers with independent and potentially nonidentical value distributions. We focus on two simple and widely used selling mechanisms: the second price auction with \emph{eager} personalized reserve prices and the sequential posted price mechanism. Using a new approach, we improve the best-known performance guarantees for these mechanisms. We show that for every value of the number of buyers $n$, the eager second price (ESP) auction and sequential posted price mechanisms respectively earn at least $0.6620$ and $0.6543$ fractions of the optimal revenue. We also provide improved performance guarantees for these mechanisms when the number of buyers is small, which is the more relevant regime for many applications of interest. This in particular implies an improved bound of $0.6543$ for free-order prophet inequalities. Motivated by our improved revenue bounds, we further study the problem of optimizing reserve prices in the ESP auctions when the sorted order of personalized reserve prices among bidders is exogenous. We show that this problem can be solved in polynomial time. In addition, by analyzing a real auction dataset from Google's advertising exchange, we demonstrate the effectiveness of order-based pricing.
Many advertisers buy advertisements (ads) on the Internet or on traditional media and seek simple, online mechanisms to reserve ad slots in advance. Media publishers represent a vast and varying inventory, and they too seek automatic, online mechanisms for pricing and allocating such reservations. In this paper, we present and study a simple model for auctioning such ad slots in advance. Bidders arrive sequentially and report which slots they are interested in. The seller must decide immediately whether or not to grant a reservation. Our model allows a seller to accept reservations, but possibly cancel the allocations later and pay the bidder a cancellation compensation (bump payment). Our main result is an online mechanism to derive prices and bump payments that is efficient to implement. This mechanism has many desirable properties. It is individually rational; winners have an incentive to be honest, since bidding one's true value dominates any lower bid. Our mechanism's efficiency is within a constant fraction of the a posteriori optimally efficient solution. Its revenue is within a constant fraction of the a posteriori revenue of the Vickrey-Clarke-Groves mechanism. Our results make no assumptions about the order of arrival of bids or the value distribution of bidders and still hold if the items for sale are elements of a matroid, a more general setting than slot allocation.
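To make the bump-payment idea concrete, here is a deliberately simplified single-slot sketch; the factor `alpha`, the bump rule, and the payment formula are all illustrative assumptions, not the prices and payments the paper derives for the matroid setting.

```python
def online_reservations(arrivals, alpha=0.5):
    """Single-slot sketch of reservations with bump payments.

    arrivals: iterable of (bidder, bid). A reservation is granted to the
    first bidder and later cancelled if a new bid beats the held bid by a
    factor 1/alpha, in which case the bumped bidder is compensated.
    """
    holder, held_bid, bump_payments = None, 0.0, 0.0
    for bidder, bid in arrivals:
        if holder is None:
            holder, held_bid = bidder, bid      # grant the reservation
        elif alpha * bid > held_bid:            # new bid is much stronger
            bump_payments += alpha * held_bid   # compensate the old holder
            holder, held_bid = bidder, bid
    return holder, held_bid, bump_payments
```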
We consider the "Offline Ad Slot Scheduling" problem, where advertisers must be scheduled to "sponsored search" slots during a given period of time. Advertisers specify a budget constraint, as well … We consider the "Offline Ad Slot Scheduling" problem, where advertisers must be scheduled to "sponsored search" slots during a given period of time. Advertisers specify a budget constraint, as well as a maximum cost per click, and may not be assigned to more than one slot for a particular search. We give a truthful mechanism under the utility model where bidders try to maximize their clicks, subject to their personal constraints. In addition, we show that the revenue-maximizing mechanism is not truthful, but has a Nash equilibrium whose outcome is identical to our mechanism. As far as we can tell, this is the first treatment of sponsored search that directly incorporates both multiple slots and budget constraints into an analysis of incentives. Our mechanism employs a descending-price auction that maintains a solution to a certain machine scheduling problem whose job lengths depend on the price, and hence is variable over the auction. The price stops when the set of bidders that can afford that price pack exactly into a block of ad slots, at which point the mechanism allocates that block and continues on the remaining slots. To prove our result on the equilibrium of the revenue-maximizing mechanism, we first show that a greedy algorithm suffices to solve the revenue-maximizing linear program; we then use this insight to prove that bidders allocated in the same block of our mechanism have no incentive to deviate from bidding the fixed price of that block.
We introduce a new decomposition technique for random variables that maps a generic instance of the prophet inequalities problem to a new instance where all but a constant number of variables have a tractable structure that we refer to as (ε, δ)-smallness. Using this technique, we make progress on several outstanding problems in the area. We show that, even in the case of non-identical distributions, it is possible to achieve (arbitrarily close to) the optimal approximation ratio of β ≈ 0.745 when the items arrive in a random order (this version is commonly known as prophet secretary), as long as we are allowed to remove a small constant number of distributions. We show that for frequent instances (where each distribution reoccurs some number of times) and random arrival order, it is possible to achieve the optimal approximation ratio of β (improving over the previous best-known bound of 0.738). We give a new, simpler proof of Kertz's optimal approximation guarantee of β ≈ 0.745 for prophet inequalities with i.i.d. distributions. The proof is primal-dual and simultaneously produces upper and lower bounds. Using this decomposition in combination with a novel convex programming formulation, we construct (in parallel work with [1]) the first Efficient PTAS (EPTAS) for the Optimal Ordering problem.
In this paper we initiate the study of optimization of bandit type problems in scenarios where the feedback of a play is not immediately known. This arises naturally in allocation problems, which have been studied extensively in the literature, albeit in the absence of delays in the feedback. We study this problem in the Bayesian setting. In the presence of delays, no solution with provable guarantees and sub-exponential running time was previously known. We show that bandit problems with delayed feedback that arise in allocation settings can be forced to have significant structure, with a slight loss in optimality. This structure gives us the ability to reason about the relationship of single-arm policies to the entangled optimum policy, and eventually leads to an O(1) approximation for a broad class of priors. The structural insights we develop are of independent interest and carry over to the setting where the feedback of an action is available instantaneously, and we improve all previous results in this setting as well.
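The delayed-feedback setting itself is easy to demonstrate. The sketch below runs Thompson sampling on Bernoulli arms but queues each observation for `delay` rounds before the posterior sees it; this is a generic illustration of the setting, not the paper's constant-factor policy.

```python
import random
from collections import deque

def delayed_thompson(means, horizon, delay):
    """Thompson sampling with Bernoulli arms and feedback delayed `delay` rounds.

    means holds the true (unknown) success probabilities; Beta posteriors
    are updated only when delayed feedback arrives.
    """
    post = [[1, 1] for _ in means]              # Beta(1, 1) priors
    pending = deque()                           # (due_round, arm, reward)
    total = 0
    for t in range(horizon):
        while pending and pending[0][0] <= t:   # absorb feedback now due
            _, a, r = pending.popleft()
            post[a][0] += r
            post[a][1] += 1 - r
        arm = max(range(len(means)),
                  key=lambda i: random.betavariate(*post[i]))
        reward = int(random.random() < means[arm])
        total += reward
        pending.append((t + delay, arm, reward))
    return total
```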
We study revenue maximization through sequential posted-price (SPP) mechanisms in single-dimensional settings with $n$ buyers and independent but not necessarily identical value distributions. We construct the SPP mechanisms by considering the best of two simple pricing rules: one that imitates the revenue optimal mechanism, namely the Myersonian mechanism, via the taxation principle and the other that posts a uniform price. Our pricing rules are rather generalizable and yield the first improvement over long-established approximation factors in several settings. We design factor-revealing mathematical programs that crisply capture the approximation factor of our SPP mechanism. In the single-unit setting, our SPP mechanism yields a better approximation factor than the state of the art prior to our work (Azar, Chiplunkar & Kaplan, 2018). In the multi-unit setting, our SPP mechanism yields the first improved approximation factor over the state of the art after over nine years (Yan, 2011 and Chakraborty et al., 2010). Our results on SPP mechanisms immediately imply improved performance guarantees for the equivalent free-order prophet inequality problem. In the position auction setting, our SPP mechanism yields the first higher-than $1-1/e$ approximation factor. In eager second-price (ESP) auctions, our two simple pricing rules lead to the first improved approximation factor that is strictly greater than what is obtained by the SPP mechanism in the single-unit setting.
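The "best of two pricing rules" construction can be sketched as a simple revenue comparison. Below, both rules and the value sampler are illustrative stand-ins: the paper's first rule derives its prices from the Myersonian mechanism via the taxation principle, which we leave abstract here.

```python
import random

def spp_revenue(price_for_buyer, sample_values, trials=10000):
    """Monte-Carlo estimate of a single-item SPP pricing rule's revenue."""
    total = 0.0
    for _ in range(trials):
        for i, v in enumerate(sample_values()):
            p = price_for_buyer(i)
            if v >= p:
                total += p
                break                       # single item: first sale ends it
    return total / trials

def best_of_two(rule_a, rule_b, sample_values):
    """Keep whichever pricing rule earns more in expectation."""
    ra = spp_revenue(rule_a, sample_values)
    rb = spp_revenue(rule_b, sample_values)
    return (rule_a, ra) if ra >= rb else (rule_b, rb)

# Two buyers: a uniform price versus a personalized stand-in rule.
sample = lambda: [random.random(), random.betavariate(2, 1)]
print(best_of_two(lambda i: 0.5, lambda i: (0.5, 0.6)[i], sample)[1])
```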
We examine several online matching problems, with applications to Internet advertising reservation systems. Consider an edge-weighted bipartite graph G, with partite sets L, R. We develop an 8-competitive algorithm for the following secretary problem: Initially given R, and the size of L, the algorithm receives the vertices of L sequentially, in a random order. When a vertex l ∈ L is seen, all edges incident to l are revealed, together with their weights. The algorithm must immediately either match l to an available vertex of R, or decide that l will remain unmatched. Dimitrov and Plaxton show a 16-competitive algorithm for the transversal matroid secretary problem, which is the special case with weights on vertices, not edges. (Equivalently, one may assume that for each l ∈ L, the weights on all edges incident to l are identical.) We use a similar algorithm, but simplify and improve the analysis to obtain a better competitive ratio for the more general problem. Perhaps of more interest is the fact that our analysis is easily extended to obtain competitive algorithms for similar problems, such as finding disjoint sets of edges in hypergraphs whose edges arrive online. We also introduce secretary problems with adversarially chosen groups. Finally, we give a 2e-competitive algorithm for the secretary problem on graphic matroids, where, with edges appearing online, the goal is to find a maximum-weight acyclic subgraph of a given graph.
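A sample-then-accept skeleton conveys the flavor of such algorithms. The sketch below watches a 1/e fraction of arrivals to set a threshold and then matches greedily; the constants and the greedy rule are simplified relative to the paper's 8-competitive algorithm, and the `edges_of` callback is an illustrative assumption.

```python
import math

def secretary_matching(L_size, R, edges_of):
    """Sample-then-accept sketch in the spirit of matroid secretary algorithms."""
    sample_size = int(L_size / math.e)
    available, threshold, matching = set(R), 0.0, []
    for t in range(L_size):
        edges = edges_of(t)                 # dict: right vertex -> edge weight
        best = max(((w, r) for r, w in edges.items() if r in available),
                   default=None)
        if t < sample_size:                 # observation phase: match nothing
            if best:
                threshold = max(threshold, best[0])
        elif best and best[0] >= threshold:
            w, r = best
            available.discard(r)
            matching.append((t, r, w))
    return matching
```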
In sponsored search, a number of advertising slots are available on a search results page and have to be allocated among a set of advertisers competing to display an ad on the page. This gives rise to a bipartite matching market that is typically cleared by way of an automated auction. Several auction mechanisms have been proposed, with variants of the Generalized Second Price (GSP) auction being widely used in practice. A rich body of work on bipartite matching markets builds upon the stable marriage model of Gale and Shapley and the assignment model of Shapley and Shubik. We apply insights from this line of research into the structure of stable outcomes and their incentive properties to advertising auctions. We model advertising auctions in terms of an assignment model with linear utilities, extended with bidder- and item-specific maximum and minimum prices. Auction mechanisms like the commonly used GSP or the well-known Vickrey-Clarke-Groves (VCG) mechanism are interpreted as simply computing a bidder-optimal stable matching in this model, for a suitably defined set of bidder preferences. In our model, the existence of a stable matching is guaranteed, and under a non-degeneracy assumption a bidder-optimal stable matching exists as well. We give an algorithm to find such a matching in polynomial time, and use it to design a truthful mechanism that generalizes GSP, is truthful for profit-maximizing bidders, implements features like bidder-specific minimum prices and position-specific bids, and works for rich mixtures of bidders and preferences.
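As general background on price computation in assignment markets, here is a Bertsekas-style ascending auction sketch; it approximates a bidder-optimal competitive outcome for a plain square assignment instance and is an illustration of the genre, not the paper's algorithm with maximum/minimum prices.

```python
def auction_assignment(value, eps=0.01):
    """Ascending-auction sketch for the assignment model.

    value[i][j] is bidder i's value for item j (square instance assumed).
    Unassigned bidders repeatedly bid on their best item at current prices,
    raising that item's price just above the runner-up payoff.
    """
    n = len(value)
    prices, owner = [0.0] * n, [None] * n
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        payoff = [value[i][j] - prices[j] for j in range(n)]
        j = max(range(n), key=payoff.__getitem__)
        second = max((p for k, p in enumerate(payoff) if k != j),
                     default=payoff[j])
        prices[j] += payoff[j] - second + eps   # just outbid the runner-up
        if owner[j] is not None:
            unassigned.append(owner[j])          # displaced bidder re-enters
        owner[j] = i
    return owner, prices
```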
Sponsored search involves running an auction among advertisers who bid in order to have their ad shown next to search results for specific keywords. Currently, the most popular auction for sponsored search is the "Generalized Second Price" (GSP) auction in which advertisers are assigned to slots in the decreasing order of their "score," which is defined as the product of their bid and click-through rate. In the past few years, there has been significant research on the game-theoretic issues that arise in an advertiser's interaction with the mechanism as well as possible redesigns of the mechanism, but this ranking order has remained standard. From a search engine's perspective, the fundamental question is: what is the best assignment of advertisers to slots? Here "best" could mean "maximizing user satisfaction," "most efficient," "revenue-maximizing," "simplest to interact with," or a combination of these. To answer this question we need to understand the behavior of a search engine user when she sees the displayed ads, since that defines the commodity the advertisers are bidding on, and its value. Most prior work has assumed that the probability of a user clicking on an ad is independent of the other ads shown on the page. We propose a simple Markovian user model that does not make this assumption. We then present an algorithm to determine the most efficient assignment under this model, which turns out to be different from that of GSP. A truthful auction then follows from an application of the Vickrey-Clarke-Groves (VCG) mechanism. Further, we show that our assignment has many of the desirable properties of GSP that make bidding intuitive. At the technical core of our result are a number of insights about the structure of the optimal assignment.
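A cascade-style sketch shows why clicks on one ad depend on the ads above it; the exact transition structure in the paper may differ, and the per-ad click and continuation probabilities here are illustrative assumptions.

```python
def expected_clicks(ordering, click_prob, cont_prob):
    """Expected clicks under a simple Markovian (cascade-style) user model.

    The user scans ads top-down: at ad a she clicks with probability
    click_prob[a] and moves on to the next slot with probability cont_prob[a].
    """
    reach, total = 1.0, 0.0
    for a in ordering:
        total += reach * click_prob[a]          # chance the user reaches a
        reach *= cont_prob[a]                   # and keeps scanning past it
    return total

# Equal click rates, different continuation rates: order changes the total.
cp, q = {"x": 0.2, "y": 0.2}, {"x": 0.9, "y": 0.5}
print(expected_clicks(["x", "y"], cp, q))       # 0.38
print(expected_clicks(["y", "x"], cp, q))       # 0.30
```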
Internet search companies sell advertisement slots based on users' search queries via an auction. While there has been a lot of attention on the auction process and its game-theoretic aspects, our focus is on the advertisers. In particular, the advertisers have to solve a complex optimization problem of how to place bids on the keywords of their interest so that they can maximize their return (the number of user clicks on their ads) for a given budget. We model the entire process and study this budget optimization problem. While most variants are NP-hard, we show, perhaps surprisingly, that simply randomizing between two uniform strategies that bid equally on all the keywords works well. More precisely, this strategy gets at least a 1-1/e fraction of the maximum clicks possible. Such uniform strategies are likely to be practical. We also present inapproximability results, and optimal algorithms for variants of the budget optimization problem.
Internet search companies sell advertisement slots based on users' search queries via an auction. Advertisers have to determine how to place bids on the keywords of their interest in order to maximize their return for a given budget: this is the budget optimization problem. The solution depends on the distribution of future queries. In this paper, we formulate stochastic versions of the budget optimization problem based on natural probabilistic models of distribution over future queries, and address two questions that arise. [Evaluation] Given a solution, can we evaluate the expected value of the objective function? [Optimization] Can we find a solution that maximizes the objective function in expectation? Our main results are approximation and complexity results for these two problems in our three stochastic models. In particular, our algorithmic results show that simple prefix strategies that bid on all cheap keywords up to some level are either optimal or good approximations for many cases; we show other cases to be NP-hard.
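Prefix strategies are simple enough to enumerate directly. A minimal sketch, using deterministic expected click volumes as a stand-in for the paper's stochastic query models:

```python
def best_prefix_strategy(keywords, budget):
    """Evaluate every prefix strategy: bid on all cheap keywords up to a level.

    keywords: list of (cost_per_click, expected_clicks) pairs. The candidate
    strategies are exactly the prefixes of the price-sorted keyword list.
    """
    ranked = sorted(keywords)                   # cheapest keywords first
    best_clicks, best_len = 0.0, 0
    spend = clicks = 0.0
    for k, (cost, volume) in enumerate(ranked, start=1):
        spend += cost * volume
        clicks += volume
        if spend > budget:
            break                               # this prefix is unaffordable
        if clicks > best_clicks:
            best_clicks, best_len = clicks, k
    return best_clicks, best_len
```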
We study revenue maximization through sequential posted price mechanisms (SPMs) in single-dimensional settings with n buyers, and independent but not necessarily identical value distributions. We construct our SPMs by considering two simple pricing schemes: one that imitates Myerson's mechanism via the taxation principle, and the other that posts a uniform price. We design a factor-revealing LP that crisply captures the approximation factor of this SPM. In the k-units setting, our SPM yields the first improvement in approximation over the correlation-gap-based approximation factor from over eight years ago (Yan 2011). In the position auction setting, our SPM yields the first higher-than 1-1/e approximation factor. In the 1-unit setting, our SPM improves on the current best known approximation factor for small values of the number of buyers n. Our results on SPMs immediately imply improved performance guarantees for the equivalent free-order prophet inequality problem. In the recent surge of results that show an improvement over 1-1/e for SPMs, ours is the only technique that generalizes beyond the 1-unit setting, and yields the first improvement over long-established approximation factors in several settings.
Consider a gambler who observes the realizations of $n$ independent, non-negative, distribution-labeled random variables arriving in a uniform random order and can stop the sequence at any time to obtain a reward of the most recent observation. In 2017, Correa et al. showed that when all distributions are identical, it is possible to design a stopping time that achieves a $\beta \approx 0.745$ fraction of the maximum value (the "prophet" benchmark), matching an upper bound of Hill and Kertz. In 2019, Correa et al. showed that when the distributions differ, it is no longer possible to achieve this bound: they prove an upper bound of $\sqrt{3} - 1 < 0.74$ on the best approximation ratio. We show that it is possible to asymptotically achieve the $\beta \approx 0.745$ bound even in the case of non-identical distributions, as long as we are allowed to remove a small constant number of distributions. Formally, we show that for any $\varepsilon$, there exists a constant $C_{\varepsilon} = \mathrm{poly}(1/\varepsilon)$ (independent of $n$) such that after removing $C_{\varepsilon}$ distributions of our choice, we can achieve a $(\beta - \varepsilon)$-approximation to the resulting maximum. We additionally show it is possible to asymptotically achieve an exact $\beta$ approximation ratio for several natural classes of problem instances, including small prophets (where each distribution is concentrated near zero) and frequent prophets (where each distribution occurs at least some number of times).
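The gambler-versus-prophet comparison is easy to simulate. The sketch below uses a single fixed acceptance threshold, a deliberately crude rule that falls well short of the $\beta \approx 0.745$ guarantees discussed above; the samplers and the threshold are illustrative assumptions.

```python
import random, statistics

def competitive_ratio(dists, threshold, trials=20000):
    """Simulate a fixed-threshold gambler against the prophet benchmark.

    dists: zero-argument samplers. Each trial shuffles the arrival order
    (the prophet-secretary setting); the gambler takes the first value
    meeting `threshold`, or the last value if none does.
    """
    gambler, prophet = [], []
    for _ in range(trials):
        xs = [d() for d in random.sample(dists, len(dists))]
        gambler.append(next((x for x in xs if x >= threshold), xs[-1]))
        prophet.append(max(xs))
    return statistics.mean(gambler) / statistics.mean(prophet)

print(competitive_ratio([random.random for _ in range(5)], threshold=0.6))
```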
We introduce a new decomposition technique for random variables that maps a generic instance of the prophet inequalities problem to a new instance where all but a constant number of variables have a tractable structure that we refer to as $(\varepsilon, \delta)$-smallness. Using this technique, we make progress on several outstanding problems in the area:
- We show that, even in the case of non-identical distributions, it is possible to achieve (arbitrarily close to) the optimal approximation ratio of $\beta \approx 0.745$ as long as we are allowed to remove a small constant number of distributions.
- We show that for frequent instances of prophet inequalities (where each distribution reoccurs some number of times), it is possible to achieve the optimal approximation ratio of $\beta$ (improving over the previous best-known bound of $0.738$).
- We give a new, simpler proof of Kertz's optimal approximation guarantee of $\beta \approx 0.745$ for prophet inequalities with i.i.d. distributions. The proof is primal-dual and simultaneously produces upper and lower bounds.
- Using this decomposition in combination with a novel convex programming formulation, we construct the first Efficient PTAS for the Optimal Ordering problem.
We study revenue maximization through sequential posted-price (SPP) mechanisms in single-dimensional settings with n buyers and independent but not necessarily identical value distributions. We construct the SPP mechanisms by considering the best of two simple pricing rules: one that imitates the optimal/Myersonian mechanism via the taxation principle and the other that posts a uniform price. Our pricing rules are rather generalizable and yield the first improvement over long-established approximation factors in several settings. We design factor-revealing mathematical programs that crisply capture the approximation factor of our SPP mechanism. In the single-unit setting, our SPP mechanism yields a better approximation factor than the state of the art prior to our work (Azar et al. 2018). In the multi-unit setting, our SPP mechanism yields the first improved approximation factor over the state of the art after over nine years (Yan (2011) and Chakraborty et al. (2010)). Our results on SPP mechanisms immediately imply improved performance guarantees for the equivalent free-order prophet inequality problem. In the position auction setting, our SPP mechanism yields the first higher-than 1 − 1/e approximation factor. In eager second-price (ESP) auctions, our two simple pricing rules lead to the first improved approximation factor that is strictly greater than what is obtained by the SPP mechanism in the single-unit setting.
DNMT3A mutations are prevalent in haematologic malignancies. In our mouse model the murine homologue (R878H) of the human 'hotspot' R882H mutation is introduced into the mouse Dnmt3a locus. This results in globally reduced DNA methylation in all tissues. Mice with heterozygous R878H DNMT3A mutations develop γ-radiation induced thymic lymphoma more rapidly than control mice, suggesting a vulnerability to stress stimuli in Dnmt3aR878H/+ cells. In competitive transplantations, Dnmt3aR878H/+ Lin-Sca-1+Kit+ (LSK) haematopoietic stem/progenitor cells (HSPCs) have a competitive advantage over WT HSPCs, indicating a self-renewal phenotype at the expense of differentiation. RNA sequencing of Dnmt3aR878H/+ LSKs exposed to low dose γ-radiation shows downregulation of the p53 pathway compared to γ-irradiated WT LSKs. Accordingly, reduced PUMA expression is observed by flow cytometry in the bone marrow of γ-irradiated Dnmt3aR878H/+ mice due to impaired p53 signalling. These findings provide new insights into how DNMT3A mutations cause subtle changes in the transcriptome of LSK cells which contribute to their increased self-renewal and propensity for malignant transformation.
DNMT3A mutations are prevalent in haematologic malignancies. In our mouse model the murine homologue (R878H) of the human 'hotspot' R882H mutation is introduced into the mouse Dnmt3a locus. This results … DNMT3A mutations are prevalent in haematologic malignancies. In our mouse model the murine homologue (R878H) of the human 'hotspot' R882H mutation is introduced into the mouse Dnmt3a locus. This results in globally reduced DNA methylation in all tissues. Mice with heterozygous R878H DNMT3A mutations develop γ-radiation induced thymic lymphoma more rapidly than control mice, suggesting a vulnerability to stress stimuli in Dnmt3aR878H/+ cells. In competitive transplantations, Dnmt3aR878H/+ Lin-Sca-1+Kit+ (LSK) haematopoietic stem/progenitor cells (HSPCs) have a competitive advantage over WT HSPCs, indicating a self-renewal phenotype at the expense of differentiation. RNA sequencing of Dnmt3aR878H/+ LSKs exposed to low dose γ-radiation shows downregulation of the p53 pathway compared to γ-irradiated WT LSKs. Accordingly, reduced PUMA expression is observed by flow cytometry in the bone marrow of γ-irradiated Dnmt3aR878H/+ mice due to impaired p53 signalling. These findings provide new insights into how DNMT3A mutations cause subtle changes in the transcriptome of LSK cells which contribute to their increased self-renewal and propensity for malignant transformation.
How to optimize posted price mechanisms? The sequential posted-price (SPP) mechanism is one of the widely used selling mechanisms in practice. In this mechanism, the seller presents each buyer with … How to optimize posted price mechanisms? The sequential posted-price (SPP) mechanism is one of the widely used selling mechanisms in practice. In this mechanism, the seller presents each buyer with a price sequentially and the buyer can either accept or reject the mechanism's offer. Despite the widespread use of the SPP mechanism, the problem of optimizing prices in this mechanism has not been fully addressed. In a paper entitled, “Improved Revenue Bounds for Posted-Price and Second-Price Mechanisms,” H. Beyhaghi, N. Golrezaei, R. Paes Leme, M. Pal, and B. Sivan construct SPP mechanisms by considering the best of two simple pricing rules: one that imitates the optimal mechanism and the other that posts a uniform price (same price for every buyer). Their simple pricing rules can be easily generalized to the setting with multiple units and yield the first improvement over long-established approximation factors.
We introduce a new decomposition technique for random variables that maps a generic instance of the prophet inequalities problem to a new instance where all but a constant number of … We introduce a new decomposition technique for random variables that maps a generic instance of the prophet inequalities problem to a new instance where all but a constant number of variables have a tractable structure that we refer to as (ε, δ)-smallness. Using this technique, we make progress on several outstanding problems in the area: We show that, even in the case of non-identical distributions, it is possible to achieve (arbitrarily close to) the optimal approximation ratio of β ~0.745 when the items arrive in a random order (this version is commonly known as prophet secretary) as long as we are allowed to remove a small constant number of distributions. We show that forfrequent instances (where each distribution reoccurs some number of times) and random arrival order, it is possible to achieve the optimal approximation ratio of β (improving over the previous best-known bound of 0.738). We give a new, simpler proof of Kertz's optimal approximation guarantee of β ~0.745 for prophet inequalities with i.i.d. distributions. The proof is primal-dual and simultaneously produces upper and lower bounds. Using this decomposition in combination with a novel convex programming formulation, we construct the first (in parallel work with[1]) an Efficient PTAS (EPTAS) for the Optimal Ordering problem.
Consider a gambler who observes the realizations of $n$ independent, non-negative, distribution-labeled random variables arriving in a uniform random order and can stop the sequence at any time to obtain … Consider a gambler who observes the realizations of $n$ independent, non-negative, distribution-labeled random variables arriving in a uniform random order and can stop the sequence at any time to obtain a reward of the most recent observation. In 2017, Correa et al. showed that when all distributions are identical, it is possible to design a stopping time that achieves a $\beta \approx 0.745$ fraction of the maximum value (the ``prophet'' benchmark), matching an upper bound of Hill and Kertz. In 2019, Correa et al. showed that when the distributions differ, it is no longer possible to achieve this bound: they prove an upper bound of $\sqrt{3} - 1 < 0.74$ on the best approximation ratio. We show that it is possible to asymptotically achieve the $\beta \approx 0.745$ bound even in the case of non-identical distributions, as long as we are allowed to remove a small constant number of distributions. Formally, we show that for any $\varepsilon$, there exists a constant $C_{\varepsilon} = \mathrm{poly}(1/\varepsilon)$ (independent of $n$) such that after removing $C_{\varepsilon}$ distributions of our choice, we can achieve a $(\beta - \varepsilon)$-approximation to the resulting maximum. We additionally show it is possible to asymptotically achieve an exact $\beta$ approximation ratio for several natural classes of problem instances, including small prophets (where each distribution is concentrated near zero) and frequent prophets (where each distribution occurs at least some number of times).
We introduce a new decomposition technique for random variables that maps a generic instance of the prophet inequalities problem to a new instance where all but a constant number of … We introduce a new decomposition technique for random variables that maps a generic instance of the prophet inequalities problem to a new instance where all but a constant number of variables have a tractable structure that we refer to as $(\varepsilon, \delta)$-smallness. Using this technique, we make progress on several outstanding problems in the area: - We show that, even in the case of non-identical distributions, it is possible to achieve (arbitrarily close to) the optimal approximation ratio of $\beta \approx 0.745$ as long as we are allowed to remove a small constant number of distributions. - We show that for frequent instances of prophet inequalities (where each distribution reoccurs some number of times), it is possible to achieve the optimal approximation ratio of $\beta$ (improving over the previous best-known bound of $0.738$). - We give a new, simpler proof of Kertz's optimal approximation guarantee of $\beta \approx 0.745$ for prophet inequalities with i.i.d. distributions. The proof is primal-dual and simultaneously produces upper and lower bounds. - Using this decomposition in combination with a novel convex programming formulation, we construct the first Efficient PTAS for the Optimal Ordering problem.
We study revenue maximization through sequential posted-price (SPP) mechanisms in single-dimensional settings with n buyers and independent but not necessarily identical value distributions. We construct the SPP mechanisms by considering … We study revenue maximization through sequential posted-price (SPP) mechanisms in single-dimensional settings with n buyers and independent but not necessarily identical value distributions. We construct the SPP mechanisms by considering the best of two simple pricing rules: one that imitates the optimal/Myersonian mechanism via the taxation principle and the other that posts a uniform price. Our pricing rules are rather generalizable and yield the first improvement over long-established approximation factors in several settings. We design factor-revealing mathematical programs that crisply capture the approximation factor of our SPP mechanism. In the single-unit setting, our SPP mechanism yields a better approximation factor than the state of the art prior to our work (Azar et al. 2018). In the multi-unit setting, our SPP mechanism yields the first improved approximation factor over the state of the art after over nine years (Yan (2011) and Chakraborty et al. (2010)). Our results on SPP mechanisms immediately imply improved performance guarantees for the equivalent free-order prophet inequality problem. In the position auction setting, our SPP mechanism yields the first higher-than 1 − 1/e approximation factor. In eager second-price (ESP) auctions, our two simple pricing rules lead to the first improved approximation factor that is strictly greater than what is obtained by the SPP mechanism in the single-unit setting
We study the fundamental problem of selling a single indivisible item to one of $n$ buyers with independent and potentially nonidentical value distributions. We focus on two simple and widely … We study the fundamental problem of selling a single indivisible item to one of $n$ buyers with independent and potentially nonidentical value distributions. We focus on two simple and widely used selling mechanisms: the second price auction with \emph{eager} personalized reserve prices and the sequential posted price mechanism. Using a new approach, we improve the best-known performance guarantees for these mechanisms. We show that for every value of the number of buyers $n$, the eager second price (ESP) auction and sequential posted price mechanisms respectively earn at least $0.6620$ and $0.6543$ fractions of the optimal revenue. We also provide improved performance guarantees for these mechanisms when the number of buyers is small, which is the more relevant regime for many applications of interest. This in particular implies an improved bound of $0.6543$ for free-order prophet inequalities. Motivated by our improved revenue bounds, we further study the problem of optimizing reserve prices in the ESP auctions when the sorted order of personalized reserve prices among bidders is exogenous. We show that this problem can be solved polynomially. In addition, by analyzing a real auction dataset from Google's advertising exchange, we demonstrate the effectiveness of order-based pricing.
We study revenue maximization through sequential posted price mechanisms (SPMs) in single-dimensional settings with n buyers, and independent but not necessarily identical value distributions. We construct our SPMs by considering … We study revenue maximization through sequential posted price mechanisms (SPMs) in single-dimensional settings with n buyers, and independent but not necessarily identical value distributions. We construct our SPMs by considering two simple pricing schemes: one that imitates Myerson's mechanism via taxation principle, and the other that posts a uniform price. We design a factor revealing LP that crisply captures the approximation factor of this SPM. In the H-units setting, our SPM yields the first improvement in approximation over the correlation-gap-based approximation factor from over eight years ago (Yan 2011). In the position auction setting, our SPM yields the first higher-than 1-1/e approximation factor. In the 1-unit setting, our SPM yields better than the current best known approximation factor for small values of the number of buyers n. Our results on SPMs immediately imply improved performance guarantees for the equivalent free-order prophet inequality problem. In the recent surge of results that show an improvement over 1-1/e for SPMs, ours is the only technique that generalizes beyond the 1-unit setting, and yields the first improvement over long established approximation factors in several settings.
We study revenue maximization through sequential posted-price (SPP) mechanisms in single-dimensional settings with $n$ buyers and independent but not necessarily identical value distributions. We construct the SPP mechanisms by considering … We study revenue maximization through sequential posted-price (SPP) mechanisms in single-dimensional settings with $n$ buyers and independent but not necessarily identical value distributions. We construct the SPP mechanisms by considering the best of two simple pricing rules: one that imitates the revenue optimal mchanism, namely the Myersonian mechanism, via the taxation principle and the other that posts a uniform price. Our pricing rules are rather generalizable and yield the first improvement over long-established approximation factors in several settings. We design factor-revealing mathematical programs that crisply capture the approximation factor of our SPP mechanism. In the single-unit setting, our SPP mechanism yields a better approximation factor than the state of the art prior to our work (Azar, Chiplunkar & Kaplan, 2018). In the multi-unit setting, our SPP mechanism yields the first improved approximation factor over the state of the art after over nine years (Yan, 2011 and Chakraborty et al., 2010). Our results on SPP mechanisms immediately imply improved performance guarantees for the equivalent free-order prophet inequality problem. In the position auction setting, our SPP mechanism yields the first higher-than $1-1/e$ approximation factor. In eager second-price (ESP) auctions, our two simple pricing rules lead to the first improved approximation factor that is strictly greater than what is obtained by the SPP mechanism in the single-unit setting.
We study revenue maximization through sequential posted-price (SPP) mechanisms in single-dimensional settings with $n$ buyers and independent but not necessarily identical value distributions. We construct the SPP mechanisms by considering … We study revenue maximization through sequential posted-price (SPP) mechanisms in single-dimensional settings with $n$ buyers and independent but not necessarily identical value distributions. We construct the SPP mechanisms by considering the best of two simple pricing rules: one that imitates the revenue optimal mchanism, namely the Myersonian mechanism, via the taxation principle and the other that posts a uniform price. Our pricing rules are rather generalizable and yield the first improvement over long-established approximation factors in several settings. We design factor-revealing mathematical programs that crisply capture the approximation factor of our SPP mechanism. In the single-unit setting, our SPP mechanism yields a better approximation factor than the state of the art prior to our work (Azar, Chiplunkar & Kaplan, 2018). In the multi-unit setting, our SPP mechanism yields the first improved approximation factor over the state of the art after over nine years (Yan, 2011 and Chakraborty et al., 2010). Our results on SPP mechanisms immediately imply improved performance guarantees for the equivalent free-order prophet inequality problem. In the position auction setting, our SPP mechanism yields the first higher-than $1-1/e$ approximation factor. In eager second-price (ESP) auctions, our two simple pricing rules lead to the first improved approximation factor that is strictly greater than what is obtained by the SPP mechanism in the single-unit setting.
We study the question of setting and testing reserve prices in single item auctions when the bidders are not identical. At a high level, there are two generalizations of the … We study the question of setting and testing reserve prices in single item auctions when the bidders are not identical. At a high level, there are two generalizations of the standard second price auction: in the lazy version we first determine the winner, and then apply reserve prices; in the eager version we first discard the bidders not meeting their reserves, and then determine the winner among the rest. We show that the two versions have dramatically different properties: lazy reserves are easy to optimize, and A/B test in production, whereas eager reserves always lead to higher welfare, but their optimization is NP-complete, and naive A/B testing will lead to incorrect conclusions. Despite their different characteristics, we show that the overall revenue for the two scenarios is always within a factor of 2 of each other, even in the presence of correlated bids. Moreover, we prove that the eager auction dominates the lazy auction on revenue whenever the bidders are independent or symmetric. We complement our theoretical results with simulations on real world data that show that even suboptimally set eager reserve prices are preferred from a revenue standpoint.
We study the question of setting and testing reserve prices in single item auctions when the bidders are not identical. At a high level, there are two generalizations of the … We study the question of setting and testing reserve prices in single item auctions when the bidders are not identical. At a high level, there are two generalizations of the standard second price auction: in the lazy version we first determine the winner, and then apply reserve prices; in the eager version we first discard the bidders not meeting their reserves, and then determine the winner among the rest. We show that the two versions have dramatically different properties: lazy reserves are easy to optimize, and A/B test in production, whereas eager reserves always lead to higher welfare, but their optimization is NP-complete, and naive A/B testing will lead to incorrect conclusions. Despite their different characteristics, we show that the overall revenue for the two scenarios is always within a factor of 2 of each other, even in the presence of correlated bids. Moreover, we prove that the eager auction dominates the lazy auction on revenue whenever the bidders are independent or symmetric. We complement our theoretical results with simulations on real world data that show that even suboptimally set eager reserve prices are preferred from a revenue standpoint.
We study the question of setting and testing reserve prices in single item auctions when the bidders are not identical. At a high level, there are two generalizations of the … We study the question of setting and testing reserve prices in single item auctions when the bidders are not identical. At a high level, there are two generalizations of the standard second price auction: in the lazy version we first determine the winner, and then apply reserve prices; in the eager version we first discard the bidders not meeting their reserves, and then determine the winner among the rest. We show that the two versions have dramatically different properties: lazy reserves are easy to optimize, and A/B test in production, whereas eager reserves always lead to higher welfare, but their optimization is NP-complete, and naive A/B testing will lead to incorrect conclusions. Despite their different characteristics, we show that the overall revenue for the two scenarios is always within a factor of 2 of each other, even in the presence of correlated bids. Moreover, we prove that the eager auction dominates the lazy auction on revenue whenever the bidders are independent or symmetric. We complement our theoretical results with simulations on real world data that show that even suboptimally set eager reserve prices are preferred from a revenue standpoint.
In this paper we initiate the study of optimization of bandit type problems in scenarios where the feedback of a play is not immediately known. This arises naturally in allocation … In this paper we initiate the study of optimization of bandit type problems in scenarios where the feedback of a play is not immediately known. This arises naturally in allocation problems which have been studied extensively in the literature, albeit in the absence of delays in the feedback. We study this problem in the Bayesian setting. In presence of delays, no solution with provable guarantees is known to exist with sub-exponential running time. We show that bandit problems with delayed feedback that arise in allocation settings can be forced to have significant structure, with a slight loss in optimality. This structure gives us the ability to reason about the relationship of single arm policies to the entangled optimum policy, and eventually leads to a O(1) approximation for a significantly general class of priors. The structural insights we develop are of key interest and carry over to the setting where the feedback of an action is available instantaneously, and we improve all previous results in this setting as well.
Many advertisers buy advertisements (ads) on the Internet or on traditional media and seek simple, online mechanisms to reserve ad slots in advance. Media publishers represent a vast and varying … Many advertisers buy advertisements (ads) on the Internet or on traditional media and seek simple, online mechanisms to reserve ad slots in advance. Media publishers represent a vast and varying inventory, and they too seek automatic, online mechanisms for pricing and allocating such reservations. In this paper, we present and study a simple model for auctioning such ad slots in advance. Bidders arrive sequentially and report which slots they are interested in. The seller must decide immediately whether or not to grant a reservation. Our model allows a seller to accept reservations, but possibly cancel the allocations later and pay the bidder a cancellation compensation (bump payment). Our main result is an online mechanism to derive prices and bump payments that is efficient to implement. This mechanism has many desirable properties. It is individually rational; winners have an incentive to be honest and bidding one's true value dominates any lower bid. Our mechanism's efficiency is within a constant fraction of the a posteriori optimally efficient solution. Its revenue is within a constant fraction of the a posteriori revenue of the Vickrey-Clarke-Groves mechanism. Our results make no assumptions about the order of arrival of bids or the value distribution of bidders and still hold if the items for sale are elements of a matroid, a more general setting than slot allocation.
Sponsored search involves running an auction among advertisers who bid in order to have their ad shown next to search results for specific keywords. Currently, the most popular auction for … Sponsored search involves running an auction among advertisers who bid in order to have their ad shown next to search results for specific keywords. Currently, the most popular auction for sponsored search is the "Generalized Second Price" (GSP) auction in which advertisers are assigned to slots in the decreasing order of their "score," which is defined as the product of their bid and click-through rate. In the past few years, there has been significant research on the game-theoretic issues that arise in an advertiser's interaction with the mechanism as well as possible redesigns of the mechanism, but this ranking order has remained standard. From a search engine's perspective, the fundamental question is: what is the best assignment of advertisers to slots? Here "best" could mean "maximizing user satisfaction," "most efficient," "revenue-maximizing," "simplest to interact with," or a combination of these. To answer this question we need to understand the behavior of a search engine user when she sees the displayed ads, since that defines the commodity the advertisers are bidding on, and its value. Most prior work has assumed that the probability of a user clicking on an ad is independent of the other ads shown on the page. We propose a simple Markovian user model that does not make this assumption. We then present an algorithm to determine the most efficient assignment under this model, which turns out to be different than that of GSP. A truthful auction then follows from an application of the Vickrey-Clarke-Groves (VCG) mechanism. Further, we show that our assignment has many of the desirable properties of GSP that makes bidding intuitive. At the technical core of our result are a number of insights about the structure of the optimal assignment.
We consider the "Offline Ad Slot Scheduling" problem, where advertisers must be scheduled to "sponsored search" slots during a given period of time. Advertisers specify a budget constraint, as well … We consider the "Offline Ad Slot Scheduling" problem, where advertisers must be scheduled to "sponsored search" slots during a given period of time. Advertisers specify a budget constraint, as well as a maximum cost per click, and may not be assigned to more than one slot for a particular search. We give a truthful mechanism under the utility model where bidders try to maximize their clicks, subject to their personal constraints. In addition, we show that the revenue-maximizing mechanism is not truthful, but has a Nash equilibrium whose outcome is identical to our mechanism. As far as we can tell, this is the first treatment of sponsored search that directly incorporates both multiple slots and budget constraints into an analysis of incentives. Our mechanism employs a descending-price auction that maintains a solution to a certain machine scheduling problem whose job lengths depend on the price, and hence is variable over the auction. The price stops when the set of bidders that can afford that price pack exactly into a block of ad slots, at which point the mechanism allocates that block and continues on the remaining slots. To prove our result on the equilibrium of the revenue-maximizing mechanism, we first show that a greedy algorithm suffices to solve the revenue-maximizing linear program; we then use this insight to prove that bidders allocated in the same block of our mechanism have no incentive to deviate from bidding the fixed price of that block.
In sponsored search, a number of advertising slots are available on a search results page and have to be allocated among a set of advertisers competing to display an ad on the page. This gives rise to a bipartite matching market that is typically cleared by way of an automated auction. Several auction mechanisms have been proposed, with variants of the Generalized Second Price (GSP) being widely used in practice. A rich body of work on bipartite matching markets builds upon the stable marriage model of Gale and Shapley and the assignment model of Shapley and Shubik. We apply insights from this line of research into the structure of stable outcomes and their incentive properties to advertising auctions. We model advertising auctions in terms of an assignment model with linear utilities, extended with bidder and item specific maximum and minimum prices. Auction mechanisms like the commonly used GSP or the well-known Vickrey-Clarke-Groves (VCG) are interpreted as simply computing a bidder-optimal stable matching in this model, for a suitably defined set of bidder preferences. In our model, the existence of a stable matching is guaranteed, and under a non-degeneracy assumption a bidder-optimal stable matching exists as well. We give an algorithm to find such a matching in polynomial time, and use it to design a truthful mechanism that generalizes GSP, is truthful for profit-maximizing bidders, implements features like bidder-specific minimum prices and position-specific bids, and works for rich mixtures of bidders and preferences.
We examine several online matching problems, with applications to Internet advertising reservation systems. Consider an edge-weighted bipartite graph G, with partite sets L, R. We develop an 8-competitive algorithm for the following secretary problem: Initially given R, and the size of L, the algorithm receives the vertices of L sequentially, in a random order. When a vertex l in L is seen, all edges incident to l are revealed, together with their weights. The algorithm must immediately either match l to an available vertex of R, or decide that l will remain unmatched. Dimitrov and Plaxton show a 16-competitive algorithm for the transversal matroid secretary problem, which is the special case with weights on vertices, not edges. (Equivalently, one may assume that for each l in L, the weights on all edges incident to l are identical.) We use a similar algorithm, but simplify and improve the analysis to obtain a better competitive ratio for the more general problem. Perhaps of more interest is the fact that our analysis is easily extended to obtain competitive algorithms for similar problems, such as finding disjoint sets of edges in hypergraphs whose edges arrive online. We also introduce secretary problems with adversarially chosen groups. Finally, we give a 2e-competitive algorithm for the secretary problem on graphic matroids, where, with edges appearing online, the goal is to find a maximum-weight acyclic subgraph of a given graph.
Internet search companies sell advertisement slots based on users' search queries via an auction. Advertisers have to determine how to place bids on the keywords of their interest in order to maximize their return for a given budget: this is the budget optimization problem. The solution depends on the distribution of future queries. In this paper, we formulate stochastic versions of the budget optimization problem based on natural probabilistic models of distribution over future queries, and address two questions that arise. [Evaluation] Given a solution, can we evaluate the expected value of the objective function? [Optimization] Can we find a solution that maximizes the objective function in expectation? Our main results are approximation and complexity results for these two problems in our three stochastic models. In particular, our algorithmic results show that simple prefix strategies that bid on all cheap keywords up to some level are either optimal or good approximations for many cases; we show other cases to be NP-hard.
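A prefix strategy of the kind analyzed here can be sketched in a few lines: sort keywords by cost per click and bid on the cheapest ones while the budget lasts. The data layout below (per-keyword cost per click and expected click volume) is an assumption made purely for illustration:

    # Minimal sketch of a prefix strategy: bid on all cheap keywords,
    # in increasing order of cost per click, while the budget lasts.
    # The (name, cpc, volume) triples are illustrative inputs.
    def prefix_strategy(keywords, budget):
        chosen, spend, clicks = [], 0.0, 0.0
        for name, cpc, volume in sorted(keywords, key=lambda k: k[1]):
            cost = cpc * volume  # expected spend if we win all clicks
            if spend + cost > budget:
                break
            chosen.append(name)
            spend += cost
            clicks += volume
        return chosen, spend, clicks

    kws = [("shoes", 0.10, 500), ("boots", 0.25, 300), ("heels", 0.60, 200)]
    print(prefix_strategy(kws, budget=100.0))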
Internet search companies sell advertisement slots based on users' search queries via an auction. While there has been a lot of attention on the auction process and its game-theoretic aspects, our focus is on the advertisers. In particular, the advertisers have to solve a complex optimization problem of how to place bids on the keywords of their interest so that they can maximize their return (the number of user clicks on their ads) for a given budget. We model the entire process and study this budget optimization problem. While most variants are NP-hard, we show, perhaps surprisingly, that simply randomizing between two uniform strategies that bid equally on all the keywords works well. More precisely, this strategy gets at least a 1-1/e fraction of the maximum clicks possible. Such uniform strategies are likely to be practical. We also present inapproximability results, and optimal algorithms for variants of the budget optimization problem.
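The two-bid randomization can be sketched against a "bid landscape" that maps a uniform bid to its expected cost and clicks: find the adjacent pair of landscape points straddling the budget and mix between them so the budget is spent exactly in expectation. The landscape data here is an illustrative assumption, not taken from the paper:

    import random

    # Sketch of randomizing between two uniform bids, assuming a discrete
    # "bid landscape" of (bid, expected cost, expected clicks) triples,
    # with cost and clicks nondecreasing in the bid. Values are illustrative.
    landscape = [(0.0, 0.0, 0.0), (0.5, 40.0, 300.0),
                 (1.0, 90.0, 480.0), (2.0, 200.0, 600.0)]

    def two_bid_mixture(landscape, budget):
        # Find adjacent bids b_lo, b_hi with cost(b_lo) <= budget < cost(b_hi).
        for (b_lo, c_lo, _), (b_hi, c_hi, _) in zip(landscape, landscape[1:]):
            if c_lo <= budget < c_hi:
                p = (budget - c_lo) / (c_hi - c_lo)  # expected cost = budget
                return b_hi if random.random() < p else b_lo
        return landscape[-1][0]  # budget covers the whole landscape

    print(two_bid_mixture(landscape, budget=120.0))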
Let $X_i \geq 0$ be independent, $i = 1, \cdots, n$, and $X^\ast_n = \max(X_1, \cdots, X_n)$. Let $t(c)$ (respectively $s(c)$) be the threshold stopping rule for $X_1, \cdots, X_n$, defined as the smallest $i$ for which $X_i \geq c$ (respectively $X_i > c$), and $= n$ if no such $i$ exists. Let $m$ be a median of the distribution of $X^\ast_n$. It is shown that for every $n$ and $\underline{X}$, either $EX^\ast_n \leq 2EX_{t(m)}$ or $EX^\ast_n \leq 2EX_{s(m)}$. This improves previously known results, [1], [4]. Some results for i.i.d. $X_i$ are also included.
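The stopping rule in this abstract is easy to probe by simulation: set the threshold to a median $m$ of the distribution of the maximum and stop at the first $X_i \geq m$. The Monte Carlo sketch below, with arbitrary exponential inputs, illustrates the factor-2 guarantee numerically; it is an illustration, not a proof:

    import random

    # Monte Carlo check of the median-threshold rule t(m): stop at the
    # first X_i >= m, where m is a median of max(X_1, ..., X_n).
    # Exponential X_i with different rates are an arbitrary illustration.
    rates = [1.0, 0.5, 2.0, 1.5]

    def sample():
        return [random.expovariate(r) for r in rates]

    maxima = sorted(max(sample()) for _ in range(200_000))
    m = maxima[len(maxima) // 2]  # empirical median of X*_n

    trials, reward, best = 200_000, 0.0, 0.0
    for _ in range(trials):
        xs = sample()
        stopped = next((x for x in xs if x >= m), xs[-1])  # t(m); last if none
        reward += stopped
        best += max(xs)

    print(f"E[X_t(m)] ~ {reward/trials:.3f},  E[X*_n]/2 ~ {best/(2*trials):.3f}")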
Hill and Kertz studied the prophet inequality on iid distributions [The Annals of Probability 1982]. They proved a theoretical bound of 1 - 1/e on the approximation factor of their algorithm. They conjectured that the best approximation factor for arbitrarily large n is 1/(1+1/e) ≃ 0.731. This conjecture remained open prior to this paper for over 30 years. In this paper we present a threshold-based algorithm for the prophet inequality with n iid distributions. Using a nontrivial and novel approach we show that our algorithm is a 0.738-approximation algorithm. By beating the bound of 1/(1+1/e), this refutes the conjecture of Hill and Kertz. Moreover, we generalize our results to non-uniform distributions and discuss their applications in mechanism design.
Consider a player who can probe a sequence of $n$ independent random variables $X_1, \ldots, X_n$ with known distributions. After observing (the realized value of) $X_i$, the player needs to decide whether to stop and earn reward $X_i$, or reject the reward and probe the next variable $X_{i+1}$. The goal is to maximize the expected reward at the stopping time. This is an instance of the optimal stopping problem, which is a fundamental problem studied from many different aspects in mathematics, statistics, and computer science, and has found a wide variety of applications in sequential decision making and mechanism design.
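For finitely many independent variables with known discrete distributions, the optimal rule can be computed by backward induction: accept $X_i$ exactly when it beats the expected value of continuing. A minimal sketch, assuming each distribution is given as (value, probability) pairs:

    # Backward induction for the optimal stopping problem: values[i] is the
    # expected reward from behaving optimally when X_i is about to be probed.
    # Each distribution is a list of (value, probability) pairs summing to 1.
    def optimal_stopping_values(dists):
        cont = 0.0  # value of continuing past the last variable is 0
        values = [0.0] * len(dists)
        for i in range(len(dists) - 1, -1, -1):
            # Stop iff the realized value beats the continuation value.
            values[i] = sum(p * max(v, cont) for v, p in dists[i])
            cont = values[i]
        return values  # values[0] is the optimal expected reward

    dists = [[(0, 0.5), (10, 0.5)], [(4, 1.0)], [(0, 0.9), (50, 0.1)]]
    print(optimal_stopping_values(dists))  # e.g. values[0] = 7.5 here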
In this article, we will formalize the method of dual fitting and the idea of factor-revealing LP. This combination is used to design and analyze two greedy algorithms for the metric uncapacitated facility location problem. Their approximation factors are 1.861 and 1.61, with running times of $O(m \log m)$ and $O(n^3)$, respectively, where $n$ is the total number of vertices and $m$ is the number of edges in the underlying complete bipartite graph between cities and facilities. The algorithms are used to improve recent results for several variants of the problem.
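The ratio-greedy idea underlying these algorithms can be sketched compactly: repeatedly pick the "star" (a facility plus a prefix of its closest unconnected cities) of minimum average cost, treating already-open facilities as free. This is a simplified sketch of the style of algorithm, not a line-by-line transcription of the 1.861 or 1.61 variants:

    # Greedy facility location via minimum-ratio stars: repeatedly open the
    # (facility, prefix-of-closest-unconnected-cities) star with the lowest
    # average cost; already-opened facilities cost 0 in later stars.
    def greedy_facility_location(open_cost, dist):
        n_fac, n_city = len(open_cost), len(dist[0])
        unconnected = set(range(n_city))
        opened, total = set(), 0.0
        while unconnected:
            best = None  # (ratio, facility, cities, cost)
            for f in range(n_fac):
                fee = 0.0 if f in opened else open_cost[f]
                cities = sorted(unconnected, key=lambda c: dist[f][c])
                run = fee
                for k, c in enumerate(cities, start=1):
                    run += dist[f][c]
                    ratio = run / k
                    if best is None or ratio < best[0]:
                        best = (ratio, f, cities[:k], run)
            _, f, cities, cost = best
            opened.add(f)
            unconnected -= set(cities)
            total += cost
        return opened, total

    open_cost = [3.0, 1.0]
    dist = [[1.0, 2.0, 4.0], [2.0, 1.0, 1.0]]  # dist[facility][city]
    print(greedy_facility_location(open_cost, dist))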
This paper provides a general review of adaptive experimental designs which utilize accumulating information for assigning the best treatment to the most patients in clinical trials. The historical development of such methods is traced. Though the statistical literature on adaptive designs has developed rapidly and continues to grow, the methods are almost totally unused in practice. An extensive evaluation of why adaptive designs are rarely used in clinical trials is presented. It is asserted that most published methods have important deficiencies that render them unsuitable for application. Suggestions are offered for reorienting this area of research into directions that are potentially more useful for clinical trials.
In the classic prophet inequality, a problem in optimal stopping theory, samples from independent random variables (possibly differently distributed) arrive online. A gambler that knows the distributions, but cannot see the future, must decide at each point in time whether to stop and pick the current sample or to continue and lose that sample forever. The goal of the gambler is to maximize the expected value of what she picks and the performance measure is the worst case ratio between the expected value the gambler gets and what a prophet, that sees all the realizations in advance, gets. In the late seventies, Krengel and Sucheston, and Garling [16], established that this worst case ratio is a constant and that 1/2 is the best possible such constant. In the last decade the theory of prophet inequalities has resurged as an important problem due to its connections to posted price mechanisms, frequently used in online sales. A particularly interesting variant is the so-called Prophet Secretary problem, in which the only difference is that the samples arrive in a uniformly random order. For this variant several algorithms are known to achieve a constant of 1 - 1/e and very recently this barrier was slightly improved by Azar et al. [3]. In this paper we derive a way of analyzing multi-threshold strategies that basically set a nonincreasing sequence of thresholds to be applied at different times. The gambler will thus stop the first time a sample surpasses the corresponding threshold. Specifically, we consider a class of very robust strategies that we call blind quantile strategies. These constitute a clever generalization of single threshold strategies and consist in fixing a function which is used to define a sequence of thresholds once the instance is revealed. Our main result shows that these strategies can achieve a constant of 0.669 in the Prophet Secretary problem, improving upon the best known result of Azar et al. [3], and even that of Beyhaghi et al. [4] that works in the case the gambler can select the order of the samples. The crux of the analysis is a very precise analysis of the underlying stopping time distribution for the gambler's strategy that is inspired by the theory of Schur convex functions. We further prove that our family of blind strategies cannot lead to a constant better than 0.675. Finally, we prove that no nonadaptive algorithm for the gambler can achieve a constant better than 0.732, which also improves upon a recent result of Azar et al. [3]. Here, a nonadaptive algorithm is an algorithm whose decision to stop can depend on the index of the random variable being sampled, on the value sampled, and on the time, but not on the history that has been observed.
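A rough simulation of the multi-threshold idea: fix a nonincreasing sequence of quantile levels, translate them into value thresholds via the distributions (identical Uniform(0,1) here, purely for simplicity), and stop at the first sample exceeding the threshold for its arrival time. The specific threshold schedule below is an arbitrary illustration, not the optimized one from the paper:

    import random

    # Multi-threshold rule for prophet secretary (sketch): samples arrive in
    # random order; at time t we accept a sample iff it exceeds thresholds[t],
    # where thresholds are nonincreasing in t.
    n = 10
    quantiles = [1 - (t + 1) / (2 * n) for t in range(n)]  # nonincreasing
    thresholds = quantiles  # for Uniform(0,1) the q-quantile is q itself

    trials, got, prophet = 100_000, 0.0, 0.0
    for _ in range(trials):
        xs = [random.random() for _ in range(n)]
        random.shuffle(xs)  # uniformly random arrival order
        got += next((x for t, x in enumerate(xs) if x > thresholds[t]), 0.0)
        prophet += max(xs)

    print(f"gambler/prophet ~ {got/prophet:.3f}")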
This paper reconsiders the usual sequential decision problem of choosing between two simple hypotheses $H_0$ and $H_1$, in terms of independent, identically distributed random variables, when there is a time delay, assumed to have a known exponential distribution, in obtaining observations. A basic assumption underlying much of the current analysis is that results of taking observations are considered as immediately available. In other words, it is assumed that there is no time delay between the decision to take an observation and obtaining the result of the observation. This, of course, can be a tremendous limitation to the applicability of the theory. In actuality, such time lags can be substantial and taking an observation often involves experimentation. One important example, where this time delay often considerably inhibits the use of sequential analysis, is medical experimentation. Here, a long time may elapse between the application and the result of a treatment. The theory of sequential analysis is considered, explicitly taking account of time lags. At any point in time the decision maker must decide whether to stop and take action now or to continue and schedule more experiments. If he continues he must also decide how many more experiments to schedule. The problem basically then is to find an optimal procedure incorporating a stopping, scheduling and terminal action rule. There is an interesting interplay among these three; and optimal stopping rules, currently used for some problems, may not be optimal when scheduling factors are considered. The usual losses related to decision errors are specified and linear cost assumptions, with regard to amount of experimentation and time until a final decision, are made. Time until a terminal decision is an important variable. If it is decided to continue observation then scheduling many experiments will result in a small expected waiting time until the next result. However, this advantage must be balanced with the cost of scheduling these experiments. Finally, all the previous must be weighed with the loss of taking immediate action with only the currently available information. Bayes procedures are derived. The information state, at any time, because of the exponential assumption, will be described by $(n, \pi)$ where $\pi$ is the current posterior probability of $H_0$ and $n$ the number of results outstanding from tests already scheduled. As indicated in Section 2, when a known exponential time delay distribution is assumed, possible decision changes should be made only when test results are obtained. In Section 3, the usually used Sequential Probability Ratio Test (SPRT) stopping rule is studied. Here there are two values $0 < \pi_1 \leqq \pi_2 < 1$ and SPRT specifies that $(n, \pi)$ is a continuation state as long as $\pi_1 < \pi < \pi_2$. It is shown that there is a bounded function $z(\pi)$ such that the optimal scheduling quantity, for a continuation state $(n, \pi)$, is $y(n, \pi) = \max \lbrack 0, z(\pi) - n\rbrack$. That is, if $n > z(\pi)$ we schedule no experiments. On the other hand, if $n \leqq z(\pi)$ then $z(\pi) - n$ more experiments are scheduled. The functional equation approach of Dynamic Programming is used and provides a computational method for approximating $z(\pi)$. The general case, where the problem is to find an optimal stopping and scheduling rule, is studied in Section 4.
Various results about the optimal stopping region, in the $(n, \pi)$ plane, are derived and it is shown that the optimal procedure stops with probability one. The optimal stopping region is a kind of generalized SPRT described by functions $0 < \pi_1(n) \leqq \pi_2(n) < 1$ such that $(n, \pi)$ is a continuation state as long as $\pi_1(n) < \pi < \pi_2(n)$. Also, it is shown that there exist two bounded functions $z_1(\pi) \leqq z_2(\pi) \leqq \mathbf{M} < \infty$ such that if $(n, \pi)$ is a continuation point the optimal scheduling quantity is $y(n, \pi) = z_1(\pi) - n$ if $n \leqq z_1(\pi)$, and $y(n, \pi) = 0$ if $n \geqq z_2(\pi)$. When a continuous approximation to the problem is used, allowing $n$ to take on a continuous range of values, a stronger result is proven. Here, the two functions $z_1(\pi)$ and $z_2(\pi)$ may be taken as equal. The results for optimal scheduling rules have similarities to some problems studied in Inventory theory, see [5].
Inspired by real-time ad exchanges for online display advertising, we consider the problem of inferring a buyer's value distribution for a good when the buyer is repeatedly interacting with a seller through a posted-price mechanism. We model the buyer as a strategic agent, whose goal is to maximize her long-term surplus, and we are interested in mechanisms that maximize the seller's long-term revenue. We define the natural notion of strategic regret --- the lost revenue as measured against a truthful (non-strategic) buyer. We present seller algorithms that are no-(strategic)-regret when the buyer discounts her future surplus --- i.e. the buyer prefers showing advertisements to users sooner rather than later. We also give a lower bound on strategic regret that increases as the buyer's discounting weakens and shows, in particular, that any seller algorithm will suffer linear strategic regret if there is no discounting.
The secretary and the prophet inequality problems are central to the field of Stopping Theory. Recently, there has been a lot of work in generalizing these models to multiple items because of their applications in mechanism design. The most important of these generalizations are to matroids and to combinatorial auctions (which extend bipartite matching). Kleinberg-Weinberg [33] and Feldman et al. [17] show that for adversarial arrival order of random variables the optimal prophet inequalities give a 1/2-approximation. For many settings, however, it's conceivable that the arrival order is chosen uniformly at random, akin to the secretary problem. For such a random arrival model, we improve upon the 1/2-approximation and obtain (1 - 1/e)-approximation prophet inequalities for both matroids and combinatorial auctions. This also gives improvements to the results of Yan [45] and Esfandiari et al. [15] who worked in the special cases where we can fully control the arrival order or when there is only a single item. Our techniques are threshold-based. We convert our discrete problem into a continuous setting and then give a generic template on how to dynamically adjust these thresholds to lower bound the expected total welfare.
The promise of data science is that if data from a system can be recorded and understood then this understanding can potentially be utilized to improve the system. Behavioral and economic data, however, is different from scientific data in that it is subjective to the system. Behavior changes when the system changes, and to predict behavior for any given system change or to optimize over system changes, the behavioral model that generates the data must be inferred from the data. The ease with which this inference can be performed generally also depends on the system. Trivially, a system that ignores behavior does not admit any inference of a behavior generating model that can be used to predict behavior in a system that is responsive to behavior.
Much technical work has been devoted to the construction of schemes for optimizing some measure of the outcome of a clinical trial. The main ideas have been the use of data-dependent allocation, by which a majority of patients will tend to be assigned to the better treatment, and a decision-theoretic approach to the optimal allocation of treatment for a large 'horizon', including patients treated after the end of the trial. These ideas have rarely been used in practice, and some of the reasons for their neglect are outlined. It is suggested that more collaboration is needed between the proponents of these models and those statisticians concerned directly with the design and analysis of trials.
Implicitly defined (and easily approximated) universal constants $1.1 < a_n < 1.6, n = 2,3, \cdots$, are found so that if $X_1, X_2, \cdots$ are i.i.d. non-negative random variables and if $T_n$ is the set of stop rules for $X_1, \cdots, X_n$, then $E(\max\{X_1, \cdots, X_n\}) \leq a_n \sup\{EX_t: t \in T_n\}$, and the bound $a_n$ is best possible. Similar universal constants $0 < b_n < \frac{1}{4}$ are found so that if the $\{X_i\}$ are i.i.d. random variables taking values only in $\lbrack a, b\rbrack$, then $E(\max\{X_1, \cdots, X_n\}) \leq \sup\{EX_t: t \in T_n\} + b_n(b - a)$, where again the bound $b_n$ is best possible. In both situations, extremal distributions for which equality is attained (or nearly attained) are given in implicit form.
Until recently, statistical theory has been restricted to the design and analysis of sampling experiments in which the size and composition of the samples are completely determined before the experimentation begins. The reasons for this are partly historical, dating back to the time when the statistician was consulted, if at all, only after the experiment was over, and partly intrinsic in the mathematical difficulty of working with anything but a fixed number of independent random variables. A major advance now appears to be in the making with the creation of a theory of the sequential design of experiments, in which the size and composition of the samples are not fixed in advance but are functions of the observations themselves.
Classical learning assumes the learner is given a labeled data sample, from which it learns a model. The field of Active Learning deals with the situation where the learner begins not with a training sample, but instead with resources that it can use to obtain information to help identify the optimal model. To better understand this task, this paper presents and analyses the simplified (budgeted) active selection version, which captures the pure exploration aspect of many active learning problems in a clean and simple problem formulation. Here the learner can use a fixed budget of model probes (where each probe evaluates the specified model on a random indistinguishable instance) to identify which of a given set of possible models has the highest expected accuracy. Our goal is a policy that sequentially determines which model to probe next, based on the information observed so far. We present a formal description of this task, and show that it is NP-hard in general. We then investigate a number of algorithms for this task, including several existing ones (e.g., Round-Robin, Interval Estimation, Gittins) as well as some novel ones (e.g., Biased-Robin), describing first their approximation properties and then their empirical performance on various problem instances. We observe empirically that the simple Biased-Robin algorithm significantly outperforms the other algorithms in the case of identical costs and priors.
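The simplest of the policies named above, Round-Robin, fits in a few lines: spend the probe budget cycling through the models, then return the one with the highest empirical accuracy. The probe interface and the hidden accuracies below are illustrative assumptions:

    import random

    # Budgeted active model selection with the Round-Robin policy:
    # cycle through the models spending the probe budget, then return
    # the model with the highest empirical accuracy. probe(i) -> 0 or 1.
    def round_robin_select(n_models, budget, probe):
        hits = [0] * n_models
        tries = [0] * n_models
        for t in range(budget):
            i = t % n_models
            hits[i] += probe(i)
            tries[i] += 1
        return max(range(n_models),
                   key=lambda i: hits[i] / tries[i] if tries[i] else 0.0)

    # Illustrative instance: three models with hidden accuracies.
    acc = [0.6, 0.75, 0.7]
    probe = lambda i: int(random.random() < acc[i])
    print(round_robin_select(3, budget=300, probe=probe))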
We explore the properties of a congestion game in which users of a congested resource anticipate the effect of their actions on the price of the resource. When users are sharing a single resource, we establish that the aggregate utility received by the users is at least 3/4 of the maximum possible aggregate utility. We also consider extensions to a network context, where users submit individual payments for each link in the network they may wish to use. In this network model, we again show that the selfish behavior of the users leads to an aggregate utility that is no worse than 3/4 of the maximum possible aggregate utility. We also show that the same analysis extends to a wide class of resource allocation systems where end users simultaneously require multiple scarce resources. These results form part of a growing literature on the "price of anarchy," i.e., the extent to which selfish behavior affects system efficiency.
In the design and analysis of revenue-maximizing auctions, auction performance is typically measured with respect to a prior distribution over inputs. The most obvious source for such a distribution is past data. The goal of this paper is to understand how much data is necessary and sufficient to guarantee near-optimal expected revenue.
Under the pari-mutuel system of betting on horse races the final track's odds are in some sense a consensus of the 'subjective odds' of the individual bettors weighted by the amounts of their bets. The properties which this consensus must possess are formulated, and it is proved that there always exists a unique set of odds having the required properties.
In testing one hypothesis against another, observations may be obtained sequentially. The special feature of the situation considered in this paper is that after the decision to stop observation is made a fixed number of additional observations are available for the terminal decision. The test is then based on the ratio of the likelihoods (at two simple hypotheses) of all the observations (whatever stopping rule is used). Numerical comparisons are made for various cases when the hypotheses concern the mean of a normal distribution with the variance known; these cases include a fixed-sample-size procedure as well as cases where several numbers of additional observations are available after the Wald sequential probability ratio test is used as a stopping rule.
This paper considers a procedure, based on Anderson [1964], wherein the terminal decision of a sequential probability ratio test (SPRT) may be subsequently altered by a likelihood ratio procedure incorporating both the sequential observations and m additional observations. The procedure in this paper is concerned with sequential tests comparing two binomial parameters, and is motivated in the context of matched pair clinical trials. Three cases are considered: m is a known number, m has a binomial distribution, and m has a Poisson distribution. A computational formula is derived to evaluate the probabilities of the two types of error, and a set of charts is obtained to illustrate the application of the procedure.
This tutorial introduces the CMA Evolution Strategy (ES), where CMA stands for Covariance Matrix Adaptation. The CMA-ES is a stochastic, or randomized, method for real-parameter (continuous domain) optimization of non-linear, non-convex functions. We try to motivate and derive the algorithm from intuitive concepts and from requirements of non-linear, non-convex search in continuous domain.
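In practice the algorithm described in this tutorial is available as the `cma` Python package (the reference implementation maintained by the tutorial's author). A minimal usage sketch on a standard non-convex test function, assuming the package is installed:

    import cma  # pip install cma

    # Minimize the Rosenbrock function with CMA-ES, starting from the
    # origin in 4 dimensions with initial step size 0.5.
    def rosenbrock(x):
        return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
                   for i in range(len(x) - 1))

    es = cma.CMAEvolutionStrategy(4 * [0.0], 0.5)
    es.optimize(rosenbrock)
    print(es.result.xbest)  # should be close to the all-ones optimum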
Random variables $X_1, \cdots, X_k$ are said to be negatively associated (NA) if for every pair of disjoint subsets $A_1, A_2$ of $\{1, 2, \cdots, k\}$, $\operatorname{Cov}\lbrack f(X_i, i \in A_1), g(X_j, j \in A_2) \rbrack \leq 0$ for all nondecreasing functions $f, g$. The basic properties of negative association are derived. Especially useful is the property that nondecreasing functions of mutually exclusive subsets of NA random variables are NA. This property is shown not to hold for several other types of negative dependence recently proposed. One consequence is the inequality $P(X_i \leq x_i, i = 1, \cdots, k) \leq \prod^k_1 P(X_i \leq x_i)$ for NA random variables $X_1, \cdots, X_k$, and the dual inequality resulting from reversing the inequalities inside the square brackets. In another application it is shown that negatively correlated normal random variables are NA. Other NA distributions are the (a) multinomial, (b) convolution of unlike multinomials, (c) multivariate hypergeometric, (d) Dirichlet, and (e) Dirichlet compound multinomial. Negative association is shown to arise in situations where the probability measure is permutation invariant. Applications of this are considered for sampling without replacement as well as for certain multiple ranking and selection procedures. In a somewhat striking example, NA and positive association, representing quite strong opposing types of dependence, are shown to exist side by side in models of categorical data analysis.
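The product inequality for NA variables is easy to probe numerically with one of the listed examples, the multinomial: empirically, the joint probability that all counts fall below their thresholds should not exceed the product of the marginal probabilities. A Monte Carlo sketch with arbitrary parameters:

    import random

    # Monte Carlo check of P(X_i <= x_i for all i) <= prod_i P(X_i <= x_i)
    # for a multinomial vector, one of the NA examples in the paper.
    # The parameters n, probs, and thresholds x are arbitrary illustrations.
    def multinomial(n, probs):
        counts = [0] * len(probs)
        for _ in range(n):
            r, acc, i = random.random(), 0.0, 0
            for i, p in enumerate(probs):
                acc += p
                if r < acc:
                    break
            counts[i] += 1  # last bucket catches any rounding slack
        return counts

    n, probs, x = 20, [0.2, 0.3, 0.5], [4, 6, 10]
    trials, joint = 100_000, 0
    marg = [0] * len(probs)
    for _ in range(trials):
        c = multinomial(n, probs)
        joint += all(ci <= xi for ci, xi in zip(c, x))
        for i in range(len(probs)):
            marg[i] += c[i] <= x[i]

    prod = 1.0
    for m in marg:
        prod *= m / trials
    print(joint / trials, "<=", prod)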
We investigate the power of randomness in the context of a fundamental Bayesian optimal mechanism design problem - a single seller aims to maximize expected revenue by allocating multiple kinds of resources to "unit-demand" agents with preferences drawn from a known distribution. When the agents' preferences are single-dimensional Myerson's seminal work [14] shows that randomness offers no benefit - the optimal mechanism is always deterministic. In the multi-dimensional case, where each agent's preferences are given by different values for each of the available services, Briest et al. [6] recently showed that the gap between the expected revenue obtained by an optimal randomized mechanism and an optimal deterministic mechanism can be unbounded even when a single agent is offered only 4 services. However, this large gap is attained through unnatural instances where values of the agent for different services are correlated in a specific way. We show that when the agent's values involve no correlation or a specific kind of positive correlation, the benefit of randomness is only a small constant factor (4 and 8 respectively). Our model of positively correlated values (that we call the common base value model) is a natural model for unit-demand agents and items that are substitutes. Our results extend to multiple agent settings as well.
For revenue and welfare maximization in single-dimensional Bayesian settings, Chawla et al. (STOC10) recently showed that sequential posted-price mechanisms (SPMs), though simple in form, can perform surprisingly well compared to the optimal mechanisms. In this paper, we give a theoretical explanation of this fact, based on a connection to the notion of correlation gap. Loosely speaking, for auction environments with matroid constraints, we can relate the performance of a mechanism to the expectation of a monotone submodular function over a random set. This random set corresponds to the winner set for the optimal mechanism, which is highly correlated, and corresponds to a certain demand set for SPMs, which is independent. The notion of correlation gap of Agrawal et al. (SODA10) quantifies how much we "lose" in the expectation of the function by ignoring correlation in the random set, and hence bounds our loss in using a certain SPM instead of the optimal mechanism. Furthermore, the correlation gap of a monotone and submodular function is known to be small, and it follows that a certain SPM can approximate the optimal mechanism by a good constant factor. Exploiting this connection, we give a tight analysis of a greedy-based SPM of Chawla et al. for several environments. In particular, we show that it gives an $e/(e-1)$-approximation for matroid environments, gives asymptotically a $1/(1-1/\sqrt{2\pi k})$-approximation for the important sub-case of $k$-unit auctions, and gives a $(p+1)$-approximation for environments with $p$-independent set system constraints.
A threshold of ln n for approximating set cover (preliminary version). Uriel Feige, Department of Applied Math and Computer Science, The Weizmann Institute, Rehovot 76100, Israel. STOC '96: Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, July 1996, pages 314-318. https://doi.org/10.1145/237814.237977
Algorithmic pricing is the computational problem that sellers (e.g., in supermarkets) face when trying to set prices for their items to maximize their profit in the presence of a known demand. Guruswami et al. (SODA, 2005) proposed this problem and gave logarithmic approximations (in the number of consumers) for the unit-demand and single-parameter cases where there is a specific set of consumers and their valuations for bundles are known precisely. Subsequently several versions of the problem have been shown to have poly-logarithmic inapproximability. This problem has direct ties to the important open question of better understanding the Bayesian optimal mechanism in multi-parameter agent settings; however, for this purpose approximation factors logarithmic in the number of agents are inadequate. It is therefore of vital interest to consider special cases where constant approximations are possible. We consider the unit-demand variant of this pricing problem. Here a consumer has a valuation for each different item and their value for a set of items is simply the maximum value they have for any item in the set. Instead of considering a set of consumers with precisely known preferences, like the prior algorithmic pricing literature, we assume that the preferences of the consumers are drawn from a distribution. This is the standard assumption in economics; furthermore, the setting of a specific set of customers with specific preferences, which is employed in all of the prior work in algorithmic pricing, is a special case of this general Bayesian pricing problem, where there is a discrete Bayesian distribution for preferences specified by picking one consumer uniformly from the given set of consumers. Notice that the distribution over the valuations for the individual items that this generates is obviously correlated. Our work complements these existing works by considering the case where the consumer's valuations for the different items are independent random variables. Our main result is a constant approximation algorithm for this problem that makes use of an interesting connection between this problem and the concept of virtual valuations from the single-parameter Bayesian optimal mechanism design literature.
The common approach to the multiplicity problem calls for controlling the familywise error rate (FWER). This approach, though, has faults, and we point out a few. A different approach to problems of multiple significance testing is presented. It calls for controlling the expected proportion of falsely rejected hypotheses — the false discovery rate. This error rate is equivalent to the FWER when all hypotheses are true but is smaller otherwise. Therefore, in problems where the control of the false discovery rate rather than that of the FWER is desired, there is potential for a gain in power. A simple sequential Bonferroni-type procedure is proved to control the false discovery rate for independent test statistics, and a simulation study shows that the gain in power is substantial. The use of the new procedure and the appropriateness of the criterion are illustrated with examples.
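The sequential Bonferroni-type procedure described here is the step-up rule now known as Benjamini-Hochberg: sort the p-values, find the largest k with p_(k) <= (k/m)·q, and reject the k hypotheses with the smallest p-values. A self-contained sketch (the p-values below are made-up inputs for illustration):

    # Benjamini-Hochberg step-up procedure: given p-values and a target
    # false discovery rate q, reject H_(1),...,H_(k) for the largest k
    # with p_(k) <= (k/m) * q.
    def benjamini_hochberg(pvalues, q):
        m = len(pvalues)
        order = sorted(range(m), key=lambda i: pvalues[i])
        k = 0
        for rank, i in enumerate(order, start=1):
            if pvalues[i] <= rank * q / m:
                k = rank
        return {order[j] for j in range(k)}  # indices of rejected hypotheses

    pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.5, 0.9]
    print(benjamini_hochberg(pvals, q=0.05))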
We consider a multi-round auction setting motivated by pay-per-click auctions for Internet advertising. In each round the auctioneer selects an advertiser and shows her ad, which is then either clicked or not. An advertiser derives value from clicks; the value of a click is her private information. Initially, neither the auctioneer nor the advertisers have any information about the likelihood of clicks on the advertisements. The auctioneer's goal is to design a (dominant strategies) truthful mechanism that (approximately) maximizes the social welfare.
In this paper, we present the first approximation algorithms for the problem of designing revenue optimal Bayesian incentive compatible auctions when there are multiple (heterogeneous) items and when bidders have arbitrary demand and budget constraints (and additive valuations). Our mechanisms are surprisingly simple: We show that a sequential all-pay mechanism is a 4-approximation to the revenue of the optimal ex-interim truthful mechanism with a discrete type space for each bidder, where her valuations for different items can be correlated. We also show that a sequential posted price mechanism is an O(1)-approximation to the revenue of the optimal ex-post truthful mechanism when the type space of each bidder is a product distribution that satisfies the standard hazard rate condition. We further show a logarithmic approximation when the hazard rate condition is removed, and complete the picture by showing that achieving a sub-logarithmic approximation, even for regular distributions and one bidder, requires pricing bundles of items. Our results are based on formulating novel LP relaxations for these problems, and developing generic rounding schemes from first principles.
UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimension reduction. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology. The result is a practical scalable algorithm that applies to real world data. The UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance. Furthermore, UMAP has no computational restrictions on embedding dimension, making it viable as a general purpose dimension reduction technique for machine learning.
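The reference implementation is distributed as the `umap-learn` Python package. A minimal usage sketch, assuming that package and scikit-learn are installed:

    import umap                      # pip install umap-learn
    from sklearn.datasets import load_digits

    # Embed the 64-dimensional digits data into 2D with UMAP.
    X, y = load_digits(return_X_y=True)
    reducer = umap.UMAP(n_neighbors=15, min_dist=0.1, n_components=2,
                        random_state=42)
    embedding = reducer.fit_transform(X)  # shape (n_samples, 2)
    print(embedding.shape)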
We develop a framework for obtaining polynomial time approximation schemes (PTAS) for a class of stochastic dynamic programs. Using our framework, we obtain the first PTAS for the following stochastic combinatorial optimization problem, ProbeMax: We are given a set of $n$ items, each item $i\in [n]$ has a value $X_i$ which is an independent random variable with a known (discrete) distribution $\pi_i$. We can probe a subset $P\subseteq [n]$ of items sequentially. Each time after probing an item $i$, we observe its value realization, which follows the distribution $\pi_i$. We can adaptively probe at most $m$ items and each item can be probed at most once. The reward is the maximum among the $m$ realized values. Our goal is to design an adaptive probing policy such that the expected value of the reward is maximized. To the best of our knowledge, the best known approximation ratio is $1-1/e$, due to Asadpour et al. (2015). We also obtain PTAS for some generalizations and variants of the problem and some other problems.
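Short of the PTAS, a natural non-adaptive baseline for ProbeMax is greedy set selection: the map from a set S to E[max of X_i over S] is monotone submodular, so greedily adding the item with the largest estimated marginal gain gives a (1 - 1/e) approximation to the best non-adaptive set. A Monte Carlo sketch with illustrative discrete distributions:

    import random

    # Greedy non-adaptive baseline for ProbeMax. Distributions are lists
    # of (value, probability) pairs; E[max] is estimated by sampling.
    def draw(dist):
        r, acc = random.random(), 0.0
        for v, p in dist:
            acc += p
            if r < acc:
                return v
        return dist[-1][0]

    def est_emax(chosen, dists, samples=20_000):
        if not chosen:
            return 0.0
        return sum(max(draw(dists[i]) for i in chosen)
                   for _ in range(samples)) / samples

    def greedy_probemax(dists, m):
        chosen = []
        for _ in range(m):
            best = max((i for i in range(len(dists)) if i not in chosen),
                       key=lambda i: est_emax(chosen + [i], dists))
            chosen.append(best)
        return chosen

    dists = [[(0, 0.9), (100, 0.1)], [(10, 1.0)], [(0, 0.5), (30, 0.5)]]
    print(greedy_probemax(dists, m=2))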
We study the fundamental problem of selling a single indivisible item to one of $n$ buyers with independent and potentially nonidentical value distributions. We focus on two simple and widely used selling mechanisms: the second price auction with \emph{eager} personalized reserve prices and the sequential posted price mechanism. Using a new approach, we improve the best-known performance guarantees for these mechanisms. We show that for every value of the number of buyers $n$, the eager second price (ESP) auction and sequential posted price mechanisms respectively earn at least $0.6620$ and $0.6543$ fractions of the optimal revenue. We also provide improved performance guarantees for these mechanisms when the number of buyers is small, which is the more relevant regime for many applications of interest. This in particular implies an improved bound of $0.6543$ for free-order prophet inequalities. Motivated by our improved revenue bounds, we further study the problem of optimizing reserve prices in the ESP auctions when the sorted order of personalized reserve prices among bidders is exogenous. We show that this problem can be solved polynomially. In addition, by analyzing a real auction dataset from Google's advertising exchange, we demonstrate the effectiveness of order-based pricing.
We study the problem of computing personalized reserve prices in eager second price auctions without having any assumption on valuation distributions. Here, the input is a dataset that contains the submitted bids of n buyers in a set of auctions and the goal is to return personalized reserve prices r that maximize the revenue earned on these auctions by running eager second price auctions with reserve r. We present a novel LP formulation to this problem and a rounding procedure which achieves a $(1+2(\sqrt{2}-1)e^{\sqrt{2}-2})^{-1} \approx 0.684$-approximation. This improves over the 1/2-approximation algorithm due to Roughgarden and Wang. We show that our analysis is tight for this rounding procedure. We also bound the integrality gap of the LP, which bounds the performance of any algorithm based on this LP.
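The objective being optimized can be stated in a few lines: given historical bids, the revenue of an eager second price auction with personalized reserves r discards bids below their reserves and charges the winner the larger of the runner-up surviving bid and the winner's own reserve. A sketch of this evaluation (the LP and rounding are not reproduced here; the bid data is illustrative):

    # Revenue of eager second price auctions with personalized reserves r,
    # evaluated on a dataset of historical bids. bids[a][i] is buyer i's
    # bid in auction a; r[i] is buyer i's reserve price.
    def eager_revenue(bids, r):
        total = 0.0
        for auction in bids:
            surviving = [(b, i) for i, b in enumerate(auction) if b >= r[i]]
            if not surviving:
                continue  # no bidder meets their reserve: no sale
            surviving.sort(reverse=True)
            winner_bid, winner = surviving[0]
            runner_up = surviving[1][0] if len(surviving) > 1 else 0.0
            total += max(runner_up, r[winner])
        return total

    bids = [[5.0, 3.0, 1.0], [2.0, 6.0, 4.0], [1.0, 1.5, 7.0]]
    r = [4.0, 2.0, 3.0]
    print(eager_revenue(bids, r))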
In the classical contextual bandits problem, in each round $t$, a learner observes some context $c$, chooses some action $a$ to perform, and receives some reward $r_{a,t}(c)$. We consider the variant of this problem where in addition to receiving the reward $r_{a,t}(c)$, the learner also learns the values of $r_{a,t}(c')$ for all other contexts $c'$; i.e., the rewards that would have been achieved by performing that action under different contexts. This variant arises in several strategic settings, such as learning how to bid in non-truthful repeated auctions (in this setting the context is the decision maker's private valuation for each auction). We call this problem the contextual bandits problem with cross-learning. The best algorithms for the classical contextual bandits problem achieve $\tilde{O}(\sqrt{CKT})$ regret against all stationary policies, where $C$ is the number of contexts, $K$ the number of actions, and $T$ the number of rounds. We demonstrate algorithms for the contextual bandits problem with cross-learning that remove the dependence on $C$ and achieve regret $\tilde{O}(\sqrt{KT})$ (when contexts are stochastic with known distribution), $\tilde{O}(K^{1/3}T^{2/3})$ (when contexts are stochastic with unknown distribution), and $\tilde{O}(\sqrt{KT})$ (when contexts are adversarial but rewards are stochastic). We simulate our algorithms on real auction data from an ad exchange running first-price auctions (showing that they outperform traditional contextual bandit algorithms).
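The key structural idea in the stochastic case is that cross-learning lets every round update the estimate of the played action under every context, so the exploration count per action is context-free. The sketch below is a simple UCB-flavored illustration of that idea under assumed Bernoulli rewards, not the paper's exact algorithms:

    import math, random

    # Sketch of UCB with cross-learning: playing action a in any context
    # reveals r_a(c') for every context c', so each play updates the
    # estimate of a under all contexts and counts[a] is context-free.
    def ucb_cross_learning(T, n_contexts, n_actions, reward, context_dist):
        means = [[0.0] * n_actions for _ in range(n_contexts)]
        counts = [0] * n_actions
        total = 0.0
        for t in range(T):
            c = random.choices(range(n_contexts), context_dist)[0]
            if t < n_actions:
                a = t  # play each action once to initialize
            else:
                a = max(range(n_actions), key=lambda a: means[c][a]
                        + math.sqrt(2 * math.log(t + 1) / counts[a]))
            rs = reward(a)  # rewards of action a under *all* contexts
            total += rs[c]
            counts[a] += 1
            for cc in range(n_contexts):  # cross-learning update
                means[cc][a] += (rs[cc] - means[cc][a]) / counts[a]
        return total

    # Illustrative instance: Bernoulli rewards, per-(context, action) means.
    mu = [[0.9, 0.1], [0.2, 0.8]]
    reward = lambda a: [float(random.random() < mu[c][a]) for c in range(2)]
    print(ucb_cross_learning(10_000, 2, 2, reward, [0.5, 0.5]))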
Community detection is often used to understand the structure of large and complex networks. One of the most popular algorithms for uncovering community structure is the so-called Louvain algorithm. We show that this algorithm has a major defect that largely went unnoticed until now: the Louvain algorithm may yield arbitrarily badly connected communities. In the worst case, communities may even be disconnected, especially when running the algorithm iteratively. In our experimental analysis, we observe that up to 25% of the communities are badly connected and up to 16% are disconnected. To address this problem, we introduce the Leiden algorithm. We prove that the Leiden algorithm yields communities that are guaranteed to be connected. In addition, we prove that, when the Leiden algorithm is applied iteratively, it converges to a partition in which all subsets of all communities are locally optimally assigned. Furthermore, by relying on a fast local move approach, the Leiden algorithm runs faster than the Louvain algorithm. We demonstrate the performance of the Leiden algorithm for several benchmark and real-world networks. We find that the Leiden algorithm is faster than the Louvain algorithm and uncovers better partitions, in addition to providing explicit guarantees.
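The Leiden algorithm is available through the `leidenalg` package built on python-igraph. A minimal usage sketch on a standard benchmark network, assuming both packages are installed:

    import igraph as ig
    import leidenalg  # pip install leidenalg python-igraph

    # Partition a well-known benchmark graph with the Leiden algorithm,
    # optimizing modularity.
    g = ig.Graph.Famous("Zachary")  # Zachary's karate club network
    partition = leidenalg.find_partition(g, leidenalg.ModularityVertexPartition)
    print(partition.summary())
    print(list(partition))  # communities as lists of vertex ids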