Martin Weitzman's "Pandora's problem" furnishes the mathematical basis for optimal search theory in economics. Nearly 40 years later, Laura Doval introduced a version of the problem in which the searcher is not obligated to pay the cost of inspecting an alternative's value before selecting it. Unlike the original Pandora's problem, the version with nonobligatory inspection cannot be solved optimally by any simple ranking-based policy, and it is unknown whether there exists any polynomial-time algorithm to compute the optimal policy. This motivates the study of approximately optimal policies that are simple and computationally efficient. In this work we provide the first non-trivial approximation guarantees for this problem. We introduce a family of "committing policies" such that it is computationally easy to find and implement the optimal committing policy. We prove that the optimal committing policy is guaranteed to approximate the fully optimal policy within a $1-\frac{1}{e} = 0.63\ldots$ factor, and for the special case of two boxes we improve this factor to 4/5 and show that this approximation is tight for the class of committing policies.
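For context, the original obligatory-inspection problem that the committing policies above are compared against is solved by Weitzman's index rule: each box $i$ with inspection cost $c_i$ and prize $X_i$ gets a reservation value $\sigma_i$ solving $E[(X_i-\sigma_i)^+]=c_i$, boxes are inspected in decreasing order of $\sigma_i$, and search stops once the best prize seen exceeds every remaining reservation value. Below is a minimal Python sketch of that classical rule for discrete prize distributions; it is not the committing-policy algorithm from the work above, and the function names are illustrative only.

# Weitzman's classical index policy for obligatory inspection (a reference point,
# not the committing policies studied above). Prizes are assumed nonnegative and
# each box's distribution is given as parallel lists of values and probabilities.

def reservation_value(values, probs, cost, iters=60):
    """Solve E[(X - sigma)^+] = cost for sigma by bisection (assumes cost <= E[X])."""
    lo, hi = 0.0, max(values)
    surplus = lambda s: sum(p * max(v - s, 0.0) for v, p in zip(values, probs))
    for _ in range(iters):
        mid = (lo + hi) / 2
        if surplus(mid) > cost:
            lo = mid          # surplus is decreasing in sigma, so sigma is larger
        else:
            hi = mid
    return (lo + hi) / 2

def weitzman_policy(boxes, realized):
    """boxes: list of (values, probs, cost); realized[i]: the prize inside box i.
    Inspect in decreasing reservation-value order, stop once the best prize seen
    beats every remaining reservation value, and return the net payoff."""
    sigma = [reservation_value(v, p, c) for v, p, c in boxes]
    order = sorted(range(len(boxes)), key=lambda i: -sigma[i])
    best, paid = 0.0, 0.0
    for i in order:
        if best >= sigma[i]:              # stopping rule
            break
        paid += boxes[i][2]               # pay to open box i
        best = max(best, realized[i])     # observe its prize
    return best - paid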
How to optimize posted price mechanisms? The sequential posted-price (SPP) mechanism is one of the most widely used selling mechanisms in practice. In this mechanism, the seller presents each buyer with a price sequentially, and the buyer can either accept or reject the mechanism's offer. Despite the widespread use of the SPP mechanism, the problem of optimizing prices in this mechanism has not been fully addressed. In a paper entitled “Improved Revenue Bounds for Posted-Price and Second-Price Mechanisms,” H. Beyhaghi, N. Golrezaei, R. Paes Leme, M. Pal, and B. Sivan construct SPP mechanisms by considering the best of two simple pricing rules: one that imitates the optimal mechanism and the other that posts a uniform price (the same price for every buyer). Their simple pricing rules can be easily generalized to the setting with multiple units and yield the first improvement over long-established approximation factors.
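To make the selling process described above concrete, here is a toy sketch of a single-item SPP run under the standard assumption that a buyer accepts any price at or below their private value; it illustrates the mechanism itself, not the pricing rules from the paper.

def run_spp(prices, values):
    """prices[i] is offered to buyer i in order; values[i] is buyer i's private value.
    Returns (revenue, winner_index), or (0.0, None) if every buyer rejects."""
    for i, (p, v) in enumerate(zip(prices, values)):
        if v >= p:          # buyer accepts any price not exceeding their value
            return p, i
    return 0.0, None

# Example: a uniform price of 5 offered to three buyers with values 3, 7, 9.
print(run_spp([5, 5, 5], [3, 7, 9]))   # -> (5, 1): the second buyer accepts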
We study the fundamental problem of selling a single indivisible item to one of $n$ buyers with independent and potentially nonidentical value distributions. We focus on two simple and widely used selling mechanisms: the second price auction with \emph{eager} personalized reserve prices and the sequential posted price mechanism. Using a new approach, we improve the best-known performance guarantees for these mechanisms. We show that for every value of the number of buyers $n$, the eager second price (ESP) auction and sequential posted price mechanisms respectively earn at least $0.6620$ and $0.6543$ fractions of the optimal revenue. We also provide improved performance guarantees for these mechanisms when the number of buyers is small, which is the more relevant regime for many applications of interest. This in particular implies an improved bound of $0.6543$ for free-order prophet inequalities. Motivated by our improved revenue bounds, we further study the problem of optimizing reserve prices in ESP auctions when the sorted order of personalized reserve prices among bidders is exogenous. We show that this problem can be solved in polynomial time. In addition, by analyzing a real auction dataset from Google's advertising exchange, we demonstrate the effectiveness of order-based pricing.
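For concreteness, the eager rule referenced above is usually described as follows: bidders who do not meet their personalized reserve are removed first, the highest remaining bid wins, and the winner pays the larger of the runner-up's surviving bid and their own reserve. The sketch below illustrates that standard rule only, not the pricing algorithms from the paper.

def esp_auction(bids, reserves):
    """Eager second-price auction with personalized reserves.
    Returns (winner_index, payment), or (None, 0.0) if nobody clears their reserve."""
    alive = [i for i, b in enumerate(bids) if b >= reserves[i]]   # eager filtering
    if not alive:
        return None, 0.0
    alive.sort(key=lambda i: bids[i], reverse=True)
    winner = alive[0]
    runner_up = bids[alive[1]] if len(alive) > 1 else 0.0
    return winner, max(runner_up, reserves[winner])

# Example: bids (10, 8, 6) with reserves (9, 3, 7). Bidder 2 is filtered out;
# bidder 0 wins and pays max(8, 9) = 9.
print(esp_auction([10, 8, 6], [9, 3, 7]))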
We study the outcome of deferred acceptance when prospective medical residents can only apply to a limited set of hospitals. This limitation requires residents to make a strategic choice about the quality of hospitals they apply to. Through a mix of theoretical and experimental results, we study the effect of this strategic choice on the preferences submitted by participants, as well as on the overall welfare. We find that residents' choices in our model mimic the behavior observed in real systems, where individuals apply to a mix of positions consisting mostly of places where they are reasonably likely to be accepted, as well as a few "reach" applications to hospitals of very high quality and a few "safe" applications to hospitals of quality below their expected level. Surprisingly, the number of such "safe" applications is not monotone in the number of allowed applications. We also find that selfish behavior can hurt social welfare, but the deterioration of overall welfare is minimal.
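As a reference for the matching procedure above, here is a compact sketch of standard resident-proposing deferred acceptance with short (limited) preference lists and single-seat hospitals; it shows the mechanism being studied, not the paper's strategic or experimental setup, and the example data is made up.

def deferred_acceptance(res_prefs, hosp_rank):
    """res_prefs[r]: resident r's (possibly short) list of hospitals, best first.
    hosp_rank[h][r]: hospital h's rank of resident r (lower is better).
    Single-seat hospitals; returns a dict hospital -> matched resident."""
    next_choice = {r: 0 for r in res_prefs}           # next hospital r will propose to
    held = {}                                          # hospital -> tentatively held resident
    free = list(res_prefs)
    while free:
        r = free.pop()
        if next_choice[r] >= len(res_prefs[r]):        # exhausted the short list
            continue                                   # r remains unmatched
        h = res_prefs[r][next_choice[r]]
        next_choice[r] += 1
        if h not in held:
            held[h] = r
        elif hosp_rank[h][r] < hosp_rank[h][held[h]]:
            free.append(held[h])                       # displaced resident re-enters
            held[h] = r
        else:
            free.append(r)                             # rejected; will try the next choice
    return held

prefs = {"a": ["H1", "H2"], "b": ["H1"], "c": ["H2", "H1"]}
ranks = {"H1": {"a": 1, "b": 0, "c": 2}, "H2": {"a": 0, "b": 1, "c": 2}}
print(deferred_acceptance(prefs, ranks))   # -> {'H2': 'a', 'H1': 'b'}; c is unmatched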
We study revenue maximization through sequential posted-price (SPP) mechanisms in single-dimensional settings with $n$ buyers and independent but not necessarily identical value distributions. We construct the SPP mechanisms by considering the best of two simple pricing rules: one that imitates the revenue-optimal mechanism, namely the Myersonian mechanism, via the taxation principle, and the other that posts a uniform price. Our pricing rules are rather generalizable and yield the first improvement over long-established approximation factors in several settings. We design factor-revealing mathematical programs that crisply capture the approximation factor of our SPP mechanism. In the single-unit setting, our SPP mechanism yields a better approximation factor than the state of the art prior to our work (Azar, Chiplunkar & Kaplan, 2018). In the multi-unit setting, our SPP mechanism yields the first improved approximation factor over the state of the art after over nine years (Yan, 2011 and Chakraborty et al., 2010). Our results on SPP mechanisms immediately imply improved performance guarantees for the equivalent free-order prophet inequality problem. In the position auction setting, our SPP mechanism yields the first approximation factor higher than $1-1/e$. In eager second-price (ESP) auctions, our two simple pricing rules lead to the first improved approximation factor that is strictly greater than what is obtained by the SPP mechanism in the single-unit setting.
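For the single-unit case, the uniform-price rule above has a simple revenue expression; this is a standard identity rather than a result specific to this work. If every buyer faces the same price $p$ and buyer $i$'s value is drawn independently from a continuous distribution with CDF $F_i$, the item sells exactly when some buyer's value reaches $p$, so the expected revenue is

$\mathrm{Rev}(p) = p\left(1 - \prod_{i=1}^{n} F_i(p)\right),$

and the uniform price may then be chosen to maximize this expression.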
The classical Perceptron algorithm provides a simple and elegant procedure for learning a linear classifier. In each step, the algorithm observes the sample's position and label and updates the current predictor accordingly if it makes a mistake. However, in the presence of strategic agents that desire to be classified as positive and that are able to modify their position by a limited amount, the classifier may not be able to observe the true position of agents but rather a position where the agent pretends to be. Unlike the original setting with perfect knowledge of positions, in this situation the Perceptron algorithm fails to achieve its guarantees, and we illustrate examples with the predictor oscillating between two solutions forever, making an unbounded number of mistakes even though a perfect large-margin linear classifier exists. Our main contribution is a modified Perceptron-style algorithm which makes a bounded number of mistakes in the presence of strategic agents with both $\ell_2$ and weighted $\ell_1$ manipulation costs. In our baseline model, knowledge of the manipulation costs (i.e., the extent to which an agent may manipulate) is assumed. In our most general model, we relax this assumption and provide an algorithm which learns and refines both the classifier and its cost estimates to achieve good mistake bounds even when manipulation costs are unknown.
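For reference, the classical update described in the first two sentences above is simply: predict with the current weight vector and, on a mistake, add the example scaled by its label. A minimal sketch of that standard (non-strategic) Perceptron follows; it is not the modified algorithm from the paper.

import numpy as np

def perceptron(examples, epochs=10):
    """examples: list of (x, y) with x a numpy array and y in {-1, +1}.
    Returns a weight vector learned by mistake-driven updates."""
    dim = len(examples[0][0])
    w = np.zeros(dim)
    for _ in range(epochs):
        mistakes = 0
        for x, y in examples:
            if y * np.dot(w, x) <= 0:   # mistake (or on the boundary)
                w += y * x              # classical Perceptron update
                mistakes += 1
        if mistakes == 0:               # linearly separated the data
            break
    return w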
Babaioff et al. [BIK2007] introduced the matroid secretary problem in 2007, a natural extension of the classic single-choice secretary problem to matroids, and conjectured that a constant-competitive online algorithm exists. The conjecture still remains open despite substantial partial progress, including constant-competitive algorithms for numerous special cases of matroids and an $O(\log \log \text{rank})$-competitive algorithm in the general case. Many of these algorithms follow principled frameworks. The limits of these frameworks were previously unstudied, and prior work establishes only that a handful of particular algorithms cannot resolve the matroid secretary conjecture. We initiate the study of impossibility results for frameworks to resolve this conjecture. We establish impossibility results for a natural class of greedy algorithms and for randomized partition algorithms, both of which contain known algorithms that resolve special cases.
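The single-choice baseline referenced above admits a one-line description: observe (and reject) roughly the first $n/e$ arrivals, then accept the first later arrival that beats everything seen so far. A small sketch of that classical wait-then-pick rule is below; it is not a matroid secretary algorithm.

import math

def classical_secretary(values):
    """values: arrival-ordered scores. Observe the first ~n/e arrivals, then accept
    the first later arrival beating the best observed so far; returns the chosen index."""
    n = len(values)
    k = int(n / math.e)                        # length of the observation phase
    threshold = max(values[:k], default=float("-inf"))
    for i in range(k, n):
        if values[i] > threshold:
            return i
    return n - 1                               # forced to take the last arrival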
We consider the problem of helping agents improve by setting short-term goals. Given a set of target skill levels, we assume each agent will try to improve from their initial skill level to the closest target level within reach or do nothing if no target level is within reach. We consider two models: the common improvement capacity model, where agents have the same limit on how much they can improve, and the individualized improvement capacity model, where agents have individualized limits. Our goal is to optimize the target levels for social welfare and fairness objectives, where social welfare is defined as the total amount of improvement, and fairness objectives are considered where the agents belong to different underlying populations. A key technical challenge of this problem is the non-monotonicity of social welfare in the set of target levels, i.e., adding a new target level may decrease the total amount of improvement as it may get easier for some agents to improve. This is especially challenging when considering multiple groups because optimizing target levels in isolation for each group and outputting the union may result in arbitrarily low improvement for a group, failing the fairness objective. Considering these properties, we provide algorithms for optimal and near-optimal improvement for both social welfare and fairness objectives. These algorithmic results work for both the common and individualized improvement capacity models. Furthermore, we show a placement of target levels exists that is approximately optimal for the social welfare of each group. Unlike the algorithmic results, this structural statement only holds in the common improvement capacity model, and we show counterexamples in the individualized improvement capacity model. Finally, we extend our algorithms to learning settings where we have only sample access to the initial skill levels of agents.
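The non-monotonicity mentioned above is easy to see in a toy version of the common-capacity model. The sketch below assumes a particular reading of the behavioral rule, namely that an agent moves to the nearest target at or above their current skill that lies within their capacity, and measures welfare as total distance moved; it is an illustration of the phenomenon, not the paper's algorithm.

def total_improvement(skills, targets, capacity):
    """Each agent at skill s moves to the closest target t with s <= t <= s + capacity
    (doing nothing if no such target exists); welfare is the total distance moved."""
    total = 0.0
    for s in skills:
        reachable = [t for t in targets if s <= t <= s + capacity]
        if reachable:
            total += min(reachable) - s    # the agent stops at the closest reachable target
    return total

skills = [0.0, 0.0]
print(total_improvement(skills, [1.0], capacity=1.0))        # 2.0
print(total_improvement(skills, [0.2, 1.0], capacity=1.0))   # 0.4: adding a target lowers welfare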
The Competition Complexity of an auction setting refers to the number of additional bidders necessary in order for the (deterministic, prior-independent, dominant strategy truthful) Vickrey-Clarke-Groves mechanism to achieve greater revenue than the (randomized, prior-dependent, Bayesian-truthful) optimal mechanism without the additional bidders. We prove that the competition complexity of $n$ bidders with additive valuations over $m$ independent items is at most $n(\ln(1+m/n)+2)$, and also at most $9\sqrt{nm}$. When $n \leq m$, the first bound is optimal up to constant factors, even when the items are i.i.d. and regular. When $n \geq m$, the second bound is optimal for the benchmark introduced in [EFFTW17a] up to constant factors, even when the items are i.i.d. and regular. We further show that, while the Eden et al. benchmark is not necessarily tight in the $n \geq m$ regime, the competition complexity of $n$ bidders with additive valuations over even $2$ i.i.d. regular items is indeed $\omega(1)$. Our main technical contribution is a reduction from analyzing the Eden et al. benchmark to proving stochastic dominance of certain random variables.
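Plugging numbers into the two bounds above shows how they trade off: the logarithmic bound is tighter when there are few bidders relative to items, and the square-root bound is tighter when $n \geq m$. The snippet below simply evaluates the stated expressions.

import math

def competition_complexity_bounds(n, m):
    """Evaluate the two upper bounds stated above for n additive bidders and m items."""
    return n * (math.log(1 + m / n) + 2), 9 * math.sqrt(n * m)

print(competition_complexity_bounds(10, 1000))  # ~ (66.2, 900.0): the first bound is smaller
print(competition_complexity_bounds(1000, 10))  # ~ (2010.0, 900.0): the second bound is smaller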
Weitzman introduced Pandora's box problem as a mathematical model of sequential search with inspection costs, in which a searcher is allowed to select a prize from one of $n$ alternatives. Several decades later, Doval introduced a closely related version of the problem, where the searcher does not need to incur the inspection cost of an alternative and can select it uninspected. Unlike the original problem, the optimal solution to the nonobligatory inspection variant is known to require adaptivity, and by recent work of [FLL22], finding the optimal solution is NP-hard. Our first main result is a structural characterization of the optimal policy: we show there exists an optimal policy that follows only two different pre-determined orders of inspection and transitions from one to the other at most once. Our second main result is a polynomial-time approximation scheme (PTAS). Our proof involves a novel reduction to a framework developed by [FLX18], utilizing our optimal two-phase structure. Furthermore, we show that Pandora's problem with nonobligatory inspection belongs to the class NP, which, combined with the hardness result of [FLL22], settles the computational complexity class of the problem. Finally, we provide a tight 0.8 approximation and a novel proof for committing policies [BK19] (informally, the set of nonadaptive policies) for general classes of distributions, which was previously shown only for discrete and finite distributions [GMS08].
Strategic suppression of grades, as well as early offers and contracts, are well-known phenomena in the matching process where graduating students apply to jobs or further education. In this paper, we consider a game-theoretic model of these phenomena introduced by Ostrovsky and Schwarz, and study the loss in social welfare resulting from strategic behavior of the schools, employers, and students. We model grading of students as a game where schools suppress grades in order to improve their students' placements. We also consider the quality loss due to unraveling of the matching market: the strategic behavior of students and employers in offering early contracts with the goal of improving quality. Our goal is to evaluate whether strategic grading or unraveling of the market (or a combination of the two) can cause significant welfare loss compared to the optimal assignment of students to jobs. To measure the welfare of an assignment, we assume that the welfare resulting from a job--student pair is a separable and monotone function of student ability and job quality. Assuming a uniform student quality distribution, we show that the quality loss from the above strategic manipulation is bounded by at most a factor of 2, and we give improved bounds for some special cases of welfare functions.
We study revenue maximization through sequential posted price mechanisms (SPMs) in single-dimensional settings with n buyers and independent but not necessarily identical value distributions. We construct our SPMs by considering two simple pricing schemes: one that imitates Myerson's mechanism via the taxation principle, and the other that posts a uniform price. We design a factor-revealing LP that crisply captures the approximation factor of this SPM. In the multi-unit setting, our SPM yields the first improvement in approximation over the correlation-gap-based approximation factor from over eight years ago (Yan 2011). In the position auction setting, our SPM yields the first higher-than-1-1/e approximation factor. In the single-unit setting, our SPM yields a better approximation factor than the current best known for small values of the number of buyers n. Our results on SPMs immediately imply improved performance guarantees for the equivalent free-order prophet inequality problem. In the recent surge of results that show an improvement over 1-1/e for SPMs, ours is the only technique that generalizes beyond the single-unit setting, and it yields the first improvement over long-established approximation factors in several settings.
In this work, we consider classification of agents who can both game and improve. For example, people wishing to get a loan may be able to take some actions that increase their perceived credit-worthiness and others that also increase their true credit-worthiness. A decision-maker would like to define a classification rule with few false positives (not giving out many bad loans) while yielding many true positives (giving out many good loans), which includes encouraging agents to improve to become true positives if possible. We consider two models for this problem, a general discrete model and a linear model, and prove algorithmic, learning, and hardness results for each. For the general discrete model, we give an efficient algorithm for the problem of maximizing the number of true positives subject to no false positives, and show how to extend this to a partial-information learning setting. We also show hardness for the problem of maximizing the number of true positives subject to a nonzero bound on the number of false positives, and that this hardness holds even for a finite-point version of our linear model. We also show that maximizing the number of true positives subject to no false positives is NP-hard in our full linear model. We additionally provide an algorithm that determines whether there exists a linear classifier that classifies all agents accurately and causes all improvable agents to become qualified, and we give additional results for low-dimensional data.
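As a toy illustration of the distinction drawn above (a hypothetical one-dimensional setup, not the paper's discrete or linear model): an agent below a published threshold may either improve, raising their true quality along with their observed feature, or game, raising only the observed feature; only the former turns them into a true positive.

def classify_population(agents, threshold):
    """agents: list of (observed, quality, improve_cap, game_cap). An agent below the
    threshold improves if it can (raising observed feature and true quality), otherwise
    games if it can (raising the observed feature only). Returns (true_pos, false_pos)."""
    tp = fp = 0
    for observed, quality, improve_cap, game_cap in agents:
        gap = threshold - observed
        if 0 < gap <= improve_cap:
            observed = quality = threshold          # genuine improvement
        elif 0 < gap <= game_cap:
            observed = threshold                    # pure gaming
        if observed >= threshold:
            tp += quality >= threshold
            fp += quality < threshold
    return tp, fp

# Threshold 1.0: the first agent can improve (true positive); the second can only game
# (false positive); the third can do neither and stays negative.
print(classify_population([(0.7, 0.7, 0.4, 0.5), (0.9, 0.3, 0.0, 0.5), (0.2, 0.2, 0.1, 0.1)], 1.0))
# -> (1, 1)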
We advance a recently flourishing line of work at the intersection of learning theory and computational economics by studying the learnability of two classes of mechanisms prominent in economics, namely menus of lotteries and two-part tariffs. The former is a family of randomized mechanisms designed for selling multiple items, known to achieve revenue beyond that of deterministic mechanisms, while the latter is designed for selling multiple units (copies) of a single item, with applications in real-world scenarios such as car- or bike-sharing services. We focus on learning high-revenue mechanisms of this form from buyer valuation data in both distributional settings, where we have access to buyers' valuation samples up front, and the more challenging and less-studied online settings, where buyers arrive one at a time and no distributional assumption is made about their values. Our main contribution is proposing the first online learning algorithms for menus of lotteries and two-part tariffs with strong regret-bound guarantees. In the general case, we provide a reduction to a finite number of experts, and in the limited buyer type case, we show a reduction to online linear optimization, which allows us to obtain no-regret guarantees by presenting buyers with menus that correspond to a barycentric spanner. In addition, we provide algorithms with improved running times over prior work for the distributional settings. The key difficulty when deriving learning algorithms for these settings is that the relevant revenue functions have sharp transition boundaries. In stark contrast with the recent literature on learning such unstructured functions, we show that simple discretization-based techniques are sufficient for learning in these settings.
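A two-part tariff, as referenced above, is simply an up-front fee plus a per-unit price; a utility-maximizing buyer purchases the quantity that maximizes value minus payment, buying nothing if no quantity gives positive utility. The sketch below shows this pricing scheme and the revenue it generates from one buyer (a hypothetical valuation interface, not the paper's learning algorithm).

def two_part_tariff_outcome(value_for, max_units, up_front, per_unit):
    """value_for(q): buyer's value for q units (value_for(0) == 0).
    The buyer picks q in 1..max_units maximizing value_for(q) - (up_front + per_unit*q),
    buying nothing if no q gives positive utility. Returns (units_bought, revenue)."""
    best_q, best_utility = 0, 0.0
    for q in range(1, max_units + 1):
        utility = value_for(q) - (up_front + per_unit * q)
        if utility > best_utility:
            best_q, best_utility = q, utility
    revenue = up_front + per_unit * best_q if best_q > 0 else 0.0
    return best_q, revenue

# A buyer with diminishing marginal values 6, 4, 2 facing a fee of 3 and per-unit price 3
# buys two units: utility 10 - (3 + 6) = 1 beats one unit (6 - 6 = 0) and three (12 - 12 = 0).
print(two_part_tariff_outcome(lambda q: [0, 6, 10, 12][q], 3, up_front=3.0, per_unit=3.0))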
Motivated by school admissions, this paper studies screening in a population with both advantaged and disadvantaged agents. A school is interested in admitting the most skilled students, but relies on imperfect test scores that reflect both skill and effort. Students are limited by a budget on effort, with disadvantaged students having tighter budgets. This raises a challenge for the principal: among agents with similar test scores, it is difficult to distinguish between students with high skills and students with large budgets. Our main result is an optimal stochastic mechanism that maximizes the gains achieved from admitting "high-skill" students minus the costs incurred from admitting "low-skill" students when considering two skill types and $n$ budget types. Our mechanism makes it possible to give a higher probability of admission to a high-skill student than to a low-skill one, even when the low-skill student can potentially get a higher test score due to a larger budget. Further, we extend our admission problem to a setting in which students uniformly receive an exogenous subsidy to increase their budget for effort. This extension can only help the school's admission objective, and we show that the optimal mechanism with exogenous subsidies has the same characterization as optimal mechanisms for the original problem.
In 1979, Weitzman introduced Pandora's box problem as a framework for sequential search with costly inspections. Recently, there has been a surge of interest in Pandora's box problem, particularly among researchers working at the intersection of economics and computation. This survey provides an overview of the recent literature on Pandora's box problem, including its latest extensions and applications in areas such as market design, decision theory, and machine learning.
Weitzman (1979) introduced Pandora's box problem as a mathematical model of sequential search with inspection costs, in which a searcher is allowed to select a prize from one of n … Weitzman (1979) introduced Pandora's box problem as a mathematical model of sequential search with inspection costs, in which a searcher is allowed to select a prize from one of n alternatives. Several decades later, Doval (2018) introduced a close version of the problem, where the searcher does not need to incur the inspection cost of an alternative, and can select it uninspected. Unlike the original problem, the optimal solution to the nonobligatory inspection variant is proved to need adaptivity by Doval (2018), and by recent work of Fu Li and Liu (2022), finding the optimal solution is NP-hard.
We advance a recently flourishing line of work at the intersection of learning theory and computational economics by studying the learnability of two classes of mechanisms prominent in economics, namely … We advance a recently flourishing line of work at the intersection of learning theory and computational economics by studying the learnability of two classes of mechanisms prominent in economics, namely menus of lotteries and two-part tariffs. The former is a family of randomized mechanisms designed for selling multiple items, known to achieve revenue beyond deterministic mechanisms, while the latter is designed for selling multiple units (copies) of a single item with applications in real-world scenarios such as car or bike-sharing services. We focus on learning high-revenue mechanisms of this form from buyer valuation data in both distributional settings, where we have access to buyers' valuation samples up-front, and the more challenging and less-studied online settings, where buyers arrive one-at-a-time and no distributional assumption is made about their values. Our main contribution is proposing the first online learning algorithms for menus of lotteries and two-part tariffs with strong regret-bound guarantees. In the general case, we provide a reduction to a finite number of experts, and in the limited buyer type case, we show a reduction to online linear optimization, which allows us to obtain no-regret guarantees by presenting buyers with menus that correspond to a barycentric spanner. In addition, we provide algorithms with improved running times over prior work for the distributional settings. The key difficulty when deriving learning algorithms for these settings is that the relevant revenue functions have sharp transition boundaries. In stark contrast with the recent literature on learning such unstructured functions, we show that simple discretization-based techniques are sufficient for learning in these settings.
Motivated by school admissions, this paper studies screening in a population with both advantaged and disadvantaged agents. A school is interested in admitting the most skilled students, but relies on … Motivated by school admissions, this paper studies screening in a population with both advantaged and disadvantaged agents. A school is interested in admitting the most skilled students, but relies on imperfect test scores that reflect both skill and effort. Students are limited by a budget on effort, with disadvantaged students having tighter budgets. This raises a challenge for the principal: among agents with similar test scores, it is difficult to distinguish between students with high skills and students with large budgets. Our main result is an optimal stochastic mechanism that maximizes the gains achieved from admitting ``high-skill" students minus the costs incurred from admitting ``low-skill" students when considering two skill types and $n$ budget types. Our mechanism makes it possible to give higher probability of admission to a high-skill student than to a low-skill, even when the low-skill student can potentially get higher test-score due to a higher budget. Further, we extend our admission problem to a setting in which students uniformly receive an exogenous subsidy to increase their budget for effort. This extension can only help the school's admission objective and we show that the optimal mechanism with exogenous subsidies has the same characterization as optimal mechanisms for the original problem.
In 1979, Weitzman introduced Pandora's box problem as a framework for sequential search with costly inspections. Recently, there has been a surge of interest in Pandora's box problem, particularly among … In 1979, Weitzman introduced Pandora's box problem as a framework for sequential search with costly inspections. Recently, there has been a surge of interest in Pandora's box problem, particularly among researchers working at the intersection of economics and computation. This survey provides an overview of the recent literature on Pandora's box problem, including its latest extensions and applications in areas such as market design, decision theory, and machine learning.
We consider the problem of helping agents improve by setting short-term goals. Given a set of target skill levels, we assume each agent will try to improve from their initial … We consider the problem of helping agents improve by setting short-term goals. Given a set of target skill levels, we assume each agent will try to improve from their initial skill level to the closest target level within reach or do nothing if no target level is within reach. We consider two models: the common improvement capacity model, where agents have the same limit on how much they can improve, and the individualized improvement capacity model, where agents have individualized limits. Our goal is to optimize the target levels for social welfare and fairness objectives, where social welfare is defined as the total amount of improvement, and fairness objectives are considered where the agents belong to different underlying populations. A key technical challenge of this problem is the non-monotonicity of social welfare in the set of target levels, i.e., adding a new target level may decrease the total amount of improvement as it may get easier for some agents to improve. This is especially challenging when considering multiple groups because optimizing target levels in isolation for each group and outputting the union may result in arbitrarily low improvement for a group, failing the fairness objective. Considering these properties, we provide algorithms for optimal and near-optimal improvement for both social welfare and fairness objectives. These algorithmic results work for both the common and individualized improvement capacity models. Furthermore, we show a placement of target levels exists that is approximately optimal for the social welfare of each group. Unlike the algorithmic results, this structural statement only holds in the common improvement capacity model, and we show counterexamples in the individualized improvement capacity model. Finally, we extend our algorithms to learning settings where we have only sample access to the initial skill levels of agents.
In this work, we consider classification of agents who can both game and improve. For example, people wishing to get a loan may be able to take some actions that … In this work, we consider classification of agents who can both game and improve. For example, people wishing to get a loan may be able to take some actions that increase their perceived credit-worthiness and others that also increase their true credit-worthiness. A decision-maker would like to define a classification rule with few false-positives (does not give out many bad loans) while yielding many true positives (giving out many good loans), which includes encouraging agents to improve to become true positives if possible. We consider two models for this problem, a general discrete model and a linear model, and prove algorithmic, learning, and hardness results for each. For the general discrete model, we give an efficient algorithm for the problem of maximizing the number of true positives subject to no false positives, and show how to extend this to a partial-information learning setting. We also show hardness for the problem of maximizing the number of true positives subject to a nonzero bound on the number of false positives, and that this hardness holds even for a finite-point version of our linear model. We also show that maximizing the number of true positives subject to no false positive is NP-hard in our full linear model. We additionally provide an algorithm that determines whether there exists a linear classifier that classifies all agents accurately and causes all improvable agents to become qualified, and give additional results for low-dimensional data.
Weitzman introduced Pandora's box problem as a mathematical model of sequential search with inspection costs, in which a searcher is allowed to select a prize from one of $n$ alternatives. … Weitzman introduced Pandora's box problem as a mathematical model of sequential search with inspection costs, in which a searcher is allowed to select a prize from one of $n$ alternatives. Several decades later, Doval introduced a close version of the problem, where the searcher does not need to incur the inspection cost of an alternative, and can select it uninspected. Unlike the original problem, the optimal solution to the nonobligatory inspection variant is proved to need adaptivity, and by recent work of [FLL22], finding the optimal solution is NP-hard. Our first main result is a structural characterization of the optimal policy: We show there exists an optimal policy that follows only two different pre-determined orders of inspection, and transitions from one to the other at most once. Our second main result is a polynomial time approximation scheme (PTAS). Our proof involves a novel reduction to a framework developed by [FLX18], utilizing our optimal two-phase structure. Furthermore, we show Pandora's problem with nonobligatory inspection belongs to class NP, which by using the hardness result of [FLL22], settles the computational complexity class of the problem. Finally, we provide a tight 0.8 approximation and a novel proof for committing policies [BK19] (informally, the set of nonadaptive policies) for general classes of distributions, which was previously shown only for discrete and finite distributions [GMS08].
Babaioff et al. [BIK2007] introduced the matroid secretary problem in 2007, a natural extension of the classic single-choice secretary problem to matroids, and conjectured that a constant-competitive online algorithm exists. … Babaioff et al. [BIK2007] introduced the matroid secretary problem in 2007, a natural extension of the classic single-choice secretary problem to matroids, and conjectured that a constant-competitive online algorithm exists. The conjecture still remains open despite substantial partial progress, including constant-competitive algorithms for numerous special cases of matroids, and an $O(\log \log \text{rank})$-competitive algorithm in the general case. Many of these algorithms follow principled frameworks. The limits of these frameworks are previously unstudied, and prior work establishes only that a handful of particular algorithms cannot resolve the matroid secretary conjecture. We initiate the study of impossibility results for frameworks to resolve this conjecture. We establish impossibility results for a natural class of greedy algorithms and for randomized partition algorithms, both of which contain known algorithms that resolve special cases.
How to optimize posted price mechanisms? The sequential posted-price (SPP) mechanism is one of the widely used selling mechanisms in practice. In this mechanism, the seller presents each buyer with … How to optimize posted price mechanisms? The sequential posted-price (SPP) mechanism is one of the widely used selling mechanisms in practice. In this mechanism, the seller presents each buyer with a price sequentially and the buyer can either accept or reject the mechanism's offer. Despite the widespread use of the SPP mechanism, the problem of optimizing prices in this mechanism has not been fully addressed. In a paper entitled, “Improved Revenue Bounds for Posted-Price and Second-Price Mechanisms,” H. Beyhaghi, N. Golrezaei, R. Paes Leme, M. Pal, and B. Sivan construct SPP mechanisms by considering the best of two simple pricing rules: one that imitates the optimal mechanism and the other that posts a uniform price (same price for every buyer). Their simple pricing rules can be easily generalized to the setting with multiple units and yield the first improvement over long-established approximation factors.
Babaioff et al. [BIK2007] introduced the matroid secretary problem in 2007, a natural extension of the classic single-choice secretary problem to matroids, and conjectured that a constant-competitive online algorithm exists. … Babaioff et al. [BIK2007] introduced the matroid secretary problem in 2007, a natural extension of the classic single-choice secretary problem to matroids, and conjectured that a constant-competitive online algorithm exists. The conjecture still remains open despite substantial partial progress, including constant-competitive algorithms for numerous special cases of matroids, and an $O(\log \log \text{rank})$-competitive algorithm in the general case. Many of these algorithms follow principled frameworks. The limits of these frameworks are previously unstudied, and prior work establishes only that a handful of particular algorithms cannot resolve the matroid secretary conjecture. We initiate the study of impossibility results for frameworks to resolve this conjecture. We establish impossibility results for a natural class of greedy algorithms and for randomized partition algorithms, both of which contain known algorithms that resolve special cases.
The classical Perceptron algorithm provides a simple and elegant procedure for learning a linear classifier. In each step, the algorithm observes the sample's position and label and updates the current … The classical Perceptron algorithm provides a simple and elegant procedure for learning a linear classifier. In each step, the algorithm observes the sample's position and label and updates the current predictor accordingly if it makes a mistake. However, in presence of strategic agents that desire to be classified as positive and that are able to modify their position by a limited amount, the classifier may not be able to observe the true position of agents but rather a position where the agent pretends to be. Unlike the original setting with perfect knowledge of positions, in this situation the Perceptron algorithm fails to achieve its guarantees, and we illustrate examples with the predictor oscillating between two solutions forever, making an unbounded number of mistakes even though a perfect large-margin linear classifier exists. Our main contribution is providing a modified Perceptron-style algorithm which makes a bounded number of mistakes in presence of strategic agents with both $\ell_2$ and weighted $\ell_1$ manipulation costs. In our baseline model, knowledge of the manipulation costs (i.e., the extent to which an agent may manipulate) is assumed. In our most general model, we relax this assumption and provide an algorithm which learns and refines both the classifier and its cost estimates to achieve good mistake bounds even when manipulation costs are unknown.
We study revenue maximization through sequential posted-price (SPP) mechanisms in single-dimensional settings with n buyers and independent but not necessarily identical value distributions. We construct the SPP mechanisms by considering … We study revenue maximization through sequential posted-price (SPP) mechanisms in single-dimensional settings with n buyers and independent but not necessarily identical value distributions. We construct the SPP mechanisms by considering the best of two simple pricing rules: one that imitates the optimal/Myersonian mechanism via the taxation principle and the other that posts a uniform price. Our pricing rules are rather generalizable and yield the first improvement over long-established approximation factors in several settings. We design factor-revealing mathematical programs that crisply capture the approximation factor of our SPP mechanism. In the single-unit setting, our SPP mechanism yields a better approximation factor than the state of the art prior to our work (Azar et al. 2018). In the multi-unit setting, our SPP mechanism yields the first improved approximation factor over the state of the art after over nine years (Yan (2011) and Chakraborty et al. (2010)). Our results on SPP mechanisms immediately imply improved performance guarantees for the equivalent free-order prophet inequality problem. In the position auction setting, our SPP mechanism yields the first higher-than 1 − 1/e approximation factor. In eager second-price (ESP) auctions, our two simple pricing rules lead to the first improved approximation factor that is strictly greater than what is obtained by the SPP mechanism in the single-unit setting
The classical Perceptron algorithm provides a simple and elegant procedure for learning a linear classifier. In each step, the algorithm observes the sample's position and label and updates the current … The classical Perceptron algorithm provides a simple and elegant procedure for learning a linear classifier. In each step, the algorithm observes the sample's position and label and updates the current predictor accordingly if it makes a mistake. However, in presence of strategic agents that desire to be classified as positive and that are able to modify their position by a limited amount, the classifier may not be able to observe the true position of agents but rather a position where the agent pretends to be. Unlike the original setting with perfect knowledge of positions, in this situation the Perceptron algorithm fails to achieve its guarantees, and we illustrate examples with the predictor oscillating between two solutions forever, making an unbounded number of mistakes even though a perfect large-margin linear classifier exists. Our main contribution is providing a modified Perceptron-style algorithm which makes a bounded number of mistakes in presence of strategic agents with both $\ell_2$ and weighted $\ell_1$ manipulation costs. In our baseline model, knowledge of the manipulation costs (i.e., the extent to which an agent may manipulate) is assumed. In our most general model, we relax this assumption and provide an algorithm which learns and refines both the classifier and its cost estimates to achieve good mistake bounds even when manipulation costs are unknown.
The Competition Complexity of an auction setting refers to the number of additional bidders necessary in order for the (deterministic, prior-independent, dominant strategy truthful) Vickrey-Clarke-Groves mechanism to achieve greater revenue … The Competition Complexity of an auction setting refers to the number of additional bidders necessary in order for the (deterministic, prior-independent, dominant strategy truthful) Vickrey-Clarke-Groves mechanism to achieve greater revenue than the (randomized, prior-dependent, Bayesian-truthful) optimal mechanism without the additional bidders.
Martin Weitzman's "Pandora's problem" furnishes the mathematical basis for optimal search theory in economics. Nearly 40 years later, Laura Doval introduced a version of the problem in which the searcher … Martin Weitzman's "Pandora's problem" furnishes the mathematical basis for optimal search theory in economics. Nearly 40 years later, Laura Doval introduced a version of the problem in which the searcher is not obligated to pay the cost of inspecting an alternative's value before selecting it. Unlike the original Pandora's problem, the version with nonobligatory inspection cannot be solved optimally by any simple ranking-based policy, and it is unknown whether there exists any polynomial-time algorithm to compute the optimal policy. This motivates the study of approximately optimal policies that are simple and computationally efficient. In this work we provide the first non-trivial approximation guarantees for this problem. We introduce a family of "committing policies" such that it is computationally easy to find and implement the optimal committing policy. We prove that the optimal committing policy is guaranteed to approximate the fully optimal policy within a 1-1/e = 0.63... factor, and for the special case of two boxes we improve this factor to 4/5 and show that this approximation is tight for the class of committing policies.
The Competition Complexity of an auction setting refers to the number of additional bidders necessary in order for the (deterministic, prior-independent, dominant strategy truthful) Vickrey-Clarke-Groves mechanism to achieve greater revenue than the (randomized, prior-dependent, Bayesian-truthful) optimal mechanism without the additional bidders. We prove that the competition complexity of $n$ bidders with additive valuations over $m$ independent items is at most $n(\ln(1+m/n)+2)$, and also at most $9\sqrt{nm}$. When $n \leq m$, the first bound is optimal up to constant factors, even when the items are i.i.d. and regular. When $n \geq m$, the second bound is optimal for the benchmark introduced in [EFFTW17a] up to constant factors, even when the items are i.i.d. and regular. We further show that, while the Eden et al. benchmark is not necessarily tight in the $n \geq m$ regime, the competition complexity of $n$ bidders with additive valuations over even $2$ i.i.d. regular items is indeed $\omega(1)$. Our main technical contribution is a reduction from analyzing the Eden et al. benchmark to proving stochastic dominance of certain random variables.
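A quick numeric sanity check of the two upper bounds quoted above, evaluated for a few illustrative $(n, m)$ pairs:

```python
import math

# Evaluating the two competition-complexity upper bounds from the abstract,
# n*(ln(1 + m/n) + 2) and 9*sqrt(n*m), for illustrative values of n and m.
for n, m in [(2, 10), (10, 10), (50, 5)]:
    log_bound = n * (math.log(1 + m / n) + 2)
    sqrt_bound = 9 * math.sqrt(n * m)
    print(f"n={n:3d} m={m:3d}  n(ln(1+m/n)+2) = {log_bound:7.2f}   9*sqrt(nm) = {sqrt_bound:7.2f}")
```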
We study the fundamental problem of selling a single indivisible item to one of $n$ buyers with independent and potentially nonidentical value distributions. We focus on two simple and widely used selling mechanisms: the second price auction with \emph{eager} personalized reserve prices and the sequential posted price mechanism. Using a new approach, we improve the best-known performance guarantees for these mechanisms. We show that for every value of the number of buyers $n$, the eager second price (ESP) auction and sequential posted price mechanisms respectively earn at least $0.6620$ and $0.6543$ fractions of the optimal revenue. We also provide improved performance guarantees for these mechanisms when the number of buyers is small, which is the more relevant regime for many applications of interest. This in particular implies an improved bound of $0.6543$ for free-order prophet inequalities. Motivated by our improved revenue bounds, we further study the problem of optimizing reserve prices in the ESP auctions when the sorted order of personalized reserve prices among bidders is exogenous. We show that this problem can be solved polynomially. In addition, by analyzing a real auction dataset from Google's advertising exchange, we demonstrate the effectiveness of order-based pricing.
We study revenue maximization through sequential posted price mechanisms (SPMs) in single-dimensional settings with n buyers and independent but not necessarily identical value distributions. We construct our SPMs by considering two simple pricing schemes: one that imitates Myerson's mechanism via the taxation principle, and the other that posts a uniform price. We design a factor-revealing LP that crisply captures the approximation factor of this SPM. In the multi-unit setting, our SPM yields the first improvement in approximation over the correlation-gap-based approximation factor from over eight years ago (Yan 2011). In the position auction setting, our SPM yields the first approximation factor higher than 1-1/e. In the 1-unit setting, our SPM yields a better approximation factor than the previously best known one for small values of the number of buyers n. Our results on SPMs immediately imply improved performance guarantees for the equivalent free-order prophet inequality problem. In the recent surge of results that show an improvement over 1-1/e for SPMs, ours is the only technique that generalizes beyond the 1-unit setting, and yields the first improvement over long-established approximation factors in several settings.
We study revenue maximization through sequential posted-price (SPP) mechanisms in single-dimensional settings with $n$ buyers and independent but not necessarily identical value distributions. We construct the SPP mechanisms by considering the best of two simple pricing rules: one that imitates the revenue-optimal mechanism, namely the Myersonian mechanism, via the taxation principle and the other that posts a uniform price. Our pricing rules are readily generalizable and yield the first improvement over long-established approximation factors in several settings. We design factor-revealing mathematical programs that crisply capture the approximation factor of our SPP mechanism. In the single-unit setting, our SPP mechanism yields a better approximation factor than the state of the art prior to our work (Azar, Chiplunkar & Kaplan, 2018). In the multi-unit setting, our SPP mechanism yields the first improved approximation factor over the state of the art after over nine years (Yan, 2011; Chakraborty et al., 2010). Our results on SPP mechanisms immediately imply improved performance guarantees for the equivalent free-order prophet inequality problem. In the position auction setting, our SPP mechanism yields the first approximation factor higher than $1-1/e$. In eager second-price (ESP) auctions, our two simple pricing rules lead to the first improved approximation factor that is strictly greater than what is obtained by the SPP mechanism in the single-unit setting.
We study the outcome of deferred acceptance when prospective medical residents can only apply to a limited set of hospitals. This limitation requires residents to make a strategic choice about the quality of hospitals they apply to. Through a mix of theoretical and experimental results, we study the effect of this strategic choice on the preferences submitted by participants, as well as on the overall welfare. We find that residents' choices in our model mimic the behavior observed in real systems where individuals apply to a mix of positions consisting mostly of places where they are reasonably likely to get accepted, as well as a few "reach" applications to hospitals of very high quality, and a few "safe" applications to hospitals of lower quality than their expected level. Surprisingly, the number of such "safe" applications is not monotone in the number of allowed applications. We also find that selfish behavior can hurt social welfare, but the deterioration of overall welfare is minimal.
Strategic suppression of grades, as well as early offers and contracts, are well-known phenomena in the matching process where graduating students apply to jobs or further education. In this paper, we consider a game theoretic model of these phenomena introduced by Ostrovsky and Schwarz, and study the loss in social welfare resulting from strategic behavior of the schools, employers, and students. We model grading of students as a game where schools suppress grades in order to improve their students' placements. We also consider the quality loss due to unraveling of the matching market: the strategic behavior of students and employers who offer early contracts with the goal of improving match quality. Our goal is to evaluate if strategic grading or unraveling of the market (or a combination of the two) can cause significant welfare loss compared to the optimal assignment of students to jobs. To measure welfare of the assignment, we assume that welfare resulting from a job--student pair is a separable and monotone function of student ability and the quality of the job. Assuming uniform student quality distribution, we show that the quality loss from the above strategic manipulation is bounded by at most a factor of 2, and give improved bounds for some special cases of welfare functions.
Algorithmic pricing is the computational problem that sellers (e.g., in supermarkets) face when trying to set prices for their items to maximize their profit in the presence of a known demand. Guruswami et al. (SODA, 2005) proposed this problem and gave logarithmic approximations (in the number of consumers) for the unit-demand and single-parameter cases where there is a specific set of consumers and their valuations for bundles are known precisely. Subsequently, several versions of the problem have been shown to have poly-logarithmic inapproximability. This problem has direct ties to the important open question of better understanding the Bayesian optimal mechanism in multi-parameter agent settings; however, for this purpose approximation factors logarithmic in the number of agents are inadequate. It is therefore of vital interest to consider special cases where constant approximations are possible. We consider the unit-demand variant of this pricing problem. Here a consumer has a valuation for each different item and their value for a set of items is simply the maximum value they have for any item in the set. Instead of considering a set of consumers with precisely known preferences, like the prior algorithmic pricing literature, we assume that the preferences of the consumers are drawn from a distribution. This is the standard assumption in economics; furthermore, the setting of a specific set of customers with specific preferences, which is employed in all of the prior work in algorithmic pricing, is a special case of this general Bayesian pricing problem, where there is a discrete Bayesian distribution for preferences specified by picking one consumer uniformly from the given set of consumers. Notice that the distribution over the valuations for the individual items that this generates is obviously correlated. Our work complements these existing works by considering the case where the consumer's valuations for the different items are independent random variables. Our main result is a constant approximation algorithm for this problem that makes use of an interesting connection between this problem and the concept of virtual valuations from the single-parameter Bayesian optimal mechanism design literature.
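As a small worked example of the virtual-valuation connection mentioned above (not code from the paper): for a value V drawn uniformly from [0, 1], the virtual value is phi(v) = v - (1 - F(v))/f(v) = 2v - 1, and the revenue-maximizing posted price is the value at which phi vanishes.

```python
# Worked check for V ~ Uniform[0, 1]: F(v) = v and f(v) = 1, so
# phi(v) = v - (1 - v) = 2v - 1, which vanishes at v = 1/2; a grid search over
# posted prices confirms that 1/2 maximizes p * Pr[V >= p].

def phi(v):
    return v - (1 - v)          # virtual value for Uniform[0, 1]

prices = [i / 100 for i in range(101)]
best_price = max(prices, key=lambda p: p * (1 - p))
print("phi(0.5) =", phi(0.5), "  best posted price =", best_price)
```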
We consider a revenue-maximizing seller with m heterogeneous items and a single buyer whose valuation v for the items may exhibit both substitutes (i.e., for some S, T, v(S ∪ T) < v(S) + v(T)) and complements (i.e., for some S, T, v(S ∪ T) > v(S) + v(T)). We show that the mechanism first proposed by Babaioff et al. [2014] -- the better of selling the items separately and bundling them together -- guarantees a Θ(d)-approximation to the optimal revenue, where $d$ is a measure of the degree of complementarity. Note that this is the first approximately optimal mechanism for a buyer whose valuation exhibits any kind of complementarity. It extends the work of Rubinstein and Weinberg [2015], which proved that the same simple mechanisms achieve a constant factor approximation when buyer valuations are subadditive, the most general class of complement-free valuations.
We consider a revenue optimizing seller selling a single item to a buyer, on whose private value the seller has a noisy signal. We show that, when the signal is kept private, arbitrarily more revenue could potentially be extracted than if the signal is leaked or revealed. We then show that, if the seller is not allowed to make payments to the buyer and if the value distribution conditioning on each signal is regular, the gap between the two is bounded by a multiplicative factor of 3. We give examples showing that both conditions are necessary for a constant bound on the gap to hold. We connect this scenario to multi-bidder single-item auctions where bidders' values are correlated. Similarly to the setting above, we show that the revenue of a Bayesian incentive compatible, ex post individually rational auction can be arbitrarily larger than that of a dominant strategy incentive compatible auction, whereas the two are no more than a factor of 5 apart if the auctioneer never pays the bidders and if the distribution is jointly regular. The upper bounds in both settings degrade gracefully when the distribution is a mixture of a small number of regular distributions.
The Competition Complexity of an auction measures how much competition is needed for the revenue of a simple auction to surpass the optimal revenue. A classic result from auction theory by Bulow and Klemperer [9] states that the Competition Complexity of VCG, in the case of n i.i.d. buyers and a single item, is 1, i.e., it is better to recruit one extra buyer and run a second price auction than to learn exactly the buyers' underlying distribution and run the revenue-maximizing auction tailored to this distribution. In this paper we study the Competition Complexity of dynamic auctions. Consider the following setting: a monopolist is auctioning off m items in m consecutive stages to n interested buyers. A buyer realizes her value for item k in the beginning of stage k. We prove that the Competition Complexity of dynamic auctions is at most 3n, and at least linear in n, even when the buyers' values are correlated across stages, under a monotone hazard rate assumption on the stage (marginal) distributions. We also prove results on the number of additional buyers necessary for VCG at every stage to be an $\alpha$-approximation of the optimal revenue; we term this number the $\alpha$-approximate Competition Complexity. As a corollary we provide the first results on prior-independent dynamic auctions. These are, to the best of our knowledge, the first non-trivial positive guarantees for simple ex-post IR dynamic auctions for correlated stages. A key step towards proving bounds on the Competition Complexity is getting a good benchmark/upper bound to the optimal revenue. To this end, we extend the recent duality framework of Cai et al. [12] to dynamic settings. As an aside to our approach we obtain a revenue non-monotonicity lemma for dynamic auctions, which may be of independent interest.
We consider a monopolist that is selling n items to a single additive buyer, where the buyer's values for the items are drawn according to independent distributions $F_1, F_2, \ldots, F_n$ that possibly have unbounded support. It is well known that - unlike in the single item case - the revenue-optimal auction (a pricing scheme) may be complex, sometimes requiring a continuum of menu entries. It is also known that simple auctions with a finite bounded number of menu entries can extract a constant fraction of the optimal revenue. Nonetheless, the question of the possibility of extracting an arbitrarily high fraction of the optimal revenue via a finite menu size remained open.
We study the revenue maximization problem of a seller with n heterogeneous items for sale to a single buyer whose valuation function for sets of items is unknown and drawn from some distribution D. We show that if D is a distribution over subadditive valuations with independent items, then the better of pricing each item separately or pricing only the grand bundle achieves a constant-factor approximation to the revenue of the optimal mechanism. This includes buyers who are k-demand, additive up to a matroid constraint, or additive up to constraints of any downwards-closed set system (and whose values for the individual items are sampled independently), as well as buyers who are fractionally subadditive with item multipliers drawn independently. Our proof makes use of the core-tail decomposition framework developed in prior work showing similar results for the significantly simpler class of additive buyers [Li and Yao 2013; Babaioff et al. 2014].
Let $X_i \geq 0$ be independent, $i = 1, \cdots, n$, and $X^\ast_n = \max(X_1, \cdots, X_n)$. Let $t(c)$ (respectively $s(c)$) be the threshold stopping rule for $X_1, \cdots, X_n$, defined by $t(c) = \text{smallest } i$ for which $X_i \geq c$ (respectively $s(c) = \text{smallest } i$ for which $X_i > c$), and $= n$ otherwise. Let $m$ be a median of the distribution of $X^\ast_n$. It is shown that for every $n$ and $\underline{X}$ either $EX^\ast_n \leq 2EX_{t(m)}$ or $EX^\ast_n \leq 2EX_{s(m)}$. This improves previously known results, [1], [4]. Some results for i.i.d. $X_i$ are also included.
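A short Monte Carlo illustration of the theorem, under arbitrary illustrative distributions: stopping at the first sample that reaches a median of the maximum earns at least half of $EX^\ast_n$.

```python
import random, statistics

# Monte Carlo illustration under assumed distributions: the threshold rule that
# stops at the first X_i >= m, where m is a median of max(X_1,...,X_n),
# earns at least half of E[max], as the theorem guarantees.

random.seed(0)
n, trials = 5, 100_000

def sample():
    return [random.uniform(0, i + 1) for i in range(n)]   # independent, non-identical

m = statistics.median(max(sample()) for _ in range(trials))  # empirical median of the max

stopped_total, max_total = 0.0, 0.0
for _ in range(trials):
    xs = sample()
    max_total += max(xs)
    stopped_total += next((x for x in xs if x >= m), xs[-1])  # rule t(m); take X_n otherwise

print("E[max]    ~", round(max_total / trials, 3))
print("E[X_t(m)] ~", round(stopped_total / trials, 3))
```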
We provide simple and approximately revenue-optimal mechanisms in the multi-item multi-bidder settings. We unify and improve all previous results, as well as generalize the results to broader cases. In particular, we prove that the better of the following two simple, deterministic and Dominant Strategy Incentive Compatible mechanisms, a sequential posted price mechanism or an anonymous sequential posted price mechanism with entry fee, achieves a constant fraction of the optimal revenue among all randomized, Bayesian Incentive Compatible mechanisms, when buyers' valuations are XOS over independent items. If the buyers' valuations are subadditive over independent items, the approximation factor degrades to O(log m), where m is the number of items. We obtain our results by first extending the Cai-Devanur-Weinberg duality framework to derive an effective benchmark of the optimal revenue for subadditive bidders, and then analyzing this upper bound with new techniques.
We study the problem of maximizing a stochastic monotone submodular function with respect to a matroid constraint. Because of the presence of diminishing marginal values in real-world problems, our model can capture the effect of stochasticity in a wide range of applications. We show that the adaptivity gap—the ratio between the values of optimal adaptive and optimal nonadaptive policies—is bounded and is equal to e/(e − 1). We propose a polynomial-time nonadaptive policy that achieves this bound. We also present an adaptive myopic policy that obtains at least half of the optimal value. Furthermore, when the matroid is uniform, the myopic policy achieves the optimal approximation ratio of 1 − 1/e.
Investigating potential purchases is often a substantial investment under uncertainty. Standard market designs, such as simultaneous or English auctions, compound this with uncertainty about the price a bidder will have to pay in order to win. As a result they tend to confuse the process of search both by leading to wasteful information acquisition on goods that have already found a good purchaser and by discouraging needed investigations of objects, potentially eliminating all gains from trade. In contrast, we show that the Dutch auction preserves all of its properties from a standard setting without information costs because it guarantees, at the time of information acquisition, a price at which the good can be purchased. Calibrations to start-up acquisition and timber auctions suggest that in practice the social losses through poor search coordination in standard formats are an order of magnitude or two larger than the (negligible) inefficiencies arising from ex-ante bidder asymmetries.
We characterize optimal mechanisms for the multiple-good monopoly problem and provide a framework to find them. We show that a mechanism is optimal if and only if a measure µ derived from the buyer's type distribution satisfies certain stochastic dominance conditions. This measure expresses the marginal change in the seller's revenue under marginal changes in the rent paid to subsets of buyer types. As a corollary, we characterize the optimality of grand-bundling mechanisms, strengthening several results in the literature, where only sufficient optimality conditions have been derived. As an application, we show that the optimal mechanism for n independent uniform items each supported on [c, c + 1] is a grand-bundling mechanism, as long as c is sufficiently large, extending Pavlov's result for 2 items [Pav11]. At the same time, our characterization also implies that, for all c and for all sufficiently large n, the optimal mechanism for n independent uniform items supported on [c, c + 1] is not a grand bundling mechanism.
We consider a monopolist seller with n heterogeneous items, facing a single buyer. The buyer has a value for each item drawn independently according to (non-identical) distributions, and his value for a set of items is additive. The seller aims to maximize his revenue. It is known that an optimal mechanism in this setting may be quite complex, requiring randomization [19] and menus of infinite size [15]. Hart and Nisan [17] have initiated a study of two very simple pricing schemes for this setting: item pricing, in which each item is priced at its monopoly reserve; and bundle pricing, in which the entire set of items is priced and sold as one bundle. Hart and Nisan [17] have shown that neither scheme can guarantee more than a vanishingly small fraction of the optimal revenue. In sharp contrast, we show that for any distributions, the better of item and bundle pricing is a constant-factor approximation to the optimal revenue. We further discuss extensions to multiple buyers and to valuations that are correlated across items.
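As a concrete, purely illustrative instance of the two schemes compared above, the sketch below estimates item-pricing and grand-bundle revenue for an additive buyer with two independent Uniform[0, 1] item values; the bundle price 0.8 is an assumed, hand-picked choice, not an output of the paper's analysis.

```python
import random

# Illustrative comparison for an additive buyer with two independent Uniform[0, 1]
# item values: pricing each item at its monopoly reserve (1/2) versus selling the
# grand bundle at an assumed price of 0.8.

random.seed(1)
trials = 200_000

item_rev, bundle_rev = 0.0, 0.0
for _ in range(trials):
    v1, v2 = random.random(), random.random()
    item_rev += 0.5 * ((v1 >= 0.5) + (v2 >= 0.5))   # sell each item separately at 1/2
    if v1 + v2 >= 0.8:                              # sell both items together at 0.8
        bundle_rev += 0.8

print("selling separately :", round(item_rev / trials, 3))     # about 0.50
print("grand bundle at 0.8:", round(bundle_rev / trials, 3))   # about 0.54
```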
We provide a unified view of many recent developments in Bayesian mechanism design, including the black-box reductions of Cai et al., simple auctions for additive buyers, and posted-price mechanisms for unit-demand buyers. Additionally, we show that viewing these three previously disjoint lines of work through the same lens leads to new developments as well. First, we provide a duality framework for Bayesian mechanism design, which naturally accommodates multiple agents and arbitrary objectives/feasibility constraints. Using this, we prove that either a posted-price mechanism or the VCG auction with per-bidder entry fees achieves a constant factor of the optimal Bayesian IC revenue whenever buyers are unit-demand or additive, unifying previous breakthroughs of Chawla et al. and Yao, and improving both approximation ratios (from 33.75 to 24 and 69 to 8). Finally, we show that this view also leads to improved structural characterizations in the Cai et al. framework.
Suppose we are given a submodular function f over a set of elements, and we want to maximize its value subject to certain constraints. Good approximation algorithms are known for such problems under both monotone and non-monotone submodular functions. We consider these problems in a stochastic setting, where elements are not all active and we only get value from active elements. Each element e is active independently with some known probability $p_e$, but we don't know the element's status a priori: we find it out only when we probe the element e. Moreover, the sequence of elements we probe must satisfy a given prefix-closed constraint, e.g., matroid, orienteering, deadline, precedence, or any downward-closed constraint. In this paper we study the gap between adaptive and non-adaptive strategies for f being a submodular or a fractionally subadditive (XOS) function. If this gap is small, we can focus on finding good non-adaptive strategies instead, which are easier to find as well as to represent. We show that the adaptivity gap is a constant for monotone and non-monotone submodular functions, and logarithmic for XOS functions of small width. These bounds are nearly tight. Our techniques show new ways of arguing about the optimal adaptive decision tree for stochastic optimization problems.
A sequence of recent studies shows that even in the simple setting of a single seller and a single buyer with additive, independent valuations over $m$ items, the revenue-maximizing mechanism is prohibitively complex. This problem has been addressed using two main approaches: (i) Approximation: the best of two simple mechanisms (sell each item separately, or sell all the items as one bundle) gives $1/6$ of the optimal revenue [BILW14]. (ii) Enhanced competition: running the simple VCG mechanism with additional $m$ buyers extracts at least the optimal revenue in the original market [EFFTW17]. Both approaches, however, suffer from severe drawbacks: On the one hand, losing $83\%$ of the revenue is hardly acceptable in any application. On the other hand, attracting a linear number of new buyers may be prohibitive. Our main result is that by combining the two approaches one can achieve the best of both worlds. Specifically, for any constant $\epsilon$ one can obtain a $(1-\epsilon)$ fraction of the optimal revenue by running simple mechanisms --- either selling each item separately or selling all items as a single bundle --- with substantially fewer additional buyers: logarithmic, constant, or even none in some cases.
In the classic prophet inequality, a problem in optimal stopping theory, samples from independent random variables (possibly differently distributed) arrive online. A gambler that knows the distributions, but cannot see the future, must decide at each point in time whether to stop and pick the current sample or to continue and lose that sample forever. The goal of the gambler is to maximize the expected value of what she picks and the performance measure is the worst case ratio between the expected value the gambler gets and what a prophet, that sees all the realizations in advance, gets. In the late seventies, Krengel and Sucheston, and Garling [16] established that this worst case ratio is a constant and that 1/2 is the best possible such constant. In the last decade the theory of prophet inequalities has resurged as an important problem due to its connections to posted price mechanisms, frequently used in online sales. A particularly interesting variant is the so-called Prophet Secretary problem, in which the only difference is that the samples arrive in a uniformly random order. For this variant several algorithms are known to achieve a constant of 1 − 1/e and very recently this barrier was slightly improved by Azar et al. [3]. In this paper we derive a way of analyzing multi-threshold strategies that basically sets a nonincreasing sequence of thresholds to be applied at different times. The gambler will thus stop the first time a sample surpasses the corresponding threshold. Specifically we consider a class of very robust strategies that we call blind quantile strategies. These constitute a clever generalization of single threshold strategies and consist in fixing a function which is used to define a sequence of thresholds once the instance is revealed. Our main result shows that these strategies can achieve a constant of 0.669 in the Prophet Secretary problem, improving upon the best known result of Azar et al. [3], and even that of Beyhaghi et al. [4] that works in the case the gambler can select the order of the samples. The crux of the analysis is a very precise analysis of the underlying stopping time distribution for the gambler's strategy that is inspired by the theory of Schur convex functions. We further prove that our family of blind strategies cannot lead to a constant better than 0.675. Finally, we prove that no nonadaptive algorithm for the gambler can achieve a constant better than 0.732, which also improves upon a recent result of Azar et al. [3]. Here, a nonadaptive algorithm is an algorithm whose decision to stop can depend on the index of the random variable being sampled, on the value sampled, and on the time, but not on the history that has been observed.
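The following simulation is only meant to convey the flavor of multi-threshold stopping under a random arrival order; the distributions and the decreasing threshold sequence are assumed choices, not the blind quantile strategies analyzed in the paper.

```python
import random

# Illustrative simulation of threshold stopping with uniformly random arrival
# order, comparing a single fixed threshold with an assumed decreasing sequence.

random.seed(2)
caps = [2.0, 1.5, 1.0, 0.5, 0.25]        # X_i ~ Uniform[0, caps[i]]
trials = 100_000

def ratio(thresholds):
    gambler, prophet = 0.0, 0.0
    for _ in range(trials):
        xs = [random.uniform(0, c) for c in caps]
        prophet += max(xs)
        order = random.sample(range(len(xs)), len(xs))   # random arrival order
        for t, i in enumerate(order):
            if xs[i] >= thresholds[t]:
                gambler += xs[i]
                break
    return gambler / prophet

print("single threshold     :", round(ratio([1.0] * 5), 3))
print("decreasing thresholds:", round(ratio([1.4, 1.1, 0.8, 0.5, 0.0]), 3))
```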
Consequential decision-making typically incentivizes individuals to behave strategically, tailoring their behavior to the specifics of the decision rule. A long line of work has therefore sought to counteract strategic behavior by designing more conservative decision boundaries in an effort to increase robustness to the effects of strategic covariate shift.
When consequential decisions are informed by algorithmic input, individuals may feel compelled to alter their behavior in order to gain a system's approval. Models of agent responsiveness, termed "strategic manipulation," analyze the interaction between a learner and agents in a world where all agents are equally able to manipulate their features in an attempt to "trick" a published classifier. In cases of real world classification, however, an agent's ability to adapt to an algorithm is not simply a function of her personal interest in receiving a positive classification, but is bound up in a complex web of social factors that affect her ability to pursue certain action responses. In this paper, we adapt models of strategic manipulation to capture dynamics that may arise in a setting of social inequality wherein candidate groups face different costs to manipulation. We find that whenever one group's costs are higher than the other's, the learner's equilibrium strategy exhibits an inequality-reinforcing phenomenon wherein the learner erroneously admits some members of the advantaged group, while erroneously excluding some members of the disadvantaged group. We also consider the effects of interventions in which a learner subsidizes members of the disadvantaged group, lowering their costs in order to improve her own classification performance. Here we encounter a paradoxical result: there exist cases in which providing a subsidy improves only the learner's utility while actually making both candidate groups worse off, even the group receiving the subsidy. Our results reveal the potentially adverse social ramifications of deploying tools that attempt to evaluate an individual's "quality" when agents' capacities to adaptively respond differ.
Machine learning relies on the assumption that unseen test instances of a classification problem follow the same distribution as observed training data. However, this principle can break down when machine learning is used to make important decisions about the welfare (employment, education, health) of strategic individuals. Knowing information about the classifier, such individuals may manipulate their attributes in order to obtain a better classification outcome. As a result of this behavior -- often referred to as gaming -- the performance of the classifier may deteriorate sharply. Indeed, gaming is a well-known obstacle for using machine learning methods in practice; in financial policy-making, the problem is widely known as Goodhart's law. In this paper, we formalize the problem, and pursue algorithms for learning classifiers that are robust to gaming.
We study the problem of computing personalized reserve prices in eager second price auctions without having any assumption on valuation distributions. Here, the input is a dataset that contains the submitted bids of n buyers in a set of auctions and the goal is to return personalized reserve prices r that maximize the revenue earned on these auctions by running eager second price auctions with reserve r. We present a novel LP formulation for this problem and a rounding procedure which achieves a $(1+2(\sqrt{2}-1)e^{\sqrt{2}-2})^{-1} \approx 0.684$ approximation. This improves over the 1/2-approximation algorithm due to Roughgarden and Wang. We show that our analysis is tight for this rounding procedure. We also bound the integrality gap of the LP, which bounds the performance of any algorithm based on this LP.
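A one-line numeric check of the approximation factor quoted above:

```python
import math

# Numeric check of the constant (1 + 2*(sqrt(2)-1)*e**(sqrt(2)-2))**(-1).
factor = 1.0 / (1.0 + 2.0 * (math.sqrt(2) - 1.0) * math.exp(math.sqrt(2) - 2.0))
print(round(factor, 4))   # 0.6844
```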
We consider the problem of maximizing revenue for a monopolist offering multiple items to multiple heterogeneous buyers. We develop a simple mechanism that obtains a constant factor approximation under the assumption that the buyers' values are additive subject to a feasibility constraint and independent across items. Importantly, different buyers in our setting can have different constraints on the sets of items they desire. Our mechanism is a sequential variant of two-part tariffs. Prior to our work, simple approximation mechanisms for such multi-buyer problems were known only for the special cases of all unit-demand or all additive value buyers. Our work expands upon and unifies long lines of work on unit-demand settings and additive settings. We employ the ex ante relaxation approach developed by Alaei (2011) for reducing a multiple-buyer mechanism design problem with an ex post supply constraint into single-buyer ones with ex ante supply constraints. Solving the single-agent problems requires us to significantly extend techniques developed in the context of additive values by Li and Yao (2013) and their extension to subadditive values by Rubinstein and Weinberg (2015).
We study an online linear classification problem in which the data is generated by strategic agents who manipulate their features in an effort to change the classification outcome. In rounds, the learner deploys a classifier, then an adversarially chosen agent arrives and possibly manipulates her features to optimally respond to the learner's choice of classifier. The learner has no knowledge of the agents' utility functions or "real" features, which may vary widely across agents. Instead, the learner is only able to observe their "revealed preferences", i.e., the manipulated feature vectors they provide. For a broad family of agent cost functions, we give a computationally efficient learning algorithm that is able to obtain diminishing "Stackelberg regret" --- a form of policy regret that guarantees that the learner is realizing loss nearly as small as that of the best classifier in hindsight, even allowing for the fact that agents would have best-responded differently to the optimal classifier.
Consider a network design application where we wish to lay down a minimum-cost spanning tree in a given graph; however, we only have stochastic information about the edge costs. To learn the precise cost of any edge, we have to conduct a study that incurs a price. Our goal is to find a spanning tree while minimizing the disutility, which is the sum of the tree cost and the total price that we spend on the studies. In a different application, each edge gives a stochastic reward value. Our goal is to find a spanning tree while maximizing the utility, which is the tree reward minus the prices that we pay. Situations such as the above two often arise in practice where we wish to find a good solution to an optimization problem, but we start with only some partial knowledge about the parameters of the problem. The missing information can be found only after paying a probing price, which we call the price of information. What strategy should we adopt to optimize our expected utility/disutility? A classical example of the above setting is Weitzman's "Pandora's box" problem where we are given probability distributions on values of n independent random variables. The goal is to choose a single variable with a large value, but we can find the actual outcomes only after paying a price. Our work is a generalization of this model to other combinatorial optimization problems such as matching, set cover, facility location, and prize-collecting Steiner tree. We give a technique that reduces such problems to their non-price counterparts, and use it to design exact/approximation algorithms to optimize our utility/disutility. Our techniques extend to situations where there are additional constraints on what parameters can be probed or when we can simultaneously probe a subset of the parameters.
The Competition Complexity of an auction measures how much competition is needed for the revenue of a simple auction to surpass the optimal revenue. A classic result from auction theory by Bulow and Klemperer [11] states that the Competition Complexity of VCG, in the case of n i.i.d. buyers and a single item, is 1. In other words, it is better to invest in recruiting one extra buyer and run a second price auction than to invest in learning exactly the buyers' underlying distribution and run the revenue-maximizing auction tailored to this distribution. In this paper we study the Competition Complexity of dynamic auctions. Consider the following problem: a monopolist is auctioning off m items in m consecutive stages to n interested buyers. A buyer realizes her value for item k in the beginning of stage k. How many additional buyers are necessary and sufficient for a second price auction at each stage to extract revenue at least that of the optimal dynamic auction? We prove that the Competition Complexity of dynamic auctions is at most 3n - and at least linear in n - even when the buyers' values are correlated across stages, under a monotone hazard rate assumption on the stage (marginal) distributions. This assumption can be relaxed if one settles for independent stages. We also prove results on the number of additional buyers necessary for VCG at every stage to be an α-approximation of the optimal revenue; we term this number the α-approximate Competition Complexity. For example, under the same mild assumptions on the stage distributions we prove that one extra buyer suffices for a constant-factor approximation. As a corollary we provide the first results on prior-independent dynamic auctions. These are, to the best of our knowledge, the first nontrivial positive guarantees for simple ex-post IR dynamic auctions for correlated stages. A key step towards proving bounds on the Competition Complexity is getting a good benchmark/upper bound to the optimal revenue. To this end, we extend the recent duality framework of [14] to dynamic settings. As an aside to our approach we obtain a revenue non-monotonicity lemma for dynamic auctions, which may be of independent interest.
In the ordinal Matroid Secretary Problem (MSP), elements from a weighted matroid are presented in random order to an algorithm that must incrementally select a large weight independent set. However, the algorithm can only compare pairs of revealed elements without using their numerical values. An algorithm is α probability-competitive if every element from the optimum appears with probability 1/α in the output. We present a technique to design algorithms with strong probability-competitive ratios, improving the guarantees for almost every matroid class considered in the literature: e.g., we get ratios of 4 for graphic matroids (improving on 2e by Korula and Pál [ICALP 2009]) and of 5.19 for laminar matroids (improving on 9.6 by Ma et al. [THEOR COMPUT SYST 2016]). We also obtain new results for superclasses of k-column-sparse matroids, for hypergraphic matroids, certain gammoids and graph packing matroids, and a probability-competitive algorithm for uniform matroids of rank ρ based on Kleinberg's utility-competitive algorithm [SODA 2005] for that class. Our second contribution is a set of algorithms for the ordinal MSP on arbitrary matroids of rank ρ. We devise an O(log ρ) probability-competitive algorithm and an O(log log ρ) ordinal-competitive algorithm, a weaker notion of competitiveness that is still stronger than the utility variant. These are based on the O(log log ρ) utility-competitive algorithm by Feldman et al. [SODA 2015].
A sequence of recent studies shows that even in the simple setting of a single seller and a single buyer with additive, independent valuations over m items, the revenue-maximizing mechanism is prohibitively complex. This problem has been addressed using two main approaches: Approximation: the best of two simple mechanisms (sell each item separately, or sell all the items as one bundle) gives 1/6 of the optimal revenue [1]. Enhanced competition: running the simple VCG mechanism with additional m buyers extracts at least the optimal revenue in the original market [17]. Both approaches, however, suffer from severe drawbacks: On the one hand, losing 83% of the revenue is hardly acceptable in any application. On the other hand, attracting a linear number of new buyers may be prohibitive. We show that by combining the two approaches one can achieve the best of both worlds. Specifically, for any constant ε one can obtain a (1-ε) fraction of the optimal revenue by running simple mechanisms --- either selling each item separately or selling all items as a single bundle --- with substantially fewer additional buyers: logarithmic, constant, or even none in some cases.
In the classical contextual bandits problem, in each round $t$, a learner observes some context $c$, chooses some action $a$ to perform, and receives some reward $r_{a,t}(c)$. We consider the variant of this problem where in addition to receiving the reward $r_{a,t}(c)$, the learner also learns the values of $r_{a,t}(c')$ for all other contexts $c'$; i.e., the rewards that would have been achieved by performing that action under different contexts. This variant arises in several strategic settings, such as learning how to bid in non-truthful repeated auctions (in this setting the context is the decision maker's private valuation for each auction). We call this problem the contextual bandits problem with cross-learning. The best algorithms for the classical contextual bandits problem achieve $\tilde{O}(\sqrt{CKT})$ regret against all stationary policies, where $C$ is the number of contexts, $K$ the number of actions, and $T$ the number of rounds. We demonstrate algorithms for the contextual bandits problem with cross-learning that remove the dependence on $C$ and achieve regret $\tilde{O}(\sqrt{KT})$ (when contexts are stochastic with known distribution), $\tilde{O}(K^{1/3}T^{2/3})$ (when contexts are stochastic with unknown distribution), and $\tilde{O}(\sqrt{KT})$ (when contexts are adversarial but rewards are stochastic). We simulate our algorithms on real auction data from an ad exchange running first-price auctions (showing that they outperform traditional contextual bandit algorithms).
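To illustrate the cross-learning feedback structure (this is an epsilon-greedy sketch with synthetic rewards, not one of the paper's algorithms): playing an action reveals its reward in every context, so a single pull updates that action's estimate for all contexts at once.

```python
import random

# Epsilon-greedy sketch with synthetic rewards: the cross-learning step is the
# inner loop, where one pull of action a updates the estimate of a in every
# context, not just the context in which it was played.

random.seed(3)
C, K, T, eps = 4, 3, 20_000, 0.05
means = [[random.random() for _ in range(K)] for _ in range(C)]   # true mean rewards
est = [[0.0] * K for _ in range(C)]                               # per-(context, action) estimates
pulls = [0] * K                                                   # samples per action (shared across contexts)

reward = 0.0
for _ in range(T):
    c = random.randrange(C)                                       # context for this round
    if random.random() < eps or 0 in pulls:
        a = random.randrange(K)                                   # explore
    else:
        a = max(range(K), key=lambda x: est[c][x])                # exploit in the current context
    reward += means[c][a]
    pulls[a] += 1
    for cc in range(C):                                           # cross-learning update
        obs = means[cc][a] + random.gauss(0, 0.1)                 # reward a would earn in context cc
        est[cc][a] += (obs - est[cc][a]) / pulls[a]

print("average reward per round :", round(reward / T, 3))
print("best achievable per round:", round(sum(max(row) for row in means) / C, 3))
```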
We consider an online regression setting in which individuals adapt to the regression model: arriving individuals may access the model throughout the process, and invest strategically in modifying their own features so as to improve their assigned score. We find that this strategic manipulation may help a learner recover the causal variables, in settings where an agent can invest in improving impactful features that also improve his true label. We show that even simple behavior on the learner's part (i.e., periodically updating her model based on the observed data so far, via least-square regression) allows her to simultaneously i) accurately recover which features have an impact on an agent's true label, provided they have been invested in significantly, and ii) incentivize agents to invest in these impactful features, rather than in features that have no effect on their true label.
Motivated by applications such as college admission and insurance rate determination, we study a classification problem where the inputs are controlled by strategic individuals who can modify their features at a cost. A learner can only partially observe the features, and aims to classify individuals with respect to a quality score. The goal is to design a classification mechanism that maximizes the overall quality score in the population, taking any strategic updating into account. When scores are linear and mechanisms can assign their own scores to agents, we show that the optimal classifier is an appropriate projection of the quality score. For the more restrictive task of binary classification via linear thresholds, we construct a (1/4)-approximation to the optimal classifier when the underlying feature distribution is sufficiently smooth and admits an oracle for finding dense regions. We extend our results to settings where the prior distribution is unknown and must be learned from samples.
The Pandora's Box problem and its extensions capture optimization problems with stochastic input where the algorithm can obtain instantiations of input random variables at some cost. To our knowledge, all previous work on this class of problems assumes that different random variables in the input are distributed independently. As such it does not capture many real-world settings. In this paper, we provide the first approximation algorithms for Pandora's Box-type problems with correlations. We assume that the algorithm has access to samples drawn from the joint distribution on input. Algorithms for these problems must determine an order in which to probe random variables, as well as when to stop and return the best solution found so far. In general, an optimal algorithm may make both decisions adaptively based on instantiations observed previously. Such fully adaptive (FA) strategies cannot be efficiently approximated to within any sub-linear factor with sample access. We therefore focus on the simpler objective of approximating partially adaptive (PA) strategies that probe random variables in a fixed predetermined order but decide when to stop based on the instantiations observed. We consider a number of different feasibility constraints and provide simple PA strategies that are approximately optimal with respect to the best PA strategy for each case. All of our algorithms have polynomial sample complexity. We further show that our results are tight within constant factors: better factors cannot be achieved even using the full power of FA strategies.
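A toy partially adaptive strategy, purely illustrative rather than one of the paper's algorithms: the probing order is fixed up front from samples of the (correlated) joint distribution, and the stopping rule compares the best value seen so far against a sample-based estimate of the net gain from opening one more box. The correlated value model and the uniform probing cost are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, cost = 5, 0.3                               # boxes and a uniform probing cost (assumption)

def draw(size):
    # hypothetical correlated values: a common shock plus idiosyncratic noise
    common = rng.exponential(1.0, size=(size, 1))
    return common + rng.exponential(np.linspace(0.5, 2.0, n), size=(size, n))

train = draw(10_000)                           # sample access to the joint distribution
order = np.argsort(-train.mean(axis=0))        # fixed probing order: highest sample mean first

def run_PA(v):
    best, payoff = 0.0, 0.0
    for i in order:
        # sample-based estimate of the gain from probing box i given the best value in hand
        gain = np.mean(np.maximum(train[:, i] - best, 0.0))
        if gain < cost:                        # stop: estimated improvement no longer covers the cost
            break
        payoff -= cost
        best = max(best, v[i])
    return payoff + best

test = draw(2_000)
print("average payoff of the fixed-order strategy:", round(float(np.mean([run_PA(v) for v in test])), 3))
```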
Our main contribution is a general framework to design efficient polynomial time approximation schemes (EPTAS) for fundamental stochastic combinatorial optimization problems. Given an error parameter ε>0, such algorithmic schemes attain a (1-ε)-approximation in t(ε)· poly(n) time, where t(·) is some function that depends only on ε. Technically speaking, our approach relies on presenting tailor-made reductions to a newly-introduced multi-dimensional load balancing problem. Even though the single-dimensional problem is already known to be APX-Hard, we prove that an EPTAS can be designed under certain structural assumptions, which hold for each of our applications. To demonstrate the versatility of our framework, we first study selection-stopping settings to derive an EPTAS for the Free-Order Prophets problem [Agrawal et al., EC'20] and for its cost-driven generalization, Pandora's Box with Commitment [Fu et al., ICALP'18]. These results constitute the first approximation schemes in the non-adaptive setting and improve on known inefficient polynomial time approximation schemes (PTAS) for their adaptive variants. Next, turning our attention to stochastic probing problems, we obtain an EPTAS for the adaptive ProbeMax problem as well as for its non-adaptive counterpart; in both cases, state-of-the-art approximability results have been inefficient PTASes [Chen et al., NIPS'16; Fu et al., ICALP'18].
Weitzman (1979) introduced the Pandora's Box problem as a model for sequential search with inspection costs, and gave an elegant index-based policy that attains provably optimal expected payoff. In various scenarios, the searching agent may select an option without making a costly inspection. The variant of the Pandora's Box problem with non-obligatory inspection has attracted interest from both economics and algorithms researchers. Various simple algorithms have proved suboptimal, with the best known being the 0.8-approximation algorithm of Guha et al. (2008). No hardness result for the problem was known.
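For reference, a sketch of Weitzman's classical index policy for the obligatory-inspection baseline mentioned above (the non-obligatory variant studied here requires more than this). Each box i has inspection cost c_i and value distribution X_i; its reservation value σ_i solves c_i = E[(X_i − σ_i)^+], boxes are opened in decreasing σ_i order, and search stops once the best value in hand exceeds the next index. The exponential distributions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
costs = np.array([0.2, 0.4, 0.1, 0.3])         # inspection costs (assumption)
scales = np.array([1.0, 2.0, 0.8, 1.5])        # X_i ~ Exponential(mean = scales[i]) (assumption)

def reservation_value(c, scale):
    # for X ~ Exp(mean=scale): E[(X - s)^+] = scale * exp(-s / scale); set equal to c and solve for s
    return scale * np.log(scale / c)

sigma = reservation_value(costs, scales)
order = np.argsort(-sigma)                     # open boxes in descending index order

def run_once():
    best, payoff = 0.0, 0.0
    for i in order:
        if best >= sigma[i]:                   # stop: value in hand beats the next index
            break
        payoff -= costs[i]
        best = max(best, rng.exponential(scales[i]))
    return payoff + best

print("average payoff of the index policy:", round(float(np.mean([run_once() for _ in range(20_000)])), 3))
```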
We investigate the power of randomness in the context of a fundamental Bayesian optimal mechanism design problem - a single seller aims to maximize expected revenue by allocating multiple kinds of resources to "unit-demand" agents with preferences drawn from a known distribution. When the agents' preferences are single-dimensional Myerson's seminal work [14] shows that randomness offers no benefit - the optimal mechanism is always deterministic. In the multi-dimensional case, where each agent's preferences are given by different values for each of the available services, Briest et al. [6] recently showed that the gap between the expected revenue obtained by an optimal randomized mechanism and an optimal deterministic mechanism can be unbounded even when a single agent is offered only 4 services. However, this large gap is attained through unnatural instances where values of the agent for different services are correlated in a specific way. We show that when the agent's values involve no correlation or a specific kind of positive correlation, the benefit of randomness is only a small constant factor (4 and 8 respectively). Our model of positively correlated values (that we call the common base value model) is a natural model for unit-demand agents and items that are substitutes. Our results extend to multiple agent settings as well.
In the Matroid Secretary Problem (MSP), the elements of the ground set of a Matroid are revealed on-line one by one, each together with its value. An algorithm for the MSP is called Matroid-Unknown if, at every stage of its execution, it only knows (i) the elements that have been revealed so far and their values and (ii) an oracle for testing whether or not a subset of the elements that have been revealed so far forms an independent set. An algorithm is called Known-Cardinality if it knows (i), (ii) and also knows from the start the cardinality n of the ground set of the Matroid. We present here a Known-Cardinality algorithm with a competitive-ratio of order log log of the rank of the Matroid. The prior known results for a Known-Cardinality algorithm are a competitive-ratio of order log of the rank of the Matroid, by Babaioff et al. (2007), and a competitive-ratio of order the square root of the log of the rank of the Matroid, by Chakraborty and Lachish (2012).
For revenue and welfare maximization in single-dimensional Bayesian settings, Chawla et al. (STOC10) recently showed that sequential posted-price mechanisms (SPMs), though simple in form, can perform surprisingly well compared to the optimal mechanisms. In this paper, we give a theoretical explanation of this fact, based on a connection to the notion of correlation gap. Loosely speaking, for auction environments with matroid constraints, we can relate the performance of a mechanism to the expectation of a monotone submodular function over a random set. This random set corresponds to the winner set for the optimal mechanism, which is highly correlated, and to a certain demand set for SPMs, which is independent. The notion of correlation gap of Agrawal et al. (SODA10) quantifies how much we "lose" in the expectation of the function by ignoring correlation in the random set, and hence bounds our loss in using a certain SPM instead of the optimal mechanism. Furthermore, the correlation gap of a monotone and submodular function is known to be small, and it follows that such an SPM can approximate the optimal mechanism by a good constant factor. Exploiting this connection, we give a tight analysis of a greedy-based SPM of Chawla et al. for several environments. In particular, we show that it gives an $e/(e-1)$-approximation for matroid environments, asymptotically a $1/(1-1/\sqrt{2\pi k})$-approximation for the important sub-case of $k$-unit auctions, and a $(p+1)$-approximation for environments with $p$-independent set system constraints.
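A small numeric illustration, not taken from the paper, of the correlation gap behind the e/(e-1) factor: for the monotone submodular function f(S) = min(|S|, 1), a "correlated" distribution that places exactly one of n elements in the set has E[f] = 1, while independent draws with the same marginals p_i = 1/n give E[f] = 1 − (1 − 1/n)^n, so the gap tends to e/(e-1) ≈ 1.58.

```python
import math

for n in (2, 5, 20, 100, 1000):
    independent = 1 - (1 - 1 / n) ** n          # E[f(R)] under independent draws with marginals 1/n
    print(f"n = {n:5d}   correlation gap = {1 / independent:.4f}   (e/(e-1) = {math.e / (math.e - 1):.4f})")
```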
Implicitly defined (and easily approximated) universal constants $1.1 < a_n < 1.6, n = 2,3, \cdots$, are found so that if $X_1, X_2, \cdots$ are i.i.d. non-negative random variables and … Implicitly defined (and easily approximated) universal constants $1.1 < a_n < 1.6, n = 2,3, \cdots$, are found so that if $X_1, X_2, \cdots$ are i.i.d. non-negative random variables and if $T_n$ is the set of stop rules for $X_1, \cdots, X_n$, then $E(\max\{X_1, \cdots, X_n\}) \leq a_n \sup\{EX_t: t \in T_n\}$, and the bound $a_n$ is best possible. Similar universal constants $0 < b_n < \frac{1}{4}$ are found so that if the $\{X_i\}$ are i.i.d. random variables taking values only in $\lbrack a, b\rbrack$, then $E(\max\{X_1, \cdots, X_n\}) \leq \sup\{EX_t: t \in T_n\} + b_n(b - a)$, where again the bound $b_n$ is best possible. In both situations, extremal distributions for which equality is attained (or nearly attained) are given in implicit form.
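A worked numeric check of the quantity being bounded, under an illustrative choice of distribution: estimate E[max(X_1, ..., X_n)] and the optimal stopping value sup{E X_t : t in T_n} by backward induction, and confirm that their ratio stays below the universal constant a_n (which lies between 1.1 and 1.6).

```python
import numpy as np

rng = np.random.default_rng(4)
n, samples = 5, 200_000
draws = rng.uniform(size=(samples, n))         # X_i ~ Uniform[0, 1], an illustrative assumption

# optimal stopping value by backward induction: with k draws remaining, accept X if it
# beats the continuation value v_{k-1}, so v_k = E[max(X, v_{k-1})] with v_1 = E[X]
x = draws[:, 0]
v = x.mean()
for _ in range(n - 1):
    v = np.maximum(x, v).mean()

prophet = draws.max(axis=1).mean()
print(f"E[max] ~ {prophet:.4f}   optimal stop ~ {v:.4f}   ratio ~ {prophet / v:.4f}")
```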
Consider a gambler who observes a sequence of independent, non-negative random numbers and is allowed to stop the sequence at any time, claiming a reward equal to the most recent observation. The famous prophet inequality of Krengel, Sucheston, and Garling asserts that a gambler who knows the distribution of each random variable can achieve at least half as much reward, in expectation, as a "prophet" who knows the sampled values of each random variable and can choose the largest one. We generalize this result to the setting in which the gambler and the prophet are allowed to make more than one selection, subject to a matroid constraint. We show that the gambler can still achieve at least half as much reward as the prophet; this result is the best possible, since it is known that the ratio cannot be improved even in the original prophet inequality, which corresponds to the special case of rank-one matroids. Generalizing the result still further, we show that under an intersection of $p$ matroid constraints, the prophet's reward exceeds the gambler's by a factor of at most $O(p)$, and this factor is also tight.
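A quick simulation of the rank-one (single selection) special case using the well-known fixed-threshold rule: accept the first value that exceeds T = E[max]/2, which already guarantees half of the prophet's expected reward. The matroid generalization in the abstract requires a more careful threshold construction; the distributions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n, samples = 6, 100_000
scales = np.linspace(0.5, 2.0, n)
draws = rng.exponential(scales, size=(samples, n))   # independent, non-identical values

threshold = draws.max(axis=1).mean() / 2             # fixed threshold T = E[max] / 2
accept = draws >= threshold
first = accept.argmax(axis=1)                        # index of the first value above T
gambler = np.where(accept.any(axis=1), draws[np.arange(samples), first], 0.0)

print(f"prophet: {draws.max(axis=1).mean():.3f}   gambler with threshold rule: {gambler.mean():.3f}")
```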
(1985). Ms. Machiavelli and the Stable Matching Problem. The American Mathematical Monthly: Vol. 92, No. 4, pp. 261-268.
We study the question of setting and testing reserve prices in single item auctions when the bidders are not identical. At a high level, there are two generalizations of the standard second price auction: in the lazy version we first determine the winner, and then apply reserve prices; in the eager version we first discard the bidders not meeting their reserves, and then determine the winner among the rest. We show that the two versions have dramatically different properties: lazy reserves are easy to optimize, and A/B test in production, whereas eager reserves always lead to higher welfare, but their optimization is NP-complete, and naive A/B testing will lead to incorrect conclusions. Despite their different characteristics, we show that the overall revenue for the two scenarios is always within a factor of 2 of each other, even in the presence of correlated bids. Moreover, we prove that the eager auction dominates the lazy auction on revenue whenever the bidders are independent or symmetric. We complement our theoretical results with simulations on real world data that show that even suboptimally set eager reserve prices are preferred from a revenue standpoint.
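A minimal sketch contrasting the two formats on simulated bids: the lazy auction picks the highest bidder first and then applies that bidder's reserve, while the eager auction first discards bidders below their reserves and runs a second-price auction among the rest. The lognormal bid distributions and the personalized reserve prices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n_bidders, rounds = 3, 50_000
bids = rng.lognormal(mean=0.0, sigma=[0.4, 0.8, 1.2], size=(rounds, n_bidders))
reserves = np.array([0.8, 1.0, 1.3])                 # personalized reserve prices (assumption)

def lazy_revenue(b):
    w = int(np.argmax(b))                            # winner chosen before reserves are applied
    if b[w] < reserves[w]:
        return 0.0                                   # winner fails its own reserve: no sale
    second = np.partition(b, -2)[-2]
    return max(second, reserves[w])                  # pay max(second-highest bid, winner's reserve)

def eager_revenue(b):
    alive = np.flatnonzero(b >= reserves)            # discard bidders below their reserves first
    if alive.size == 0:
        return 0.0
    w = alive[np.argmax(b[alive])]
    second = np.partition(b[alive], -2)[-2] if alive.size > 1 else -np.inf
    return max(second, reserves[w])

print("lazy revenue :", round(float(np.mean([lazy_revenue(b) for b in bids])), 3))
print("eager revenue:", round(float(np.mean([eager_revenue(b) for b in bids])), 3))
```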