We present an analysis of the statistical properties and growth of the free on-line encyclopedia Wikipedia. By describing topics as vertices and the hyperlinks between them as edges, we can represent this encyclopedia as a directed graph. The topological properties of this graph are in close analogy with those of the World Wide Web, despite the very different growth mechanism. In particular, we measure a scale-invariant distribution of the in- and out-degree, and we are able to reproduce these features by means of a simple statistical model. As a major consequence, Wikipedia growth can be described by local rules such as the preferential attachment mechanism, even though the users responsible for its evolution can act globally on the network.
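To make the preferential-attachment mechanism concrete, here is a minimal Python sketch (a toy model assumed purely for illustration, not the authors' fitted model): each new vertex links to existing vertices with probability roughly proportional to their current in-degree, which already produces a heavy-tailed in-degree distribution.

```python
import random
from collections import Counter

def preferential_attachment_digraph(n, m=3, seed=0):
    """Toy directed preferential attachment: vertex v adds up to m
    out-edges, picking targets proportionally to (in-degree + 1)."""
    rng = random.Random(seed)
    edges = []
    pool = [0]  # vertex j appears once per in-edge, plus one baseline entry
    for v in range(1, n):
        targets = {rng.choice(pool) for _ in range(m)}  # collisions are fine
        for t in targets:
            edges.append((v, t))
            pool.append(t)  # t just gained an in-edge: more likely next time
        pool.append(v)      # baseline entry so new vertices can be chosen
    return edges

edges = preferential_attachment_digraph(20000)
indeg = Counter(t for _, t in edges)
tail = Counter(indeg.values())
print(sorted(tail.items())[:8])  # frequencies fall off roughly as a power law
```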
We consider the problem of online scheduling on a single machine in order to minimize weighted flow time. The existing algorithms for this problem (STOC '01, SODA '03, FOCS '18) all require exact knowledge of the processing time of each job. This assumption is crucial, as even a slight perturbation of the processing times would lead to a polynomial competitive ratio. However, this assumption very rarely holds in real-life scenarios.
We develop and extend a line of recent work on the design of mechanisms for two-sided markets. The markets we consider consist of buyers and sellers of a number of items, and the aim of a mechanism is to improve the social welfare by arranging purchases and sales of the items. A mechanism is given prior distributions on the agents' valuations of the items, but not the actual valuations; thus the aim is to maximise the expected social welfare over these distributions. As in previous work, we are interested in the worst-case ratio between the social welfare achieved by a truthful mechanism, and the best social welfare possible.
We study the problem of an online advertising system that wants to optimally spend an advertiser's given budget for a campaign across multiple platforms, without knowing the value for showing an ad to the users on those platforms. We model this challenging practical application as a Stochastic Bandits with Knapsacks problem over $T$ rounds of bidding with the set of arms given by the set of distinct bidding $m$-tuples, where $m$ is the number of platforms. We modify the algorithm proposed in Badanidiyuru et al. [11] to extend it to the case of multiple platforms, obtaining an algorithm for both the discrete and continuous bid-spaces. Namely, for discrete bid spaces we give an algorithm with regret $O\left(OPT\sqrt{\frac{mn}{B}} + \sqrt{mn \cdot OPT}\right)$, where $OPT$ is the performance of the optimal algorithm that knows the distributions. For continuous bid spaces the regret of our algorithm is $\tilde{O}\left(m^{1/3} \cdot \min\left\{B^{2/3}, (mT)^{2/3}\right\}\right)$. When restricted to this special case, this bound improves over Sankararaman and Slivkins [34] in the regime $OPT \ll T$, as is the case in the particular application at hand. Second, we show an $\Omega\left(\sqrt{m \cdot OPT}\right)$ lower bound for the discrete case and an $\Omega\left(m^{1/3} B^{2/3}\right)$ lower bound for the continuous setting, almost matching the upper bounds. Finally, we use a real-world data set from a large internet online advertising company with multiple ad platforms and show that our algorithms outperform common benchmarks and satisfy the required properties warranted in the real-world application.
A celebrated impossibility result by Myerson and Satterthwaite (1983) shows that any truthful mechanism for two-sided markets that maximizes social welfare must run a deficit, resulting in a necessity to relax welfare efficiency and the use of approximation mechanisms. Such mechanisms in general make extensive use of the Bayesian priors. In this work, we investigate a question of increasing theoretical and practical importance: how much prior information is required to design mechanisms with near-optimal approximations?
Constrained submodular maximization problems encompass a wide variety of applications, including personalized recommendation, team formation, and revenue maximization via viral marketing. The massive instances occurring in modern-day applications can render existing algorithms prohibitively slow. Moreover, frequently those instances are also inherently stochastic. Focusing on these challenges, we revisit the classic problem of maximizing a (possibly non-monotone) submodular function subject to a knapsack constraint. We present a simple randomized greedy algorithm that achieves a 5.83-approximation and runs in O(n log n) time, i.e., at least a factor n faster than other state-of-the-art algorithms. The versatility of our approach allows us to further transfer it to a stochastic version of the problem. There, we obtain a (9 + ε)-approximation to the best adaptive policy, which is the first constant approximation for non-monotone objectives. Experimental evaluation of our algorithms showcases their improved performance on real and synthetic data.
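For intuition, a bare-bones density-greedy skeleton for submodular maximization under a knapsack is sketched below; the paper's randomized 5.83-approximation adds randomization and further ingredients on top of this kind of loop, so treat this only as the general shape of the approach. The coverage objective in the demo is a hypothetical example.

```python
def density_greedy_knapsack(elements, cost, f, budget):
    """Repeatedly add the feasible element with the largest
    marginal-gain-to-cost ratio; stop when nothing improves f."""
    S, spent = set(), 0.0
    remaining = set(elements)
    while True:
        base = f(S)
        pool = [e for e in remaining
                if spent + cost[e] <= budget and f(S | {e}) > base]
        if not pool:
            return S
        best = max(pool, key=lambda e: (f(S | {e}) - base) / cost[e])
        S.add(best)
        spent += cost[best]
        remaining.discard(best)

# Toy submodular objective: how many distinct topics a set of items covers.
cover = {1: {"a", "b"}, 2: {"b"}, 3: {"c", "d"}, 4: {"a", "c"}}
f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0
print(density_greedy_knapsack(cover, {1: 2, 2: 1, 3: 2, 4: 3}, f, budget=4))
```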
Although freelancing work has grown substantially in recent years, in part facilitated by a number of online labor marketplaces (e.g., Guru, Freelancer, Amazon Mechanical Turk), traditional forms of "in-sourcing" work continue being the dominant form of employment. This means that, at least for the time being, freelancing and salaried employment will continue to co-exist. In this paper, we provide algorithms for outsourcing and hiring workers in a general setting, where workers form a team and contribute different skills to perform a task. We call this model team formation with outsourcing. In our model, tasks arrive in an online fashion: neither the number nor the composition of the tasks is known a priori. At any point in time, there is a team of hired workers who receive a fixed salary independently of the work they perform. This team is dynamic: new members can be hired and existing members can be fired, at some cost. Additionally, some parts of the arriving tasks can be outsourced and thus completed by non-team members, at a premium. Our contribution is an efficient online cost-minimizing algorithm for hiring and firing team members and outsourcing tasks. We present theoretical bounds obtained using a primal-dual scheme, proving that our algorithms have a logarithmic competitive approximation ratio. We complement these results with experiments using semi-synthetic datasets based on actual task requirements and worker skills from three large online labor marketplaces.
Bilateral trade, a fundamental topic in economics, models the problem of intermediating between two strategic agents, a seller and a buyer, willing to trade a good for which they hold private valuations. Despite the simplicity of this problem, a classical result by Myerson and Satterthwaite (1983) affirms the impossibility of designing a mechanism which is simultaneously efficient, incentive compatible, individually rational, and budget balanced. This impossibility result fostered an intense investigation of meaningful trade-offs between these desired properties. Much work has focused on approximately efficient fixed-price mechanisms, e.g., Blumrosen and Dobzinski (2014; 2016) and Colini-Baldeschi et al. (2016), which have been shown to fully characterize strongly budget balanced and ex-post individually rational direct revelation mechanisms. All these results, however, either assume some knowledge of the priors of the seller/buyer valuations, or black-box access to some samples of the distributions, as in Dütting et al. (2021). In this paper, we cast for the first time the bilateral trade problem in a regret minimization framework over rounds of seller/buyer interactions, with no prior knowledge on the private seller/buyer valuations. Our main contribution is a complete characterization of the regret regimes for fixed-price mechanisms with different models of feedback and private valuations, using as benchmark the best fixed price in hindsight. More precisely, we prove the following bounds on the regret:
$\bullet$ $\widetilde{\Theta}(\sqrt{T})$ for full feedback (i.e., direct revelation mechanisms);
$\bullet$ $\widetilde{\Theta}(T^{2/3})$ for realistic feedback (i.e., posted-price mechanisms) and independent seller/buyer valuations with bounded densities;
$\bullet$ $\Theta(T)$ for realistic feedback and seller/buyer valuations with bounded densities;
$\bullet$ $\Theta(T)$ for realistic feedback and independent seller/buyer valuations;
$\bullet$ $\Theta(T)$ for the adversarial setting.
Distortion-Oblivious Algorithms for Minimizing Flow Time. Yossi Azar, Stefano Leonardi, and Noam Touitou. In Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 252-274. DOI: https://doi.org/10.1137/1.9781611977073.13
We consider the classic online problem of scheduling on a single machine to minimize total flow time. In STOC 2021, the concept of robustness to distortion in processing times was introduced: for every distortion factor μ, an O(μ^2)-competitive algorithm ALG_μ which handles distortions up to μ was presented. However, using that result requires one to know the distortion of the input in advance, which is impractical. We present the first distortion-oblivious algorithms: algorithms which are competitive for every input of every distortion, and thus do not require knowledge of the distortion in advance. Moreover, the competitive ratios of our algorithms are Õ(μ), which is a quadratic improvement over the algorithm from STOC 2021, and is nearly optimal (we show a randomized lower bound of Ω(μ) on competitiveness).
We study the revenue performance of sequential posted price mechanisms and some natural extensions, for a general setting where the valuations of the buyers are drawn from a correlated distribution. Sequential posted price mechanisms are conceptually simple mechanisms that work by proposing a take-it-or-leave-it offer to each buyer. We apply sequential posted price mechanisms to single-parameter multi-unit settings in which each buyer demands only one item and the mechanism can assign the service to at most k of the buyers. For standard sequential posted price mechanisms, we prove that when the valuation distribution has finite support, no sequential posted price mechanism can extract a constant fraction of the optimal expected revenue, even with unlimited supply. We extend this result to the case of a continuous valuation distribution when various standard assumptions hold simultaneously. In fact, it turns out that the best fraction of the optimal revenue that is extractable by a sequential posted price mechanism is proportional to the ratio of the highest and lowest possible valuations. We prove that for two simple generalizations of these mechanisms, a better revenue performance can be achieved: if the sequential posted price mechanism has, for each buyer, the option of either proposing an offer or asking the buyer for its valuation, then an Omega(1/max{1,d}) fraction of the optimal revenue can be extracted, where d denotes the degree of dependence of the valuations, ranging from complete independence (d=0) to arbitrary dependence (d=n-1). Moreover, when we generalize the sequential posted price mechanisms further, such that the mechanism has the ability to make a take-it-or-leave-it offer to the i-th buyer that depends on the valuations of all buyers except i's, we prove that a constant fraction (2-sqrt{e})/4 ≈ 0.088 of the optimal revenue can always be extracted.
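In its simplest form, the mechanism analyzed here is just a loop of take-it-or-leave-it offers; the sketch below (with an arbitrary fixed price chosen purely for illustration) shows the k-unit, unit-demand setting described above.

```python
import random

def sequential_posted_price(valuations, price, k):
    """Offer `price` to each buyer in turn; a buyer accepts iff her
    valuation is at least the price; at most k units are sold."""
    revenue, winners = 0.0, []
    for buyer, v in enumerate(valuations):
        if len(winners) == k:
            break
        if v >= price:
            winners.append(buyer)
            revenue += price
    return revenue, winners

random.seed(1)
vals = [random.uniform(0, 1) for _ in range(10)]
print(sequential_posted_price(vals, price=0.6, k=3))
```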
Reducing hidden bias in the data and ensuring fairness in algorithmic data analysis has recently received significant attention. We complement several recent papers in this line of research by introducing a general method to reduce bias in the data through random projections in a "fair" subspace. We apply this method to the densest subgraph problem. For densest subgraph, our approach based on fair projections allows us to recover, both theoretically and empirically, an almost optimal, fair, dense subgraph hidden in the input data. We also show that, under the small set expansion hypothesis, approximating this problem beyond a factor of 2 is NP-hard, and we give a polynomial-time algorithm with a matching approximation bound.
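For reference, the classic greedy peeling routine for the unconstrained densest subgraph problem (Charikar's 2-approximation for |E|/|V| density) is sketched below; the paper's fair variant layers random projections on top of this primitive, and the sketch does not implement that fairness step.

```python
from collections import defaultdict

def densest_subgraph_peel(edges):
    """Greedy peeling: repeatedly remove a minimum-degree vertex and
    keep the densest intermediate subgraph (density = |E|/|V|)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nodes = set(adj)
    m = sum(len(a) for a in adj.values()) // 2
    best_density, best_nodes = 0.0, set(nodes)
    while nodes:
        density = m / len(nodes)
        if density > best_density:
            best_density, best_nodes = density, set(nodes)
        u = min(nodes, key=lambda x: len(adj[x]))
        for w in adj[u]:
            adj[w].discard(u)
        m -= len(adj[u])
        nodes.discard(u)
        adj[u].clear()
    return best_nodes, best_density

demo = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]
print(densest_subgraph_peel(demo))  # the 4-clique {0, 1, 2, 3} wins
```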
Single-Sample Prophet Inequalities via Greedy-Ordered Selection. Constantine Caramanis, Paul Dütting, Matthew Faw, Federico Fusco, Philip Lazos, Stefano Leonardi, Orestis Papadigenopoulos, Emmanouil Pountourakis, and Rebecca Reiffenhäuser. In Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 1298-1325. DOI: https://doi.org/10.1137/1.9781611977073.54
We study single-sample prophet inequalities (SSPIs), i.e., prophet inequalities where only a single sample from each prior distribution is available. Besides a direct, and optimal, SSPI for the basic single choice problem [Rubinstein et al., 2020], most existing SSPI results were obtained via an elegant, but inherently lossy, reduction to order-oblivious secretary (OOS) policies [Azar et al., 2014]. Motivated by this discrepancy, we develop an intuitive and versatile greedy-based technique that yields SSPIs directly rather than through the reduction to OOSs. Our results can be seen as generalizing and unifying a number of existing results in the area of prophet and secretary problems. Our algorithms significantly improve on the competitive guarantees for a number of interesting scenarios (including general matching with edge arrivals, bipartite matching with vertex arrivals, and certain matroids), and capture new settings (such as budget additive combinatorial auctions). Complementing our algorithmic results, we also consider mechanism design variants. Finally, we analyze the power and limitations of different SSPI approaches by providing a partial converse to the reduction from SSPI to OOS given by Azar et al.
Constrained submodular maximization problems encompass a wide variety of applications, including personalized recommendation, team formation, and revenue maximization via viral marketing. The massive instances occurring in modern-day applications can render existing algorithms prohibitively slow, while frequently, those instances are also inherently stochastic. Focusing on these challenges, we revisit the classic problem of maximizing a (possibly non-monotone) submodular function subject to a knapsack constraint. We present a simple randomized greedy algorithm that achieves a $5.83$-approximation and runs in $O(n \log n)$ time, i.e., at least a factor $n$ faster than other state-of-the-art algorithms. The robustness of our approach allows us to further transfer it to a stochastic version of the problem. There, we obtain a $9$-approximation to the best adaptive policy, which is the first constant approximation for non-monotone objectives. Experimental evaluation of our algorithms showcases their improved performance on real and synthetic data.
In a sponsored search auction the advertisement slots on a search result page are generally ordered by click-through rate. Bidders have a valuation, which is usually assumed to be linear in the click-through rate, a budget constraint, and receive at most one slot per search result page (round). We study multi-round sponsored search auctions, where the different rounds are linked through the budget constraints of the bidders and the valuation of a bidder for all rounds is the sum of the valuations for the individual rounds. All mechanisms published so far either study one-round sponsored search auctions or the setting where every round has only one slot and all slots have the same click-through rate, which is identical to a multi-item auction. This paper contains the following three results: (1) We give the first mechanism for the multi-round sponsored search problem where different slots have different click-through rates. Our mechanism is incentive compatible in expectation, individually rational in expectation, Pareto optimal in expectation, and also ex-post Pareto optimal for each realized outcome. (2) Additionally we study the combinatorial setting, where each bidder is only interested in a subset of the rounds. We give a deterministic, incentive compatible, individually rational, and Pareto optimal mechanism for the setting where all slots have the same click-through rate. (3) We present an impossibility result for auctions where bidders have diminishing marginal valuations. Specifically, we show that even for the multi-unit (one slot per round) setting there is no incentive compatible, individually rational, and Pareto optimal mechanism for private diminishing marginal valuations and public budgets.
We develop and extend a line of recent work on the design of mechanisms for two-sided markets. The markets we consider consist of buyers and sellers of a number of items, and the aim of a mechanism is to improve the social welfare by arranging purchases and sales of the items. A mechanism is given prior distributions on the agents’ valuations of the items, but not the actual valuations; thus, the aim is to maximise the expected social welfare over these distributions. As in previous work, we are interested in the worst-case ratio between the social welfare achieved by a truthful mechanism and the best social welfare possible. Our main result is an incentive compatible and budget balanced constant-factor approximation mechanism in a setting where buyers have XOS valuations and sellers’ valuations are additive. This is the first such approximation mechanism for a two-sided market setting where the agents have combinatorial valuation functions. To achieve this result, we introduce a more general kind of demand query that seems to be needed in this situation. In the simpler case that sellers have unit supply (each having just one item to sell), we give a new mechanism whose welfare guarantee improves on a recent one in the literature. We also introduce a more demanding version of the strong budget balance (SBB) criterion, aimed at ruling out certain “unnatural” transactions satisfied by SBB. We show that the stronger version is satisfied by our mechanisms.
Incentive compatibility (IC) is a desirable property for any auction mechanism, including those used in online advertising. However, in real-world applications, practical constraints and complex environments often result in mechanisms that lack incentive compatibility. Recently, several papers investigated the problem of deploying black-box statistical tests to determine if an auction mechanism is incentive compatible by using the notion of IC-Regret that measures the regret of a truthful bidder. Unfortunately, most of those methods are computationally intensive, since they require the execution of many counterfactual experiments.
We study the revenue performance of sequential posted-price mechanisms and some natural extensions for a setting where the valuations of the buyers are drawn from a correlated distribution. Sequential posted-price mechanisms are conceptually simple mechanisms that work by proposing a “take-it-or-leave-it” offer to each buyer. We apply sequential posted-price mechanisms to single-parameter multiunit settings in which each buyer demands only one item and the mechanism can assign the service to at most k of the buyers. For standard sequential posted-price mechanisms, we prove that with the valuation distribution having finite support, no sequential posted-price mechanism can extract a constant fraction of the optimal expected revenue, even with unlimited supply. We extend this result to the case of a continuous valuation distribution when various standard assumptions hold simultaneously (i.e., everywhere-supported, continuous, symmetric, and normalized (conditional) distributions that satisfy regularity, the MHR condition, and affiliation). In fact, it turns out that the best fraction of the optimal revenue that is extractable by a sequential posted-price mechanism is proportional to the ratio of the highest and lowest possible valuations. We prove that a simple generalization of these mechanisms achieves a better revenue performance; namely, if the sequential posted-price mechanism has for each buyer the option of either proposing an offer or asking the buyer for its valuation, then an Ω(1/max{1, d}) fraction of the optimal revenue can be extracted, where d denotes the degree of dependence of the valuations, ranging from complete independence (d = 0) to arbitrary dependence (d = n − 1).
A fair clustering instance is given a data set $A$ in which every point is assigned some color. Colors correspond to various protected attributes such as sex, ethnicity, or age. A fair clustering is an instance where membership of points in a cluster is uncorrelated with the coloring of the points. Of particular interest is the case where all colors are equally represented. If we have exactly two colors, Chierichetti, Kumar, Lattanzi and Vassilvitskii (NIPS 2017) showed that various $k$-clustering objectives admit a constant factor approximation. Since then, a number of follow-up works have attempted to extend this result to the multi-color case, though so far, the only known results either give no constant-factor approximation, apply only to special clustering objectives such as $k$-center, yield bicriteria approximations, or require $k$ to be constant. In this paper, we present a simple reduction from unconstrained $k$-clustering to fair $k$-clustering for a large range of clustering objectives including $k$-median, $k$-means, and $k$-center. The reduction loses only a constant factor in the approximation guarantee, marking the first true constant factor approximation for many of these problems.
Bilateral trade, a fundamental topic in economics, models the problem of intermediating between two strategic agents, a seller and a buyer, willing to trade a good for which they hold private valuations. In this paper, we cast the bilateral trade problem in a regret minimization framework over $T$ rounds of seller/buyer interactions, with no prior knowledge on their private valuations. Our main contribution is a complete characterization of the regret regimes for fixed-price mechanisms with different feedback models and private valuations, using as a benchmark the best fixed price in hindsight. More precisely, we prove the following tight bounds on the regret:
- $\Theta(\sqrt{T})$ for full feedback (i.e., direct revelation mechanisms);
- $\Theta(T^{2/3})$ for realistic feedback (i.e., posted-price mechanisms) and independent seller/buyer valuations with bounded densities;
- $\Theta(T)$ for realistic feedback and seller/buyer valuations with bounded densities;
- $\Theta(T)$ for realistic feedback and independent seller/buyer valuations;
- $\Theta(T)$ for the adversarial setting.
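To see what the full-feedback $\Theta(\sqrt{T})$ regime looks like operationally, here is a hedged sketch (not the paper's algorithm): once the valuations of a round are revealed, the gain from trade of every price on a grid can be evaluated, so a standard exponential-weights (Hedge) learner applies directly. The grid, learning rate, and uniform valuations are illustrative assumptions.

```python
import math, random

def gain_from_trade(price, s, b):
    # Trade happens iff the seller accepts (s <= price) and the buyer
    # accepts (price <= b); the realized welfare gain is then b - s.
    return (b - s) if s <= price <= b else 0.0

def hedge_fixed_price(rounds, prices, eta=0.1, seed=0):
    rng = random.Random(seed)
    logw = [0.0] * len(prices)
    total = 0.0
    for s, b in rounds:
        mx = max(logw)
        weights = [math.exp(x - mx) for x in logw]
        i = rng.choices(range(len(prices)), weights=weights)[0]
        total += gain_from_trade(prices[i], s, b)
        # Full feedback: every grid price gets its counterfactual reward.
        for j, p in enumerate(prices):
            logw[j] += eta * gain_from_trade(p, s, b)
    return total

random.seed(7)
rounds = [(random.random(), random.random()) for _ in range(5000)]
prices = [k / 20 for k in range(21)]
print(hedge_fixed_price(rounds, prices))
```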
We study the problem of regret minimization for a single bidder in a sequence of first-price auctions where the bidder discovers the item's value only if the auction is won. Our main contribution is a complete characterization, up to logarithmic factors, of the minimax regret in terms of the auction's transparency, which controls the amount of information on competing bids disclosed by the auctioneer at the end of each auction. Our results hold under different assumptions (stochastic, adversarial, and their smoothed variants) on the environment generating the bidder's valuations and competing bids. These minimax rates reveal how the interplay between transparency and the nature of the environment affects how fast one can learn to bid optimally in first-price auctions.
Fair allocation of indivisible goods has attracted extensive attention over the last two decades, yielding numerous elegant algorithmic results and producing challenging open questions. The problem becomes much harder in the presence of strategic agents. Ideally, one would want to design truthful mechanisms that produce allocations with fairness guarantees. However, in the standard setting without monetary transfers, it is generally impossible to have truthful mechanisms that provide non-trivial fairness guarantees. Recently, Amanatidis et al. [2021] suggested the study of mechanisms that produce fair allocations in their equilibria. Specifically, when the agents have additive valuation functions, the simple Round-Robin algorithm always has pure Nash equilibria and the corresponding allocations are envy-free up to one good (EF1) with respect to the agents' true valuation functions. Following this agenda, we show that this outstanding property of the Round-Robin mechanism extends much beyond the above default assumption of additivity. In particular, we prove that for agents with cancelable valuation functions (a natural class that contains, e.g., additive and budget-additive functions), this simple mechanism always has equilibria and even its approximate equilibria correspond to approximately EF1 allocations with respect to the agents' true valuation functions. Further, we show that the approximate EF1 fairness of approximate equilibria surprisingly holds for the important class of submodular valuation functions as well, even though exact equilibria fail to exist!
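The Round-Robin mechanism at the center of this result is simple enough to state in a few lines; a minimal sketch for additive (reported) valuations follows, with a made-up valuation matrix for the demo.

```python
def round_robin(valuations):
    """Round-Robin allocation: agents take turns, each picking her
    favorite good still available (under additive valuations the
    resulting allocation is envy-free up to one good, EF1)."""
    n = len(valuations)
    m = len(valuations[0])
    available = set(range(m))
    bundles = [[] for _ in range(n)]
    turn = 0
    while available:
        agent = turn % n
        g = max(available, key=lambda j: valuations[agent][j])
        bundles[agent].append(g)
        available.discard(g)
        turn += 1
    return bundles

vals = [[5, 1, 3, 2], [4, 4, 1, 1], [1, 5, 2, 6]]
print(round_robin(vals))  # -> [[0, 2], [1], [3]]
```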
As freelancing work keeps on growing almost everywhere, due to a sharp decrease in communication costs and to the widespread adoption of Internet-based labour marketplaces (e.g., guru.com, freelancer.com, mturk.com, upwork.com), many researchers and practitioners have started exploring the benefits of outsourcing and crowdsourcing [13, 14, 16, 23, 25, 29]. Since employers often use these platforms to find a group of workers to complete a specific task, researchers have focused their efforts on the study of team formation and matching algorithms and on the design of effective incentive schemes [2, 3, 4, 17]. Nevertheless, only recently have several concerns been raised on possibly unfair biases introduced through the algorithms used to carry out these selection and matching procedures. For this reason, researchers have started studying the fairness of algorithms related to these online marketplaces [8, 19], looking for intelligent ways to overcome the algorithmic bias that frequently arises. Broadly speaking, the aim is to guarantee that, for example, the process of hiring workers through the use of machine learning and algorithmic data analysis tools does not discriminate, even unintentionally, on grounds of nationality or gender. In this short paper, we define the Fair Team Formation problem in the following way: given an online labour marketplace where each worker possesses one or more skills, and where all workers are divided into two or more non-overlapping classes (for example, men and women), we want to design an algorithm that is able to find a team with all the skills needed to complete a given task, and that has the same number of people from all classes. We provide inapproximability results for the Fair Team Formation problem together with four algorithms for the problem itself. We also tested the effectiveness of our algorithmic solutions by performing experiments using real data from an online labour marketplace.
The Associazione Medici Diabetologi (AMD) collects and manages one of the largest worldwide-available collections of diabetic patient records, also known as the AMD database. This paper presents the initial results of an ongoing project whose focus is the application of Artificial Intelligence and Machine Learning techniques for conceptualizing, cleaning, and analyzing such an important and valuable dataset, with the goal of providing predictive insights to better support diabetologists in their diagnostic and therapeutic choices.
We consider budget constrained combinatorial auctions where bidder $i$ has a private value $v_i$ for each of the items in some set $S_i$; agent $i$ also has a budget constraint $b_i$. The value to agent $i$ of a set of items $R$ is $|R \cap S_i| \cdot v_i$. Such auctions capture adword auctions, where advertisers offer a bid for those adwords that (hopefully) reach their target audience, and advertisers also have budgets. It is known that even if all items are identical and all budgets are public it is not possible to be truthful and efficient. Our main result is a novel auction that runs in polynomial time, is incentive compatible, and ensures Pareto-optimality. The auction is incentive compatible with respect to the private valuations $v_i$, whereas the budgets $b_i$ and the sets of interest $S_i$ are assumed to be public knowledge. This extends the result of Dobzinski et al. [3, 4] for auctions of multiple identical items and public budgets to single-valued combinatorial auctions with public budgets.
Efficient and truthful mechanisms to price time on remote servers/machines have been the subject of much work in recent years due to the importance of the cloud market. This paper considers online revenue maximization for a unit capacity server, when jobs are non-preemptive, in the Bayesian setting: at each time step, one job arrives, with parameters drawn from an underlying distribution. We design an efficiently computable truthful posted price mechanism, which maximizes revenue in expectation and in retrospect, up to additive error. The prices are posted prior to learning the agent's type, and the computed pricing scheme is deterministic. We also show the pricing mechanism is robust to learning the job distribution from samples, where polynomially many samples suffice to obtain near optimal prices.
One of the most important barriers toward the widespread use of mobile robots in unstructured and human-populated work environments is the ability to plan a safe path. In this paper, we propose to delegate this activity to a human operator that walks in front of the robot, marking with her/his footsteps the path to be followed. The implementation of this approach requires a high degree of robustness in locating the specific person to be followed (the leader). We propose a three-phase approach to fulfil this goal: (1) identification and tracking of the person in the image space; (2) sensor fusion between camera data and laser sensors; (3) point interpolation with continuous curvature curves. The approach is described in the paper and extensively validated with experimental results.
We study single-sample prophet inequalities (SSPIs), i.e., prophet inequalities where only a single sample from each prior distribution is available. Besides a direct, and optimal, SSPI for the basic single choice problem [Rubinstein et al., 2020], most existing SSPI results were obtained via an elegant, but inherently lossy, reduction to order-oblivious secretary (OOS) policies [Azar et al., 2014]. Motivated by this discrepancy, we develop an intuitive and versatile greedy-based technique that yields SSPIs directly rather than through the reduction to OOSs. Our results can be seen as generalizing and unifying a number of existing results in the area of prophet and secretary problems. Our algorithms significantly improve on the competitive guarantees for a number of interesting scenarios (including general matching with edge arrivals, bipartite matching with vertex arrivals, and certain matroids), and capture new settings (such as budget additive combinatorial auctions). Complementing our algorithmic results, we also consider mechanism design variants. Finally, we analyze the power and limitations of different SSPI approaches by providing a partial converse to the reduction from SSPI to OOS given by Azar et al.
We consider the prophet inequality problem for (not necessarily bipartite) matching problems with independent edge values, under both edge arrivals and vertex arrivals. We show constant-factor prophet inequalities for the case where the online algorithm has only limited access to the value distributions through samples. First, we give a $16$-approximate prophet inequality for matching in general graphs under edge arrivals that uses only a single sample from each value distribution as prior information. Then, for bipartite matching and (one-sided) vertex arrivals, we show an improved bound of $8$ that also uses just a single sample from each distribution. Finally, we show how to turn our $16$-approximate single-sample prophet inequality into a truthful single-sample mechanism for online bipartite matching with vertex arrivals.
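For the basic single-choice problem, the single-sample idea is especially transparent: use the largest sampled value as a threshold. The sketch below shows only this simplest case (the matching results above need considerably more machinery); the exponential distributions are a hypothetical choice for the demo.

```python
import random

def single_sample_threshold(samples, stream):
    """Single-choice, single-sample prophet inequality: accept the
    first online value that meets the max of the offline samples
    (this threshold rule is half-competitive in expectation)."""
    tau = max(samples)
    for i, v in enumerate(stream):
        if v >= tau:
            return i, v
    return None, 0.0

rng = random.Random(3)
samples = [rng.expovariate(1.0) for _ in range(5)]  # one per distribution
stream = [rng.expovariate(1.0) for _ in range(5)]   # the online values
print(single_sample_threshold(samples, stream))
```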
A celebrated impossibility result by Myerson and Satterthwaite (1983) shows that any truthful mechanism for two-sided markets that maximizes social welfare must run a deficit, resulting in a necessity to relax welfare efficiency and the use of approximation mechanisms. Such mechanisms in general make extensive use of the Bayesian priors. In this work, we investigate a question of increasing theoretical and practical importance: how much prior information is required to design mechanisms with near-optimal approximations? Our first contribution is a more general impossibility result stating that no meaningful approximation is possible without any prior information, expanding the famous impossibility result of Myerson and Satterthwaite. Our second contribution is that one {\em single sample} (one number per item), arguably a minimum-possible amount of prior information, from each seller distribution is sufficient for a large class of two-sided markets. We prove matching upper and lower bounds on the best approximation that can be obtained with one single sample for subadditive buyers and additive sellers, regardless of computational considerations. Our third contribution is the design of computationally efficient blackbox reductions that turn any one-sided mechanism into a two-sided mechanism with a small loss in the approximation, while using only one single sample from each seller. On the way, our blackbox-type mechanisms deliver several interesting positive results in their own right, often beating even the state of the art that uses full prior information.
We study repeated bilateral trade where an adaptive $\sigma$-smooth adversary generates the valuations of sellers and buyers. We provide a complete characterization of the regret regimes for fixed-price mechanisms under different feedback models in the two cases where the learner can post either the same or different prices to buyers and sellers. We begin by showing that the minimax regret after $T$ rounds is of order $\sqrt{T}$ in the full-feedback scenario. Under partial feedback, any algorithm that has to post the same price to buyers and sellers suffers worst-case linear regret. However, when the learner can post two different prices at each round, we design an algorithm enjoying regret of order $T^{3/4}$ ignoring log factors. We prove that this rate is optimal by presenting a surprising $T^{3/4}$ lower bound, which is the main technical contribution of the paper.
We introduce the study of designing allocation mechanisms for fairly allocating indivisible goods in settings with interdependent valuation functions. In our setting, there is a set of goods that needs to be allocated to a set of agents (without disposal). Each agent is given a private signal, and his valuation function depends on the signals of all agents. Without the use of payments, there are strong impossibility results for designing strategyproof allocation mechanisms even in settings without interdependent values. Therefore, we turn to design mechanisms that always admit equilibria that are fair with respect to the true signals, despite the agents' potentially distorted perception. To do so, we first extend the definitions of pure Nash equilibrium and well-studied fairness notions in the literature to the interdependent setting. We devise simple allocation mechanisms that always admit a fair equilibrium with respect to the true signals. We complement this result by showing that, even for very simple cases with binary additive interdependent valuation functions, no allocation mechanism that always admits an equilibrium can guarantee that all equilibria are fair with respect to the true signals.
Motivated by many practical applications, in this paper we study {\em budget feasible mechanisms} where the goal is to procure independent sets from matroids. More specifically, we are given a matroid $\mathcal{M}=(E,\mathcal{I})$ where each ground (indivisible) element is a selfish agent. The cost of each element (i.e., for selling the item or performing a service) is only known to the element itself. There is a buyer with a budget having additive valuations over the set of elements $E$. The goal is to design an incentive compatible (truthful) budget feasible mechanism which procures an independent set of the matroid under the given budget that yields the largest value possible to the buyer. Our result is a deterministic, polynomial-time, individually rational, truthful and budget feasible mechanism with a $4$-approximation to the optimal independent set. Then, we extend our mechanism to the setting of matroid intersections in which the goal is to procure common independent sets from multiple matroids. We show that, given a polynomial-time deterministic blackbox that returns $\alpha$-approximation solutions to the matroid intersection problem, there exists a deterministic, polynomial-time, individually rational, truthful and budget feasible mechanism with a $(3\alpha+1)$-approximation to the optimal common independent set.
We study the problem of an online advertising system that wants to optimally spend an advertiser's given budget for a campaign across multiple platforms, without knowing the value for showing an ad to the users on those platforms. We model this challenging practical application as a Stochastic Bandits with Knapsacks problem over $T$ rounds of bidding with the set of arms given by the set of distinct bidding $m$-tuples, where $m$ is the number of platforms. We modify the algorithm proposed in Badanidiyuru et al. to extend it to the case of multiple platforms, obtaining an algorithm for both the discrete and continuous bid-spaces. Namely, for discrete bid spaces we give an algorithm with regret $O\left(OPT \sqrt{\frac{mn}{B}} + \sqrt{mn \cdot OPT}\right)$, where $OPT$ is the performance of the optimal algorithm that knows the distributions. For continuous bid spaces the regret of our algorithm is $\tilde{O}\left(m^{1/3} \cdot \min\left\{ B^{2/3}, (m T)^{2/3} \right\} \right)$. When restricted to this special case, this bound improves over Sankararaman and Slivkins in the regime $OPT \ll T$, as is the case in the particular application at hand. Second, we show an $\Omega\left(\sqrt{m \cdot OPT}\right)$ lower bound for the discrete case and an $\Omega\left( m^{1/3} B^{2/3}\right)$ lower bound for the continuous setting, almost matching the upper bounds. Finally, we use a real-world data set from a large internet online advertising company with multiple ad platforms and show that our algorithms outperform common benchmarks and satisfy the required properties warranted in the real-world application.
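As a rough illustration of the bandits-with-knapsacks setup (a simplified single-resource toy, not the paper's algorithm and not Badanidiyuru et al.'s): play the arm with the best optimistic reward-to-cost ratio until the budget runs out. The Bernoulli arm means below are hypothetical.

```python
import math, random

def bwk_ucb(pull, n_arms, budget, horizon, seed=0):
    """Simplified bandits-with-knapsacks heuristic: choose the arm with
    the best optimistic reward/cost ratio; stop when the budget is spent."""
    rng = random.Random(seed)
    n = [0] * n_arms
    rew = [0.0] * n_arms
    cost = [0.0] * n_arms
    spent = total = 0.0
    for t in range(1, horizon + 1):
        if spent >= budget:
            break
        def ratio(a):
            if n[a] == 0:
                return float("inf")     # force one pull of every arm
            bonus = math.sqrt(2 * math.log(t) / n[a])
            r_hat = rew[a] / n[a] + bonus              # optimistic reward
            c_hat = max(cost[a] / n[a] - bonus, 1e-9)  # optimistic cost
            return r_hat / c_hat
        a = max(range(n_arms), key=ratio)
        r, c = pull(a, rng)
        n[a] += 1; rew[a] += r; cost[a] += c
        spent += c; total += r
    return total

# Hypothetical instance: per-arm Bernoulli (reward_mean, cost_mean) pairs.
arms = [(0.3, 0.2), (0.5, 0.5), (0.8, 0.9)]
pull = lambda a, rng: (float(rng.random() < arms[a][0]),
                       float(rng.random() < arms[a][1]))
print(bwk_ucb(pull, n_arms=3, budget=100, horizon=2000))
```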
In this work we introduce a new class of mechanisms composed of a traditional Generalized Second Price (GSP) auction and a fair division scheme, in order to achieve some desired level of fairness between groups of Bayesian strategic advertisers. We propose two mechanisms, beta-Fair GSP and GSP-EFX, that compose GSP with, respectively, an envy-free up to one item (EF1) and an envy-free up to any item (EFX) fair division scheme. The payments of GSP are adjusted in order to compensate advertisers that suffer a loss of efficiency due to the fair division stage. We investigate the strategic learning implications of the deployment of sponsored search auction mechanisms that obey such fairness criteria. We prove that, for both mechanisms, if bidders play so as to minimize their external regret, they are guaranteed to reach an equilibrium with good social welfare. We also prove that the mechanisms are budget balanced, so that the payments charged by the traditional GSP mechanism are a good proxy of the total compensation offered to the advertisers. Finally, we evaluate the quality of the allocations through experiments on real-world data.
We consider the classic online problem of scheduling on a single machine to minimize total flow time. In STOC 2021, the concept of robustness to distortion in processing times was introduced: for every distortion factor $\mu$, an $O(\mu^2)$-competitive algorithm $\operatorname{ALG}_\mu$ which handles distortions up to $\mu$ was presented. However, using that result requires one to know the distortion of the input in advance, which is impractical. We present the first \emph{distortion-oblivious} algorithms: algorithms which are competitive for \emph{every} input of \emph{every} distortion, and thus do not require knowledge of the distortion in advance. Moreover, the competitive ratios of our algorithms are $\tilde{O}(\mu)$, which is a quadratic improvement over the algorithm from STOC 2021, and is nearly optimal (we show a randomized lower bound of $\Omega(\mu)$ on competitiveness).
The Pandora's Box Problem, originally formalized by Weitzman in 1979, models selection from a set of random, alternative options, when evaluation is costly. This includes, for example, the problem of hiring a skilled worker, where only one hire can be made, but the evaluation of each candidate is an expensive procedure. Weitzman showed that the Pandora's Box Problem admits an elegant, simple solution, where the options are considered in decreasing order of reservation value, i.e., the value that reduces to zero the expected marginal gain for opening the box. We study for the first time this problem when order (or precedence) constraints are imposed between the boxes. We show that, despite the difficulty of defining reservation values for the boxes which take into account both in-depth and in-breadth exploration of the various options, greedy optimal strategies exist and can be efficiently computed for tree-like order constraints. We also prove that finding approximately optimal adaptive search strategies is NP-hard when certain matroid constraints are used to further restrict the set of boxes which may be opened, or when the order constraints are given as reachability constraints on a DAG. We complement the above result by giving approximate adaptive search strategies based on a connection between optimal adaptive strategies and non-adaptive strategies with bounded adaptivity gap for a carefully relaxed version of the problem.
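Weitzman's index rule is easy to state computationally: estimate each box's reservation value σ (the solution of cost = E[(v − σ)+]) and open boxes in decreasing σ, stopping once the best value seen beats every remaining index. The sketch below (with made-up exponential boxes, and without the precedence constraints studied in this paper) illustrates exactly that unconstrained baseline.

```python
import random

def reservation_value(sample_fn, cost, n=20000, lo=0.0, hi=100.0, seed=0):
    """Weitzman's reservation value: the sigma solving
    cost = E[max(v - sigma, 0)], estimated from samples by bisection."""
    rng = random.Random(seed)
    vs = [sample_fn(rng) for _ in range(n)]
    def excess(sigma):
        return sum(max(v - sigma, 0.0) for v in vs) / n
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if excess(mid) > cost else (lo, mid)
    return (lo + hi) / 2

def pandora(boxes, rng):
    """Open boxes in decreasing reservation-value order; stop when the
    best value seen beats every unopened box's reservation value."""
    order = sorted(boxes, key=lambda b: -b["sigma"])
    best, paid = 0.0, 0.0
    for b in order:
        if best >= b["sigma"]:
            break
        paid += b["cost"]
        best = max(best, b["sample"](rng))
    return best - paid

rng = random.Random(1)
mk = lambda scale: (lambda r: r.expovariate(1.0 / scale))
boxes = [{"cost": c, "sample": mk(s)} for c, s in [(1, 5), (2, 8), (0.5, 3)]]
for b in boxes:
    b["sigma"] = reservation_value(b["sample"], b["cost"])
print(pandora(boxes, rng))
```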
We study the unsplittable flow on a path problem (UFP) where we are given a path with non-negative edge capacities and tasks, which are characterized by a subpath, a demand, and a profit. The goal is to find the most profitable subset of tasks whose total demand does not violate the edge capacities. This problem naturally arises in many settings such as bandwidth allocation, resource constrained scheduling, and interval packing. A natural task classification defines the size of a task i to be the ratio delta between the demand of i and the minimum capacity of any edge used by i. If all tasks have sufficiently small delta, the problem is already well understood and there is a 1+eps approximation. For the complementary setting---instances whose tasks all have large delta---much remains unknown, and the best known polynomial-time procedure gives only (for any constant delta>0) an approximation ratio of 6+eps. In this paper we present a polynomial time 1+eps approximation for the latter setting. Key to this result is a complex geometrically inspired dynamic program. Here each task is represented as a segment underneath the capacity curve, and we identify a proper maze-like structure so that each passage of the maze is crossed by only O(1) tasks in the computed solution. In combination with the known PTAS for delta-small tasks, our result implies a 2+eps approximation for UFP, improving on the previous best 7+eps approximation [Bonsma et al., FOCS 2011]. We remark that our improved approximation factor matches the best known approximation ratio for the considerably easier special case of uniform edge capacities.
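The small/large task split driving this case analysis is just a bottleneck computation; a tiny sketch follows, with a hypothetical path instance and an arbitrary threshold chosen for the demo.

```python
def task_delta(task, capacity):
    """Size of a UFP task: its demand divided by the minimum capacity
    of any edge on its subpath (edges are numbered along the path)."""
    first_edge, last_edge_excl, demand = task
    bottleneck = min(capacity[first_edge:last_edge_excl])
    return demand / bottleneck

# Path with 5 edges; tasks are (first_edge, last_edge_excl, demand).
capacity = [10, 4, 8, 6, 10]
tasks = [(0, 3, 2), (1, 5, 3), (2, 4, 6)]
delta = 0.5  # split threshold used by the small/large case analysis
small = [t for t in tasks if task_delta(t, capacity) <= delta]
large = [t for t in tasks if task_delta(t, capacity) > delta]
print(small, large)  # -> [(0, 3, 2)] [(1, 5, 3), (2, 4, 6)]
```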
Fair allocation of indivisible goods has attracted extensive attention over the last two decades, yielding numerous elegant algorithmic results and producing challenging open questions. The problem becomes much harder in the presence of strategic agents. Ideally, one would want to design truthful mechanisms that produce allocations with fairness guarantees. However, in the standard setting without monetary transfers, it is generally impossible to have truthful mechanisms that provide nontrivial fairness guarantees. Recently, Amanatidis et al. [Amanatidis G, Birmpas G, Fusco F, Lazos P, Leonardi S, Reiffenhäuser R (2023) Allocating indivisible goods to strategic agents: Pure Nash equilibria and fairness. Math. Oper. Res., ePub ahead of print November 30, https://doi.org/10.1287/moor.2022.0058 ] suggested the study of mechanisms that produce fair allocations in their equilibria. Specifically, when the agents have additive valuation functions, the simple Round-Robin algorithm always has pure Nash equilibria, and the corresponding allocations are envy-free up to one good (EF1) with respect to the agents’ true valuation functions. Following this agenda, we show that this outstanding property of the Round-Robin mechanism extends much beyond the above default assumption of additivity. In particular, we prove that for agents with cancelable valuation functions (a natural class that contains, e.g., additive and budget-additive functions), this simple mechanism always has equilibria, and even its approximate equilibria correspond to approximately EF1 allocations with respect to the agents’ true valuation functions. Furthermore, we show that the approximate EF1 fairness of approximate equilibria surprisingly holds for the important class of submodular valuation functions as well, even though exact equilibria fail to exist. Funding: This work was supported by the Horizon 2020 European Research Council Advanced Grant “AMDROMA: Algorithmic and Mechanism Design Research in Online Markets” [Grant 788893], the Ministero dell’Università e della Ricerca Research project of national interest (PRIN) “ALGADIMAR: Algorithms, Games, and Digital Markets,” the Nederlandse Organisatie voor Wetenschappelijk Onderzoek Veni Project “Algorithmic Fair Division in Dynamic, Socially Constrained Environments” [Grant VI.Veni.192.153], and the National Recovery and Resilience Plan Greece 2.0 funded by the European Union under the NextGenerationEU Program [Grant MIS 5154714].
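For concreteness, here is a minimal sketch of the Round-Robin mechanism discussed in the two entries above, run on the agents' reported additive values; the input format is an illustrative assumption.

```python
# A minimal sketch of the Round-Robin mechanism: agents report additive
# values and, in a fixed order, repeatedly pick their favourite remaining
# good according to those reports.

def round_robin(reported_values):
    """reported_values[i][g]: agent i's reported value for good g.
    Returns a list of bundles (sets of good indices), one per agent."""
    n = len(reported_values)
    m = len(reported_values[0])
    remaining = set(range(m))
    bundles = [set() for _ in range(n)]
    turn = 0
    while remaining:
        agent = turn % n
        # The agent takes the remaining good with the highest reported value.
        best = max(remaining, key=lambda g: reported_values[agent][g])
        bundles[agent].add(best)
        remaining.remove(best)
        turn += 1
    return bundles

print(round_robin([[5, 1, 3], [2, 4, 6]]))  # [{0, 1}, {2}]
```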
This paper has been withdrawn by the author.
We study envy-free pricing mechanisms in matching markets with $m$ items and $n$ budget constrained buyers. Each buyer is interested in a subset of the items on sale, and she appraises every item in her preference-set at the same single value. Moreover, each buyer has a budget that constrains the maximum affordable payment, while she aims to obtain as many items of her preference-set as possible. Our goal is to compute an envy-free pricing allocation that maximizes the revenue, i.e., the total payment charged to the buyers. This pricing problem is hard to approximate better than $\Omega({\rm min} \{n,m\}^{1/2-\epsilon})$ for any $\epsilon>0$, unless $P=NP$. The hardness result is due to the presence of the matching constraints, given that the simpler multi-unit case can be approximated up to a constant factor of $2$. The goal of this paper is to circumvent the hardness result by restricting ourselves to specific settings of valuations and budgets. Two particularly significant scenarios are: each buyer has a budget that is greater than her single-value valuation, and each buyer has a budget that is lower than her single-value valuation. Surprisingly, in both scenarios we are able to achieve a $1/4$-approximation to the optimal envy-free revenue. The algorithms utilize a novel version of the Ausubel ascending price auction. These results may suggest that, although it is difficult to approximate the optimal revenue in general, ascending price auctions could achieve relatively good revenue in many practical settings.
Bilateral trade is a fundamental economic scenario comprising a strategically acting buyer and seller, each holding valuations for the item, drawn from publicly known distributions. A mechanism is supposed to facilitate trade between these agents, if such trade is beneficial. It was recently shown that the only mechanisms that are simultaneously DSIC (dominant-strategy incentive compatible), SBB (strongly budget balanced), and ex-post IR (individually rational) are fixed price mechanisms, i.e., mechanisms that are parametrised by a price $p$, and trade occurs if and only if the valuation of the buyer is at least $p$ and the valuation of the seller is at most $p$. The gain from trade is the increase in welfare that results from applying a mechanism; here we study the gain from trade achievable by fixed price mechanisms. We explore this question for both the bilateral trade setting, and a double auction setting where there are multiple buyers and sellers. We first identify a fixed price mechanism that achieves a gain from trade of at least 2/r times the optimum, where r is the probability that the seller's valuation does not exceed the buyer's valuation. This extends a previous result by McAfee. Subsequently, we improve this approximation factor in an asymptotic sense, by showing that a more sophisticated rule for setting the fixed price results in an expected gain from trade within a factor $O(\log(1/r))$ of the optimal gain from trade. This is asymptotically the best approximation factor possible. Lastly, we extend our study of fixed price mechanisms to the double auction setting defined by a set of multiple i.i.d. unit demand buyers, and i.i.d. unit supply sellers. We present a fixed price mechanism that, for all $\epsilon > 0$, achieves a gain from trade of at least $(1-\epsilon)$ times the expected optimal gain from trade with probability $1 - 2/e^{\#T \epsilon^2 /2}$, where $\#T$ is the expected number of trades resulting from the double auction.
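The fixed price rule itself is simple to state in code. The sketch below implements only the trade rule (trade occurs iff the price lies between the seller's and the buyer's valuations); how the price $p$ is chosen, which is the substance of the result above, is deliberately left out.

```python
# A minimal sketch of a fixed price mechanism for bilateral trade: trade
# happens iff the buyer's valuation is at least p and the seller's is at
# most p; both then trade at price p. This is the textbook rule, not the
# paper's price-selection procedure.

def fixed_price_trade(p, seller_value, buyer_value):
    """Returns (trade_happened, gain_from_trade)."""
    if seller_value <= p <= buyer_value:
        # Welfare increases by buyer_value - seller_value when trade occurs.
        return True, buyer_value - seller_value
    return False, 0.0

print(fixed_price_trade(0.5, 0.3, 0.8))  # (True, 0.5)
```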
The online joint replenishment problem (JRP) is a fundamental problem in the area of online problems with delay. Over the last decade, several works have studied generalizations of JRP with different cost functions for servicing requests. Most prior works on JRP and its generalizations have focused on the clairvoyant setting. Recently, Touitou [Tou23a] developed a non-clairvoyant framework that provided an $O(\sqrt{n \log n})$ upper bound for a wide class of generalized JRP, where $n$ is the number of request types. We advance the study of non-clairvoyant algorithms by providing a simpler, modular framework that matches the competitive ratio established by Touitou for the same class of generalized JRP. Our key insight is to leverage universal algorithms for Set Cover to approximate arbitrary monotone subadditive functions using a simple class of functions termed \textit{disjoint}. This allows us to reduce the problem to several independent instances of the TCP Acknowledgement problem, for which a simple 2-competitive non-clairvoyant algorithm is known. The modularity of our framework is a major advantage as it allows us to tailor the reduction to specific problems and obtain better competitive ratios. In particular, we obtain tight $O(\sqrt{n})$-competitive algorithms for two significant problems: Multi-Level Aggregation and Weighted Symmetric Subadditive Joint Replenishment. We also show that, in contrast, Touitou's algorithm is $\Omega(\sqrt{n \log n})$-competitive for both of these problems.
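Since the reduction above bottoms out in the TCP Acknowledgement problem, a discrete-time sketch of its classic 2-competitive rule may help: acknowledge as soon as the accumulated delay cost of pending requests reaches the service cost. Unit time steps and unit delay rates are simplifying assumptions made here.

```python
# A minimal, discrete-time sketch of the 2-competitive TCP Acknowledgement
# rule: send an acknowledgement once the accumulated delay cost of pending
# requests reaches the fixed service cost.

def tcp_ack(arrival_times, service_cost):
    """arrival_times: sorted integer times of requests. Returns the times
    at which acknowledgements are sent."""
    acks, pending = [], []
    arrivals = list(arrival_times)
    t, horizon = 0, max(arrival_times) + service_cost + 1
    while t < horizon:
        while arrivals and arrivals[0] <= t:
            pending.append(arrivals.pop(0))
        # Accumulated delay cost of pending requests up to time t.
        delay = sum(t - a for a in pending)
        if pending and delay >= service_cost:
            acks.append(t)
            pending = []  # all pending requests are served by this ack
        t += 1
    return acks

print(tcp_ack([0, 1, 5], 3))  # [2, 8]
```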
We study the problem of regret minimization for a single bidder in a sequence of first-price auctions where the bidder discovers the item's value only if the auction is won. Our main contribution is a complete characterization, up to logarithmic factors, of the minimax regret in terms of the auction's transparency, which controls the amount of information on competing bids disclosed by the auctioneer at the end of each auction. Our results hold under different assumptions (stochastic, adversarial, and their smoothed variants) on the environment generating the bidder's valuations and competing bids. These minimax rates reveal how the interplay between transparency and the nature of the environment affects how fast one can learn to bid optimally in first-price auctions.
In the classical principal-agent hidden-action model, a principal delegates the execution of a costly task to an agent, who can choose among actions with different costs and different success probabilities to accomplish the task. To incentivize the agent to exert effort, the principal can commit to a contract, which specifies the amount of payment based on the task's success. A crucial assumption of this model is that the principal can only base the payment on the outcome but not on the agent's chosen action. In this work, we relax the hidden-action assumption and introduce a new model where the principal is allowed to inspect subsets of actions at some cost that depends on the inspected subset. If, through the inspection, the principal discovers that the agent did not select the agreed-upon action, the principal can withhold payment. This relaxation of the model introduces a broader strategy space for the principal, who now faces a tradeoff between positive incentives (increasing payment) and negative incentives (increasing inspection). We show how to find the best deterministic incentive-compatible inspection scheme for all monotone inspection cost functions. We then turn to randomized inspection schemes and show that one can efficiently find the best randomized incentive-compatible inspection scheme when the inspection cost function is submodular. We complement this result by showing that it is impossible to efficiently find the optimal randomized inspection scheme for the more general case of XOS inspection cost functions.
Constrained maximization of submodular functions poses a central problem in combinatorial optimization. In many realistic scenarios, a number of agents need to maximize multiple submodular objectives over the same ground set. We study such a setting, where the different solutions must be disjoint, and thus, questions of fairness arise. Inspired by the fair division literature, we suggest a simple round-robin protocol, where agents are allowed to build their solutions one item at a time by taking turns. Unlike what is typical in fair division, however, the prime goal here is to provide a fair algorithmic environment; each agent is allowed to use any algorithm for constructing their respective solutions. We show that just by following simple greedy policies, agents have solid guarantees for both monotone and non-monotone objectives, and for combinatorial constraints as general as $p$-systems (which capture cardinality and matroid intersection constraints). In the monotone case, our results include approximate EF1-type guarantees and their implications in fair division may be of independent interest. Further, although following a greedy policy may not be optimal in general, we show that consistently performing better than that is computationally hard.
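A hedged sketch of the protocol above, with each agent following the greedy policy on her turn; cardinality constraints stand in for general $p$-systems, and the input conventions are assumptions for illustration only.

```python
# A minimal sketch of the round-robin protocol where every agent plays
# greedily: on her turn she takes the available item with the largest
# marginal gain for her own submodular objective, if that gain is positive.

def round_robin_greedy(objectives, budgets, ground_set):
    """objectives[i]: a set function (set -> float) for agent i.
    budgets[i]: cardinality bound for agent i's solution."""
    n = len(objectives)
    available = set(ground_set)
    solutions = [set() for _ in range(n)]
    for turn in range(n * max(budgets)):  # bounded number of turns
        i = turn % n
        f, S = objectives[i], solutions[i]
        if len(S) >= budgets[i] or not available:
            continue  # agent i passes this turn
        e = max(available, key=lambda x: f(S | {x}) - f(S))
        if f(S | {e}) > f(S):  # only take items with positive marginal gain
            S.add(e)
            available.remove(e)
    return solutions

# Tiny coverage example: two agents share the same coverage objective.
covers = [{0, 1}, {1, 2}, {2, 3}]
f = lambda S: len(set().union(*(covers[e] for e in S))) if S else 0
print(round_robin_greedy([f, f], [1, 2], {0, 1, 2}))
```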
We study fully dynamic online selection problems in an adversarial/stochastic setting that includes Bayesian online selection, prophet inequalities, posted price mechanisms, and stochastic probing problems subject to combinatorial constraints. In the classical ``incremental'' version of the problem, selected elements remain active until the end of the input sequence. On the other hand, in the fully dynamic version of the problem, elements stay active for a limited time interval, and then leave. This models, for example, the online matching of tasks to workers with task/worker-dependent working times, and sequential posted pricing of perishable goods. A successful approach to online selection problems in the adversarial setting is given by the notion of Online Contention Resolution Scheme (OCRS), which uses a priori information to formulate a linear relaxation of the underlying optimization problem, whose optimal fractional solution is rounded online for any adversarial order of the input sequence. Our main contribution is providing a general method for constructing an OCRS for fully dynamic online selection problems. Then, we show how to employ such an OCRS to construct no-regret algorithms in a partial information model with semi-bandit feedback and adversarial inputs.
Bilateral trade, a fundamental topic in economics, models the problem of intermediating between two strategic agents, a seller and a buyer, willing to trade a good for which they hold private valuations. In this paper, we cast the bilateral trade problem in a regret minimization framework over $T$ rounds of seller/buyer interactions, with no prior knowledge on their private valuations. Our main contribution is a complete characterization of the regret regimes for fixed-price mechanisms with different feedback models and private valuations, using as a benchmark the best fixed price in hindsight. More precisely, we prove the following tight bounds on the regret:
- $\Theta(\sqrt{T})$ for full-feedback (i.e., direct revelation mechanisms);
- $\Theta(T^{2/3})$ for realistic feedback (i.e., posted-price mechanisms) and independent seller/buyer valuations with bounded densities;
- $\Theta(T)$ for realistic feedback and seller/buyer valuations with bounded densities;
- $\Theta(T)$ for realistic feedback and independent seller/buyer valuations;
- $\Theta(T)$ for the adversarial setting.
We study repeated bilateral trade where an adaptive $\sigma$-smooth adversary generates the valuations of sellers and buyers. We provide a complete characterization of the regret regimes for fixed-price mechanisms under different feedback models in the two cases where the learner can post either the same or different prices to buyers and sellers. We begin by showing that the minimax regret after $T$ rounds is of order $\sqrt{T}$ in the full-feedback scenario. Under partial feedback, any algorithm that has to post the same price to buyers and sellers suffers worst-case linear regret. However, when the learner can post two different prices at each round, we design an algorithm enjoying regret of order $T^{3/4}$ ignoring log factors. We prove that this rate is optimal by presenting a surprising $T^{3/4}$ lower bound, which is the main technical contribution of the paper.
We introduce the study of designing allocation mechanisms for fairly allocating indivisible goods in settings with interdependent valuation functions. In our setting, there is a set of goods that needs to be allocated to a set of agents (without disposal). Each agent is given a private signal, and his valuation function depends on the signals of all agents. Without the use of payments, there are strong impossibility results for designing strategyproof allocation mechanisms, even in settings without interdependent values. Therefore, we turn to designing mechanisms that always admit equilibria that are fair with respect to the true signals, despite the agents' potentially distorted perception. To do so, we first extend the definitions of pure Nash equilibrium and well-studied fairness notions in the literature to the interdependent setting. We devise simple allocation mechanisms that always admit a fair equilibrium with respect to the true signals. We complement this result by showing that, even for very simple cases with binary additive interdependent valuation functions, no allocation mechanism that always admits an equilibrium can guarantee that all equilibria are fair with respect to the true signals.
Constrained submodular maximization problems encompass a wide variety of applications, including personalized recommendation, team formation, and revenue maximization via viral marketing. The massive instances occurring in modern-day applications can render existing algorithms prohibitively slow. Moreover, frequently those instances are also inherently stochastic. Focusing on these challenges, we revisit the classic problem of maximizing a (possibly non-monotone) submodular function subject to a knapsack constraint. We present a simple randomized greedy algorithm that achieves a 5.83-approximation and runs in O(n log n) time, i.e., at least a factor n faster than other state-of-the-art algorithms. The versatility of our approach allows us to further transfer it to a stochastic version of the problem. There, we obtain a (9 + ε)-approximation to the best adaptive policy, which is the first constant approximation for non-monotone objectives. Experimental evaluation of our algorithms showcases their improved performance on real and synthetic data.
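The following is a simplified sketch in the spirit of the randomized greedy approach described above, not the paper's exact 5.83-approximation algorithm: at every step it samples one of the few best remaining items by marginal density among those still fitting the knapsack. The sampling width is an illustrative parameter.

```python
import random

# A simplified randomized-greedy sketch for submodular maximization under
# a knapsack constraint: repeatedly sample uniformly among the best few
# remaining items by marginal density that still fit the budget.

def randomized_density_greedy(f, costs, budget, sample_width=3, seed=0):
    """f: set -> float submodular objective; costs: dict item -> cost."""
    rng = random.Random(seed)
    S, spent = set(), 0.0
    remaining = set(costs)
    while True:
        fits = [e for e in remaining if spent + costs[e] <= budget]
        scored = sorted(fits, key=lambda e: (f(S | {e}) - f(S)) / costs[e],
                        reverse=True)
        top = [e for e in scored[:sample_width] if f(S | {e}) > f(S)]
        if not top:
            return S  # nothing fits or no positive marginal gain remains
        e = rng.choice(top)  # randomization guards against adversarial ties
        S.add(e)
        spent += costs[e]
        remaining.remove(e)

costs = {"a": 1.0, "b": 2.0, "c": 2.0}
f = lambda S: sum({"a": 1.5, "b": 2.0, "c": 2.5}[e] for e in S)  # modular toy
print(randomized_density_greedy(f, costs, budget=3.0))
```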
Efficient and truthful mechanisms to price resources on servers/machines have been the subject of much work in recent years due to the importance of the cloud market. This paper considers revenue maximization in the online stochastic setting with non-preemptive jobs and a unit capacity server. One agent/job arrives at every time step, with parameters drawn from the underlying distribution. We design a posted-price mechanism which can be efficiently computed and is revenue-optimal in expectation and in retrospect, up to additive error. The prices are posted prior to learning the agent's type, and the computed pricing scheme is deterministic, depending only on the length of the allotted time interval and on the earliest time the server is available. We also prove that the proposed pricing strategy is robust to imprecise knowledge of the job distribution and that a distribution learned from polynomially many samples is sufficient to obtain a near-optimal truthful pricing strategy.
Single-Sample Prophet Inequalities via Greedy-Ordered Selection. Constantine Caramanis, Paul Dütting, Matthew Faw, Federico Fusco, Philip Lazos, Stefano Leonardi, Orestis Papadigenopoulos, Emmanouil Pountourakis, and Rebecca Reiffenhäuser. Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 1298-1325. DOI: https://doi.org/10.1137/1.9781611977073.54. Abstract: We study single-sample prophet inequalities (SSPIs), i.e., prophet inequalities where only a single sample from each prior distribution is available. Besides a direct, and optimal, SSPI for the basic single choice problem [Rubinstein et al., 2020], most existing SSPI results were obtained via an elegant, but inherently lossy reduction to order-oblivious secretary (OOS) policies [Azar et al., 2014]. Motivated by this discrepancy, we develop an intuitive and versatile greedy-based technique that yields SSPIs directly rather than through the reduction to OOSs. Our results can be seen as generalizing and unifying a number of existing results in the area of prophet and secretary problems. Our algorithms significantly improve on the competitive guarantees for a number of interesting scenarios (including general matching with edge arrivals, bipartite matching with vertex arrivals, and certain matroids), and capture new settings (such as budget additive combinatorial auctions). Complementing our algorithmic results, we also consider mechanism design variants. Finally, we analyze the power and limitations of different SSPI approaches by providing a partial converse to the reduction from SSPI to OOS given by Azar et al.
Distortion-Oblivious Algorithms for Minimizing Flow Time. Yossi Azar, Stefano Leonardi, and Noam Touitou. Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 252-274. DOI: https://doi.org/10.1137/1.9781611977073.13. Abstract: We consider the classic online problem of scheduling on a single machine to minimize total flow time. In STOC 2021, the concept of robustness to distortion in processing times was introduced: for every distortion factor $\mu$, an $O(\mu^2)$-competitive algorithm $\operatorname{ALG}_\mu$ which handles distortions up to $\mu$ was presented. However, using that result requires one to know the distortion of the input in advance, which is impractical. We present the first distortion-oblivious algorithms: algorithms which are competitive for every input of every distortion, and thus do not require knowledge of the distortion in advance. Moreover, the competitive ratios of our algorithms are $\tilde{O}(\mu)$, which is a quadratic improvement over the algorithm from STOC 2021, and is nearly optimal (we show a randomized lower bound of $\Omega(\mu)$ on competitiveness).
We consider prophet inequalities under downward-closed constraints. In this problem, a decision-maker makes immediate and irrevocable choices on arriving elements, subject to constraints. Traditionally, performance is compared to the expected offline optimum, called the \textit{Ratio of Expectations} (RoE). However, the RoE has limitations, as it only guarantees the average performance compared to the optimum and might perform poorly against the realized ex-post optimal value. We study an alternative performance measure, the \textit{Expected Ratio} (EoR), namely the expectation of the ratio between the algorithm's and the prophet's values. The EoR offers robust guarantees, e.g., a constant EoR implies achieving a constant fraction of the offline optimum with constant probability. For the special case of single-choice problems, the EoR coincides with the well-studied notion of the probability of selecting the maximum. However, the EoR naturally generalizes the probability of selecting the maximum for combinatorial constraints, which are the main focus of this paper. Specifically, we establish two reductions: for every constraint, the RoE and the EoR are at most a constant factor apart. Additionally, we show that the EoR is a stronger benchmark than the RoE in that, for every instance (constraint and distribution), the RoE is at least a constant fraction of the EoR, but not vice versa. Both these reductions imply a wealth of EoR results in multiple settings where RoE results are known.
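In the notation of the entry above (with ALG the algorithm's value and OPT the prophet's, i.e., offline optimal, value on the same realization), the two benchmarks can be written compactly:

```latex
% RoE takes the ratio of expectations; EoR takes the expectation of the
% ratio, realization by realization.
\[
  \mathrm{RoE} \;=\; \frac{\mathbb{E}[\mathrm{ALG}]}{\mathbb{E}[\mathrm{OPT}]},
  \qquad
  \mathrm{EoR} \;=\; \mathbb{E}\!\left[\frac{\mathrm{ALG}}{\mathrm{OPT}}\right].
\]
```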
We study truthful mechanisms for welfare maximization in online bipartite matching. In our (multi-parameter) setting, every buyer is associated with a (possibly private) desired set of items, and has a private value for being assigned an item in her desired set. Unlike most online matching settings, where agents arrive online, in our setting the items arrive online in an adversarial order while the buyers are present for the entire duration of the process. This poses a significant challenge to the design of truthful mechanisms, due to the ability of buyers to strategize over future rounds. We provide an almost full picture of the competitive ratios in different scenarios, including myopic vs. non-myopic agents, tardy vs. prompt payments, and private vs. public desired sets. Among other results, we identify the frontier for which the celebrated $e/(e-1)$ competitive ratio for the vertex-weighted online matching of Karp, Vazirani and Vazirani extends to truthful agents and online items.
The Associazione Medici Diabetologi (AMD) collects and manages one of the largest worldwide-available collections of diabetic patient records, also known as the AMD database. This paper presents the initial results of an ongoing project whose focus is the application of Artificial Intelligence and Machine Learning techniques for conceptualizing, cleaning, and analyzing such an important and valuable dataset, with the goal of providing predictive insights to better support diabetologists in their diagnostic and therapeutic choices.
We consider the classic online problem of scheduling on a single machine to minimize total flow time. In STOC 2021, the concept of robustness to distortion in processing times was introduced: for every distortion factor $\mu$, an $O(\mu^2)$-competitive algorithm $\operatorname{ALG}_{\mu}$ which handles distortions up to $\mu$ was presented. However, using that result requires one to know the distortion of the input in advance, which is impractical. We present the first \emph{distortion-oblivious} algorithms: algorithms which are competitive for \emph{every} input of \emph{every} distortion, and thus do not require knowledge of the distortion in advance. Moreover, the competitive ratios of our algorithms are $\tilde{O}(\mu)$, which is a quadratic improvement over the algorithm from STOC 2021, and is nearly optimal (we show a randomized lower bound of $\Omega(\mu)$ on competitiveness).
The growing need to deal with massive instances motivates the design of algorithms balancing the quality of the solution with applicability. For the latter, an important measure is the \emph{adaptive complexity}, capturing the number of sequential rounds of parallel computation needed. In this work we obtain the first \emph{constant factor} approximation algorithm for non-monotone submodular maximization subject to a knapsack constraint with \emph{near-optimal} $O(\log n)$ adaptive complexity. Low adaptivity by itself, however, is not enough: one needs to account for the total number of function evaluations (or value queries) as well. Our algorithm asks $\tilde{O}(n^2)$ value queries, but can be modified to run with only $\tilde{O}(n)$ instead, while retaining a low adaptive complexity of $O(\log^2n)$. Besides the above improvement in adaptivity, this is also the first \emph{combinatorial} approach with sublinear adaptive complexity for the problem and yields algorithms comparable to the state-of-the-art even for the special cases of cardinality constraints or monotone objectives. Finally, we showcase our algorithms' applicability on real-world datasets.
Bilateral trade, a fundamental topic in economics, models the problem of intermediating between two strategic agents, a seller and a buyer, willing to trade a good for which they hold private valuations. Despite the simplicity of this problem, a classical result by Myerson and Satterthwaite (1983) affirms the impossibility of designing a mechanism which is simultaneously efficient, incentive compatible, individually rational, and budget balanced. This impossibility result fostered an intense investigation of meaningful trade-offs between these desired properties. Much work has focused on approximately efficient fixed-price mechanisms, e.g., Blumrosen and Dobzinski (2014; 2016) and Colini-Baldeschi et al. (2016), which have been shown to fully characterize strong budget balanced and ex-post individually rational direct revelation mechanisms. All these results, however, either assume some knowledge on the priors of the seller/buyer valuations, or a black box access to some samples of the distributions, as in D{\"u}tting et al. (2021). In this paper, we cast for the first time the bilateral trade problem in a regret minimization framework over rounds of seller/buyer interactions, with no prior knowledge on the private seller/buyer valuations. Our main contribution is a complete characterization of the regret regimes for fixed-price mechanisms with different models of feedback and private valuations, using as benchmark the best fixed price in hindsight. More precisely, we prove the following bounds on the regret:
- $\widetilde{\Theta}(\sqrt{T})$ for full-feedback (i.e., direct revelation mechanisms);
- $\widetilde{\Theta}(T^{2/3})$ for realistic feedback (i.e., posted-price mechanisms) and independent seller/buyer valuations with bounded densities;
- $\Theta(T)$ for realistic feedback and seller/buyer valuations with bounded densities;
- $\Theta(T)$ for realistic feedback and independent seller/buyer valuations;
- $\Theta(T)$ for the adversarial setting.
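As a rough illustration of the full-feedback regime above, one standard recipe for $\widetilde{O}(\sqrt{T})$-type guarantees against the best fixed price is to run a multiplicative-weights learner over a grid of candidate prices. The sketch below follows that generic recipe and is not the paper's algorithm; the grid size and learning rate are illustrative assumptions (in theory the learning rate should scale like $\sqrt{\log K / T}$ for $K$ grid points).

```python
import math
import random

# A generic multiplicative-weights sketch for the full-feedback setting:
# each round, post a random grid price; since both valuations are revealed,
# the gain from trade of every grid price can be computed and rewarded.

def hedge_fixed_price(rounds, grid_size=20, eta=0.3, seed=0):
    rng = random.Random(seed)
    prices = [k / (grid_size - 1) for k in range(grid_size)]
    weights = [1.0] * grid_size

    def gft(p, s, b):  # gain from trade when posting price p
        return b - s if s <= p <= b else 0.0

    total = 0.0
    for s, b in rounds:  # full feedback: seller and buyer values revealed
        k = rng.choices(range(grid_size), weights=weights)[0]
        total += gft(prices[k], s, b)
        # Full feedback lets us update the weight of every grid price.
        weights = [w * math.exp(eta * gft(p, s, b))
                   for w, p in zip(weights, prices)]
        m = max(weights)
        weights = [w / m for w in weights]  # rescale for numerical stability
    return total

print(hedge_fixed_price([(0.2, 0.9), (0.4, 0.5), (0.1, 0.3)]))
```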
A celebrated impossibility result by Myerson and Satterthwaite (1983) shows that any truthful mechanism for two-sided markets that maximizes social welfare must run a deficit, resulting in a necessity to relax welfare efficiency and the use of approximation mechanisms. Such mechanisms in general make extensive use of the Bayesian priors. In this work, we investigate a question of increasing theoretical and practical importance: how much prior information is required to design mechanisms with near-optimal approximations?
We consider the prophet inequality problem for (not necessarily bipartite) matching problems with independent edge values, under both edge arrivals and vertex arrivals. We show constant-factor prophet inequalities for the case where the online algorithm has only limited access to the value distributions through samples. First, we give a $16$-approximate prophet inequality for matching in general graphs under edge arrivals that uses only a single sample from each value distribution as prior information. Then, for bipartite matching and (one-sided) vertex arrivals, we show an improved bound of $8$ that also uses just a single sample from each distribution. Finally, we show how to turn our $16$-approximate single-sample prophet inequality into a truthful single-sample mechanism for online bipartite matching with vertex arrivals.
We study the problem of an online advertising system that wants to optimally spend an advertiser's given budget for a campaign across multiple platforms, without knowing the value for showing an ad to the users on those platforms. We model this challenging practical application as a Stochastic Bandits with Knapsacks problem over $T$ rounds of bidding with the set of arms given by the set of distinct bidding $m$-tuples, where $m$ is the number of platforms. We modify the algorithm proposed in Badanidiyuru et al. to extend it to the case of multiple platforms to obtain an algorithm for both the discrete and continuous bid-spaces. Namely, for discrete bid spaces we give an algorithm with regret $O\left(OPT \sqrt {\frac{mn}{B} }+ \sqrt{mn OPT}\right)$, where $OPT$ is the performance of the optimal algorithm that knows the distributions. For continuous bid spaces the regret of our algorithm is $\tilde{O}\left(m^{1/3} \cdot \min\left\{ B^{2/3}, (m T)^{2/3} \right\} \right)$. When restricted to this special case, this bound improves over Sankararaman and Slivkins in the regime $OPT \ll T$, as is the case in the particular application at hand. Second, we show an $\Omega\left(\sqrt{m OPT}\right)$ lower bound for the discrete case and an $\Omega\left( m^{1/3} B^{2/3}\right)$ lower bound for the continuous setting, almost matching the upper bounds. Finally, we use a real-world data set from a large internet online advertising company with multiple ad platforms and show that our algorithms outperform common benchmarks and satisfy the required properties warranted in the real-world application.
We consider the problem of online scheduling on a single machine in order to minimize weighted flow time. The existing algorithms for this problem (STOC '01, SODA '03, FOCS '18) all require exact knowledge of the processing time of each job. This assumption is crucial, as even a slight perturbation of the processing time would lead to polynomial competitive ratio. However, this assumption very rarely holds in real-life scenarios. In this paper, we present the first algorithms for weighted flow time which do not require exact knowledge of the processing times of jobs. Specifically, we introduce the Scheduling with Predicted Processing Time (SPPT) problem, where the algorithm is given a prediction for the processing time of each job, instead of its real processing time. For the case of a constant factor distortion between the predictions and the real processing time, our algorithms match all the best known competitiveness bounds for weighted flow time -- namely $O(\log P), O(\log D)$ and $O(\log W)$, where $P,D,W$ are the maximum ratios of processing times, densities, and weights, respectively. For larger errors, the competitiveness of our algorithms degrades gracefully.
One of the most important barriers toward a widespread use of mobile robots in unstructured and human-populated work environments is the difficulty of planning a safe path. In this paper, we propose to delegate this activity to a human operator who walks in front of the robot, marking the path to be followed with her/his footsteps. The implementation of this approach requires a high degree of robustness in locating the specific person to be followed (the leader). We propose a three-phase approach to fulfil this goal: (1) identification and tracking of the person in the image space, (2) sensor fusion between camera data and laser sensors, and (3) point interpolation with continuous curvature curves. The approach is described in the paper and extensively validated with experimental results.
We study single-sample prophet inequalities (SSPIs), i.e., prophet inequalities where only a single sample from each prior distribution is available. Besides a direct, and optimal, SSPI for the basic single choice problem [Rubinstein et al., 2020], most existing SSPI results were obtained via an elegant, but inherently lossy, reduction to order-oblivious secretary (OOS) policies [Azar et al., 2014]. Motivated by this discrepancy, we develop an intuitive and versatile greedy-based technique that yields SSPIs directly rather than through the reduction to OOSs. Our results can be seen as generalizing and unifying a number of existing results in the area of prophet and secretary problems. Our algorithms significantly improve on the competitive guarantees for a number of interesting scenarios (including general matching with edge arrivals, bipartite matching with vertex arrivals, and certain matroids), and capture new settings (such as budget additive combinatorial auctions). Complementing our algorithmic results, we also consider mechanism design variants. Finally, we analyze the power and limitations of different SSPI approaches by providing a partial converse to the reduction from SSPI to OOS given by Azar et al.
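The single-choice building block cited above [Rubinstein et al., 2020] admits a very short description: use the maximum of the samples as an acceptance threshold, which yields a tight 1/2-approximation. A sketch follows; the matching and matroid results of the paper are substantially more involved than this.

```python
# A minimal sketch of the single-choice single-sample prophet inequality:
# set the threshold to the maximum of the samples and accept the first
# online value exceeding it (Rubinstein et al., 2020).

def single_sample_threshold(samples, online_values):
    """samples[i]: one sample from distribution i (prior information).
    online_values: realized values arriving online, in some order.
    Accepts the first value exceeding the threshold."""
    threshold = max(samples)
    for v in online_values:
        if v > threshold:
            return v  # accept and stop
    return 0.0  # nothing accepted

print(single_sample_threshold([3, 1, 4], [2, 5, 7]))  # accepts 5
```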
We consider the problem of fairly allocating a set of indivisible goods to a set of strategic agents with additive valuation functions. We assume no monetary transfers and, therefore, a mechanism in our setting is an algorithm that takes as input the reported -- rather than the true -- values of the agents. Our main goal is to explore whether there exist mechanisms that have pure Nash equilibria for every instance and, at the same time, provide fairness guarantees for the allocations that correspond to these equilibria. We focus on two relaxations of envy-freeness, namely envy-freeness up to one good (EF1), and envy-freeness up to any good (EFX), and we positively answer the above question. In particular, we study two algorithms that are known to produce such allocations in the non-strategic setting: Round-Robin (EF1 allocations for any number of agents) and a cut-and-choose algorithm of Plaut and Roughgarden [SIAM Journal on Discrete Mathematics, 2020] (EFX allocations for two agents). For Round-Robin we show that all of its pure Nash equilibria induce allocations that are EF1 with respect to the underlying true values, while for the algorithm of Plaut and Roughgarden we show that the corresponding allocations not only are EFX but also satisfy maximin share fairness, something that is not true for this algorithm in the non-strategic setting! Further, we show that a weaker version of the latter result holds for any mechanism for two agents that always has pure Nash equilibria which all induce EFX allocations.
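For the two-agent cut-and-choose template above, here is a brute-force sketch: exhaustively find a partition that is EFX with respect to agent 1's reported additive values, then let agent 2 pick her preferred bundle. Plaut and Roughgarden's actual algorithm is far more efficient and handles general valuations; the exhaustive search here is only for tiny illustrative instances.

```python
from itertools import combinations

# Brute-force cut-and-choose sketch for two agents with additive values:
# find a 2-partition that is EFX for the "cutter" (agent 1), then let the
# "chooser" (agent 2) take her preferred bundle.

def efx_for_cutter(v1, bundle_a, bundle_b):
    """EFX check for additive values v1: after removing any single good
    from the other bundle, the cutter weakly prefers her own bundle."""
    def val(S):
        return sum(v1[g] for g in S)
    for mine, other in ((bundle_a, bundle_b), (bundle_b, bundle_a)):
        for g in other:
            if val(mine) < val(other) - v1[g]:
                return False
    return True

def cut_and_choose(v1, v2):
    """Returns (agent 1's bundle, agent 2's bundle) as sets of good indices.
    An EFX partition for a single additive valuation always exists, so the
    search below succeeds."""
    goods = range(len(v1))
    for r in range(len(v1) + 1):
        for A in combinations(goods, r):
            A, B = set(A), set(goods) - set(A)
            if efx_for_cutter(v1, A, B):
                # Agent 2 picks her preferred bundle; agent 1 gets the rest.
                if sum(v2[g] for g in A) >= sum(v2[g] for g in B):
                    return B, A
                return A, B

print(cut_and_choose([4, 3, 2], [1, 5, 2]))  # ({0}, {1, 2})
```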
We consider the problem of online scheduling on a single machine in order to minimize weighted flow time. The existing algorithms for this problem (STOC '01, SODA '03, FOCS '18) all require exact knowledge of the processing time of each job. This assumption is crucial, as even a slight perturbation of the processing time would lead to polynomial competitive ratio. However, this assumption very rarely holds in real-life scenarios. In this paper, we present the first algorithms for weighted flow time which do not require exact knowledge of the processing times of jobs. Specifically, we introduce the Scheduling with Predicted Processing Time (SPPT) problem, where the algorithm is given a prediction for the processing time of each job, instead of its real processing time. For the case of a constant factor distortion between the predictions and the real processing times, our algorithms match all the best known competitiveness bounds for weighted flow time -- namely $O(\log P), O(\log D)$ and $O(\log W)$, where $P,D,W$ are the maximum ratios of processing times, densities, and weights, respectively. For larger errors, the competitiveness of our algorithms degrades gracefully.
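As a toy illustration of scheduling from predictions (not the SPPT algorithm itself), the sketch below runs, at each unit time step, the pending job with the highest predicted density, i.e., weight divided by predicted processing time. The tuple layout and field names are hypothetical, and processing times are assumed to be positive integers.

```python
def schedule_by_predicted_density(jobs):
    """jobs: list of (arrival, weight, predicted_p, real_p) tuples.
    Returns a dict mapping job index -> completion time; weighted flow
    time is then sum of weight * (completion - arrival) over all jobs."""
    remaining = {i: job[3] for i, job in enumerate(jobs)}  # real work left
    completion = {}
    t = 0
    while remaining:
        pending = [i for i in remaining if jobs[i][0] <= t]
        if not pending:
            t += 1
            continue
        # pick the pending job with highest predicted density (predictions only)
        i = max(pending, key=lambda j: jobs[j][1] / jobs[j][2])
        remaining[i] -= 1            # run job i for one unit of time
        if remaining[i] == 0:
            completion[i] = t + 1
            del remaining[i]
        t += 1
    return completion
```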
In this work we introduce a new class of mechanisms composed of a traditional Generalized Second Price (GSP) auction and a fair division scheme, in order to achieve some desired level of fairness between groups of Bayesian strategic advertisers. We propose two mechanisms, $\beta$-Fair GSP and GSP-EFX, that compose GSP with, respectively, an envy-free up to one item, and an envy-free up to any item fair division scheme. The payments of GSP are adjusted in order to compensate advertisers that suffer a loss of efficiency due to the fair division stage. We investigate the strategic learning implications of the deployment of sponsored search auction mechanisms that obey such fairness criteria. We prove that, for both mechanisms, if bidders play so as to minimize their external regret they are guaranteed to reach an equilibrium with good social welfare. We also prove that the mechanisms are budget balanced, so that the payments charged by the traditional GSP mechanism are a good proxy of the total compensation offered to the advertisers. Finally, we evaluate the quality of the allocations through experiments on real-world data.
Submodular maximization is a classic algorithmic problem with multiple applications in data mining and machine learning; there, the growing need to deal with massive instances motivates the design of algorithms balancing the quality of the solution with applicability. For the latter, an important measure is the adaptive complexity, which captures the number of sequential rounds of parallel computation needed by an algorithm to terminate. In this work we obtain the first constant factor approximation algorithm for non-monotone submodular maximization subject to a knapsack constraint with near-optimal $O(\log n)$ adaptive complexity. Low adaptivity by itself, however, is not enough: a crucial feature to account for is represented by the total number of function evaluations (or value queries). Our algorithm asks $\tilde{O}(n^2)$ value queries, but can be modified to run with only $\tilde{O}(n)$ instead, while retaining a low adaptive complexity of $O(\log^2n)$. Besides the above improvement in adaptivity, this is also the first combinatorial approach with sublinear adaptive complexity for the problem and yields algorithms comparable to the state-of-the-art even for the special cases of cardinality constraints or monotone objectives.
Constrained submodular maximization problems encompass a wide variety of applications, including personalized recommendation, team formation, and revenue maximization via viral marketing. The massive instances occurring in modern day applications can render existing algorithms prohibitively slow, while frequently, those instances are also inherently stochastic. Focusing on these challenges, we revisit the classic problem of maximizing a (possibly non-monotone) submodular function subject to a knapsack constraint. We present a simple randomized greedy algorithm that achieves a $5.83$ approximation and runs in $O(n \log n)$ time, i.e., at least a factor $n$ faster than other state-of-the-art algorithms. The robustness of our approach allows us to further transfer it to a stochastic version of the problem. There, we obtain a $9$-approximation to the best adaptive policy, which is the first constant approximation for non-monotone objectives. Experimental evaluation of our algorithms showcases their improved performance on real and synthetic data.
Efficient and truthful mechanisms to price time on remote servers/machines have been the subject of much work in recent years due to the importance of the cloud market. This paper considers online revenue maximization for a unit capacity server, when jobs are non-preemptive, in the Bayesian setting: at each time step, one job arrives, with parameters drawn from an underlying distribution. We design an efficiently computable truthful posted price mechanism, which maximizes revenue in expectation and in retrospect, up to additive error. The prices are posted prior to learning the agent's type, and the computed pricing scheme is deterministic. We also show the pricing mechanism is robust to learning the job distribution from samples, where polynomially many samples suffice to obtain near optimal prices.
Incentive compatibility (IC) is a desirable property for any auction mechanism, including those used in online advertising. However, in real-world applications, practical constraints and complex environments often result in mechanisms that lack incentive compatibility. Recently, several papers investigated the problem of deploying black-box statistical tests to determine if an auction mechanism is incentive compatible by using the notion of IC-Regret that measures the regret of a truthful bidder. Unfortunately, most of those methods are computationally intensive, since they require the execution of many counterfactual experiments.
A celebrated impossibility result by Myerson and Satterthwaite (1983) shows that any truthful mechanism for two-sided markets that maximizes social welfare must run a deficit, resulting in a necessity to relax welfare efficiency and the use of approximation mechanisms. Such mechanisms in general make extensive use of the Bayesian priors. In this work, we investigate a question of increasing theoretical and practical importance: how much prior information is required to design mechanisms with near-optimal approximations? Our first contribution is a more general impossibility result stating that no meaningful approximation is possible without any prior information, expanding the famous impossibility result of Myerson and Satterthwaite. Our second contribution is that one {\em single sample} (one number per item), arguably a minimum-possible amount of prior information, from each seller distribution is sufficient for a large class of two-sided markets. We prove matching upper and lower bounds on the best approximation that can be obtained with one single sample for subadditive buyers and additive sellers, regardless of computational considerations. Our third contribution is the design of computationally efficient blackbox reductions that turn any one-sided mechanism into a two-sided mechanism with a small loss in the approximation, while using only one single sample from each seller. On the way, our blackbox-type mechanisms deliver several interesting positive results in their own right, often beating even the state of the art that uses full prior information.
We develop and extend a line of recent work on the design of mechanisms for two-sided markets. The markets we consider consist of buyers and sellers of a number of items, and the aim of a mechanism is to improve the social welfare by arranging purchases and sales of the items. A mechanism is given prior distributions on the agents’ valuations of the items, but not the actual valuations; thus, the aim is to maximise the expected social welfare over these distributions. As in previous work, we are interested in the worst-case ratio between the social welfare achieved by a truthful mechanism and the best social welfare possible. Our main result is an incentive compatible and budget balanced constant-factor approximation mechanism in a setting where buyers have XOS valuations and sellers’ valuations are additive. This is the first such approximation mechanism for a two-sided market setting where the agents have combinatorial valuation functions. To achieve this result, we introduce a more general kind of demand query that seems to be needed in this situation. In the simpler case that sellers have unit supply (each having just one item to sell), we give a new mechanism whose welfare guarantee improves on a recent one in the literature. We also introduce a more demanding version of the strong budget balance (SBB) criterion, aimed at ruling out certain “unnatural” transactions satisfied by SBB. We show that the stronger version is satisfied by our mechanisms.
A fair clustering instance is given a data set $A$ in which every point is assigned some color. Colors correspond to various protected attributes such as sex, ethnicity, or age. A fair clustering is an instance where membership of points in a cluster is uncorrelated with the coloring of the points. Of particular interest is the case where all colors are equally represented. If we have exactly two colors, Chierichetti, Kumar, Lattanzi and Vassilvitskii (NIPS 2017) showed that various $k$-clustering objectives admit a constant factor approximation. Since then, a number of follow-up works have attempted to extend this result to the multi-color case, though so far, the only known results either fail to achieve a constant factor approximation, apply only to special clustering objectives such as $k$-center, yield bicriteria approximations, or require $k$ to be constant. In this paper, we present a simple reduction from unconstrained $k$-clustering to fair $k$-clustering for a large range of clustering objectives including $k$-median, $k$-means, and $k$-center. The reduction loses only a constant factor in the approximation guarantee, marking the first true constant factor approximation for many of these problems.
Consider a gambler who observes a sequence of independent, non-negative random numbers and is allowed to stop the sequence at any time, claiming a reward equal to the most recent observation. The famous prophet inequality of Krengel, Sucheston, and Garling asserts that a gambler who knows the distribution of each random variable can achieve at least half as much reward, in expectation, as a "prophet" who knows the sampled values of each random variable and can choose the largest one. We generalize this result to the setting in which the gambler and the prophet are allowed to make more than one selection, subject to a matroid constraint. We show that the gambler can still achieve at least half as much reward as the prophet; this result is the best possible, since it is known that the ratio cannot be improved even in the original prophet inequality, which corresponds to the special case of rank-one matroids. Generalizing the result still further, we show that under an intersection of $p$ matroid constraints, the prophet's reward exceeds the gambler's by a factor of at most $O(p)$, and this factor is also tight.
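The rank-one special case admits a one-line strategy whose guarantee is easy to check empirically: accept the first value above the threshold $\tau = \mathbb{E}[\max_t X_t]/2$. Below is a minimal Monte-Carlo sketch of this classic threshold rule, assuming uniform value distributions for illustration.

```python
import random

def gambler(threshold, draws):
    """Accept the first draw that clears the threshold (reward 0 if none does)."""
    for x in draws:
        if x >= threshold:
            return x
    return 0.0

n, trials = 5, 20000
draw = lambda: [random.uniform(0, 1) for _ in range(n)]
tau = sum(max(draw()) for _ in range(trials)) / trials / 2   # estimate E[max]/2
alg = sum(gambler(tau, draw()) for _ in range(trials)) / trials
opt = 2 * tau                                                # estimate of E[max]
print(alg, opt)   # the prophet inequality predicts alg >= opt / 2
```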
In the design and analysis of revenue-maximizing auctions, auction performance is typically measured with respect to a prior distribution over inputs. The most obvious source for such a distribution is past data. The goal of this paper is to understand how much data is necessary and sufficient to guarantee near-optimal expected revenue.
We design simple mechanisms to approximate the Gains from Trade (GFT) in two-sided markets with multiple unit-supply sellers and multiple unit-demand buyers. A classical impossibility result by Myerson and Satterthwaite showed that even with only one seller and one buyer, no Bayesian Incentive Compatible (BIC), Individually Rational (IR), and Budget-Balanced (BB) mechanism can achieve full GFT (trade whenever buyer's value is higher than the seller's cost). The same paper also proposed the ``second-best'' mechanism that maximizes the GFT subject to BIC, IR, and BB constraints, which is unfortunately rather complex for even the single-seller single-buyer case. Our mechanism is simple, BIC, IR, and BB and achieves 1/2 of the optimal GFT among all BIC, IR, and BB mechanisms. The result holds for arbitrary distributions of the buyers' and sellers' values and can accommodate any downward-closed feasibility constraints over the allocations. The analysis of our mechanism is facilitated by extending the Cai-Weinberg-Devanur duality framework to two-sided markets.
We consider reallocation problems in settings where the initial endowment of each agent consists of a subset of the resources. The private information of the players is their value for every possible subset of the resources. The goal is to redistribute resources among agents to maximize efficiency. Monetary transfers are allowed, but participation is voluntary.
We study the bilateral trade problem: one seller, one buyer and a single, indivisible item for sale. It is well known that there is no fully-efficient and incentive compatible mechanism for this problem that maintains a balanced budget. We design simple and robust mechanisms that obtain approximate efficiency with these properties. We show that even minimal use of statistical data can yield good approximation results. Finally, we demonstrate how a mechanism for this simple bilateral-trade problem can be used as a black-box for constructing mechanisms in more general environments.
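One concrete example of such a simple mechanism, from the earlier fixed-price literature (Blumrosen and Dobzinski), is to post the median of the seller's cost distribution and trade whenever both sides weakly gain at that price. A minimal sketch, assuming uniform valuations on $[0,1]$:

```python
import random

def fixed_price_trade(price, seller_cost, buyer_value):
    """Trade happens iff both sides weakly gain at the posted price;
    a single posted price is trivially truthful for both agents."""
    return buyer_value >= price >= seller_cost

price = 0.5                      # median of U[0,1] seller costs
welfare = opt = 0.0
for _ in range(100_000):
    s, b = random.random(), random.random()
    opt += max(b, s)             # welfare of the efficient allocation
    welfare += b if fixed_price_trade(price, s, b) else s
print(welfare / opt)             # a constant fraction of the optimum
```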
The goal of fair division is to distribute resources among competing players in a “fair” way. Envy-freeness is the most extensively studied fairness notion in fair division. Envy-free allocations do not always exist with indivisible goods, motivating the study of relaxed versions of envy-freeness. We study the envy-freeness up to any good (EFX) property, which states that no player prefers the bundle of another player following the removal of any single good, and prove the first general results about this property. We use the leximin solution to show existence of EFX allocations in several contexts, sometimes in conjunction with Pareto optimality. For two players with valuations obeying a mild assumption, one of these results provides stronger guarantees than the currently deployed algorithm on Spliddit, a popular fair division website. Unfortunately, finding the leximin solution can require exponential time. We show that this is necessary by proving an exponential lower bound on the number of value queries needed to identify an EFX allocation, even for two players with identical valuations. We consider both additive and more general valuations, and our work suggests that there is a rich landscape of problems to explore in the fair division of indivisible goods with different classes of player valuations.
Submodular maximization generalizes many fundamental problems in discrete optimization, including Max-Cut in directed/undirected graphs, maximum coverage, maximum facility location, and marketing over social networks. In this paper we consider the problem of maximizing any submodular function subject to $d$ knapsack constraints, where $d$ is a fixed constant. We establish a strong relation between the discrete problem and its continuous relaxation, obtained through extension by expectation of the submodular function. Formally, we show that, for any nonnegative submodular function, an $\alpha$-approximation algorithm for the continuous relaxation implies a randomized $(\alpha - \varepsilon)$-approximation algorithm for the discrete problem. We use this relation to obtain an $(e^{-1} - \varepsilon)$-approximation for the problem, and a nearly optimal $(1 - e^{-1} - \varepsilon)$-approximation ratio for the monotone case, for any $\varepsilon > 0$. We further show that the probabilistic domain defined by a continuous solution can be reduced to yield a polynomial-size domain, given an oracle for the extension by expectation. This leads to a deterministic version of our technique.
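For reference, the extension by expectation used above (also known as the multilinear extension) of a set function $f : 2^N \to \mathbb{R}_+$ evaluates $f$ on a random set $R(x)$ that includes each element $e \in N$ independently with probability $x_e$:
\[
F(x) \;=\; \mathbb{E}\bigl[f(R(x))\bigr] \;=\; \sum_{S \subseteq N} f(S) \prod_{e \in S} x_e \prod_{e \notin S} (1 - x_e).
\]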
A central object in optimal stopping theory is the single-choice prophet inequality for independent, identically distributed random variables: given a sequence of random variables $X_1, \ldots, X_n$ drawn independently from a distribution $F$, the goal is to choose a stopping time $\tau$ so as to maximize $\alpha$ such that for all distributions $F$ we have $\mathbb{E}[X_\tau] \geq \alpha \cdot \mathbb{E}[\max_t X_t]$. What makes this problem challenging is that the decision whether $\tau = t$ may only depend on the values of the random variables $X_1, \ldots, X_t$ and on the distribution $F$. For a long time the best known bound for the problem had been $\alpha \geq 1 - 1/e \approx 0.632$, but quite recently a tight bound of $\alpha \approx 0.745$ was obtained. The case where $F$ is unknown, such that the decision whether $\tau = t$ may depend only on the values of the random variables $X_1, \ldots, X_t$, is equally well motivated but has received much less attention. A straightforward guarantee for this case of $\alpha \geq 1/e \approx 0.368$ can be derived from the solution to the secretary problem, where an arbitrary set of values arrive in random order and the goal is to maximize the probability of selecting the largest value. We show that this bound is in fact tight. We then investigate the case where the stopping time may additionally depend on a limited number of samples from $F$, and show that even with $o(n)$ samples $\alpha \leq 1/e$. On the other hand, $n$ samples allow for a significant improvement, while $O(n^2)$ samples are equivalent to knowledge of the distribution: specifically, with $n$ samples $\alpha \geq 1 - 1/e \approx 0.632$ and $\alpha \leq \ln(2) \approx 0.693$, and with $O(n^2)$ samples $\alpha \geq 0.745 - \varepsilon$ for any $\varepsilon > 0$.
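The secretary-based strategy that yields the $\alpha \geq 1/e$ baseline is easy to state; here is a minimal sketch (observe roughly $n/e$ values without stopping, then accept the first running maximum):

```python
import math

def secretary_rule(values):
    """Observe the first ~n/e values, then accept the first value that
    beats everything seen so far (the last value if none does). On
    i.i.d. draws from an unknown F this gives the 1/e baseline."""
    n = len(values)
    k = int(n / math.e)
    best_seen = max(values[:k]) if k > 0 else float("-inf")
    for x in values[k:-1]:
        if x > best_seen:
            return x
    return values[-1]
```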
We study anonymous posted price mechanisms for combinatorial auctions in a Bayesian framework. In a posted price mechanism, item prices are posted, then the consumers approach the seller sequentially in an arbitrary order, each purchasing her favorite bundle from among the unsold items at the posted prices. These mechanisms are simple, transparent and trivially dominant strategy incentive compatible (DSIC). We show that when agent preferences are fractionally subadditive (which includes all submodular functions), there always exist prices that, in expectation, obtain at least half of the optimal welfare. Our result is constructive: given black-box access to a combinatorial auction algorithm A, sample access to the prior distribution, and appropriate query access to the sampled valuations, one can compute, in polytime, prices that guarantee at least half of the expected welfare of A. As a corollary, we obtain the first polytime (in n and m) constant-factor DSIC mechanism for Bayesian submodular combinatorial auctions, given access to demand query oracles. Our results also extend to valuations with complements, where the approximation factor degrades linearly with the level of complementarity.
We consider time-of-use pricing as a technique for matching supply and demand of temporal resources with the goal of maximizing social welfare. Relevant examples include energy, computing resources on a cloud computing platform, and charging stations for electric vehicles, among many others. A client/job in this setting has a window of time during which he needs service, and a particular value for obtaining it. We assume a stochastic model for demand, where each job materializes with some probability via an independent Bernoulli trial. Given a per-time-unit pricing of resources, any realized job will first try to get served by the cheapest available resource in its window and, failing that, will try to find service at the next cheapest available resource, and so on. Thus, the natural stochastic fluctuations in demand have the potential to lead to cascading overload events. Our main result shows that setting prices so as to optimally handle the expected demand works well: with high probability, when the actual demand is instantiated, the system is stable and the expected value of the jobs served is very close to that of the optimal offline algorithm.
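A minimal sketch of the demand model described above (not the paper's pricing algorithm): each job materializes via an independent Bernoulli trial and then takes the cheapest available slot in its window, cascading to pricier slots when the cheap ones are gone. The data layout and names are illustrative.

```python
def serve_jobs(prices, jobs, rng):
    """prices: dict mapping slot -> price; jobs: list of
    (window_slots, prob, value) tuples; rng: a random.Random instance.
    Returns the total value of jobs that found a free slot."""
    taken = set()
    served_value = 0.0
    for window, prob, value in jobs:
        if rng.random() >= prob:
            continue                       # job did not materialize
        free = [t for t in window if t not in taken]
        if free:
            slot = min(free, key=lambda t: prices[t])  # cheapest free slot
            taken.add(slot)
            served_value += value
    return served_value
```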
We consider the problem of maximizing a nonnegative submodular set function $f:2^N \rightarrow {\mathbb R}_+$ over a ground set $N$ subject to a variety of packing-type constraints including (multiple) matroid constraints, knapsack constraints, and their intersections. In this paper we develop a general framework that allows us to derive a number of new results, in particular, when $f$ may be a nonmonotone function. Our algorithms are based on (approximately) maximizing the multilinear extension $F$ of $f$ over a polytope $P$ that represents the constraints, and then effectively rounding the fractional solution. Although this approach has been used quite successfully, it has been limited in some important ways. We overcome these limitations as follows. First, we give constant factor approximation algorithms to maximize $F$ over a downward-closed polytope $P$ described by an efficient separation oracle. Previously this was known only for monotone functions. For nonmonotone functions, a constant factor was known only when the polytope was either the intersection of a fixed number of knapsack constraints or a matroid polytope. Second, we show that contention resolution schemes are an effective way to round a fractional solution, even when $f$ is nonmonotone. In particular, contention resolution schemes for different polytopes can be combined to handle the intersection of different constraints. Via linear programming duality we show that a contention resolution scheme for a constraint is related to the correlation gap of weighted rank functions of the constraint. This leads to an optimal contention resolution scheme for the matroid polytope. Our results provide a broadly applicable framework for maximizing linear and submodular functions subject to independence constraints. We give several illustrative examples. Contention resolution schemes may find other applications.
In this work we study the problem of using machine-learned predictions to improve the performance of online algorithms. We consider two classical problems, ski rental and non-clairvoyant job scheduling, and obtain new online algorithms that use predictions to make their decisions. These algorithms are oblivious to the performance of the predictor, improve with better predictions, but do not degrade much if the predictions are poor.
We consider the problem of allocating indivisible goods fairly among n agents who have additive and submodular valuations for the goods. Our fairness guarantees are in terms of the maximin share, which is defined to be the maximum value that an agent can ensure for herself, if she were to partition the goods into n bundles, and then receive a minimum valued bundle. Since maximin fair allocations (i.e., allocations in which each agent gets at least her maximin share) do not always exist, prior work has focused on approximation results that aim to find allocations in which the value of the bundle allocated to each agent is (multiplicatively) as close to her maximin share as possible. In particular, Procaccia and Wang (2014) along with Amanatidis et al. (2015) have shown that under additive valuations, a 2/3-approximate maximin fair allocation always exists and can be found in polynomial time. We complement these results by developing a simple and efficient algorithm that achieves the same approximation guarantee. Furthermore, we initiate the study of approximate maximin fair division under submodular valuations. Specifically, we show that when the valuations of the agents are nonnegative, monotone, and submodular, then a 0.21-approximate maximin fair allocation is guaranteed to exist. In fact, we show that such an allocation can be efficiently found by using a simple round-robin algorithm. A technical contribution of the article is to analyze the performance of this combinatorial algorithm by employing the concept of multilinear extensions.
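The round-robin protocol mentioned above is simple to state. Below is a minimal sketch for the additive case, where `value(agent, good)` is a hypothetical per-good valuation oracle; the paper's submodular analysis is more delicate than this skeleton suggests.

```python
def round_robin(n_agents, goods, value):
    """Agents take turns picking their favorite remaining good;
    value(agent, good) returns the agent's value for a single good."""
    bundles = [[] for _ in range(n_agents)]
    remaining = set(goods)
    agent = 0
    while remaining:
        g = max(remaining, key=lambda x: value(agent, x))
        bundles[agent].append(g)
        remaining.remove(g)
        agent = (agent + 1) % n_agents
    return bundles
```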
The need for real-time analysis of rapidly produced data streams (e.g., video and image streams) motivated the design of streaming algorithms that can efficiently extract and summarize useful information from massive data "on the fly." Such problems can often be reduced to maximizing a submodular set function subject to various constraints. While efficient streaming methods have been recently developed for monotone submodular maximization, in a wide range of applications, such as video summarization, the underlying utility function is non-monotone, and there are often various constraints imposed on the optimization problem to consider privacy or personalization. We develop the first efficient single pass streaming algorithm, Streaming Local Search, that for any streaming monotone submodular maximization algorithm with approximation guarantee $\alpha$ under a collection of independence systems $\mathcal{I}$, provides a constant $1/\big(1+2/\sqrt{\alpha}+1/\alpha+2d(1+\sqrt{\alpha})\big)$ approximation guarantee for maximizing a non-monotone submodular function under the intersection of $\mathcal{I}$ and $d$ knapsack constraints. Our experiments show that for video summarization, our method runs more than 1700 times faster than previous work, while maintaining practically the same performance.
Suppose we are given a submodular function $f$ over a set of elements, and we want to maximize its value subject to certain constraints. Good approximation algorithms are known for such problems under both monotone and non-monotone submodular functions. We consider these problems in a stochastic setting, where elements are not all active and we only get value from active elements. Each element $e$ is active independently with some known probability $p_e$, but we don't know the element's status a priori: we find it out only when we probe the element $e$. Moreover, the sequence of elements we probe must satisfy a given prefix-closed constraint, e.g., matroid, orienteering, deadline, precedence, or any downward-closed constraint. In this paper we study the gap between adaptive and non-adaptive strategies for $f$ being a submodular or a fractionally subadditive (XOS) function. If this gap is small, we can focus on finding good non-adaptive strategies instead, which are easier to find as well as to represent. We show that the adaptivity gap is a constant for monotone and non-monotone submodular functions, and logarithmic for XOS functions of small width. These bounds are nearly tight. Our techniques show new ways of arguing about the optimal adaptive decision tree for stochastic optimization problems.
This paper develops a general approach, rooted in statistical learning theory, to learning an approximately revenue-maximizing auction from data. We introduce $t$-level auctions to interpolate between simple auctions, such as welfare maximization with reserve prices, and optimal auctions, thereby balancing the competing demands of expressivity and simplicity. We prove that such auctions have small representation error, in the sense that for every product distribution $F$ over bidders' valuations, there exists a $t$-level auction with small $t$ and expected revenue close to optimal. We show that the set of $t$-level auctions has modest pseudo-dimension (for polynomial $t$) and therefore leads to small learning error. One consequence of our results is that, in arbitrary single-parameter settings, one can learn a mechanism with expected revenue arbitrarily close to optimal from a polynomial number of samples.
In this paper, we consider a very general model for exploration-exploitation tradeoff which allows arbitrary concave rewards and convex constraints on the decisions across time, in addition to the customary limitation on the time horizon. This model subsumes the classic multi-armed bandit (MAB) model, and the Bandits with Knapsacks (BwK) model of Badanidiyuru et al. [2013]. We also consider an extension of this model to allow linear contexts, similar to the linear contextual extension of the MAB model. We demonstrate that a natural and simple extension of the UCB family of algorithms for MAB provides a polynomial time algorithm that has near-optimal regret guarantees for this substantially more general model, and matches the bounds provided by Badanidiyuru et al. [2013] for the special case of BwK, which is quite surprising. We also provide computationally more efficient algorithms by establishing interesting connections between this problem and other well studied problems/algorithms such as the Blackwell approachability problem, online convex optimization, and the Frank-Wolfe technique for convex optimization.
We provide prophet inequality algorithms for online weighted matching in general (non-bipartite) graphs, under two well-studied arrival models, namely edge arrival and vertex arrival. The weight of each edge is drawn independently from an a-priori known probability distribution. Under edge arrival, the weight of each edge is revealed upon arrival, and the algorithm decides whether to include it in the matching or not. Under vertex arrival, the weights of all edges from the newly arriving vertex to all previously arrived vertices are revealed, and the algorithm decides which of these edges, if any, to include in the matching. To study these settings, we introduce a novel unified framework of batched prophet inequalities that captures online settings where elements arrive in batches; in particular it captures matching under the two aforementioned arrival models. Our algorithms rely on the construction of suitable online contention resolution schemes (OCRS). We first extend the framework of OCRS to batched-OCRS, we then establish a reduction from batched prophet inequality to batched OCRS, and finally we construct batched OCRSs with selectable ratios of 0.337 and 0.5 for edge and vertex arrival models, respectively. Both results improve the state of the art for the corresponding settings. For vertex arrival, our result is tight. Interestingly, pricing-based prophet inequalities with comparable competitive ratios are unknown.
The secretary problem or the game of Googol are classic models for online selection problems that have received significant attention in the last five decades. In this paper we consider a variant of the problem and explore its connections to data-driven online selection. Specifically, we are given n cards with arbitrary nonnegative numbers written on both sides. The cards are randomly placed on n consecutive positions on a table, and for each card, the visible side is also selected at random. The player sees the visible side of all cards and wants to select the card with the maximum hidden value. To this end, the player flips the first card, sees its hidden value and decides whether to pick it or drop it and continue with the next card. We study algorithms for two natural objectives. In the first one, similar to the secretary problem, the player wants to maximize the probability of selecting the maximum hidden value. We show that this can be done with probability at least 0.45292. In the second objective, similar to the prophet inequality, the player wants to maximize the expectation of the selected hidden value. Here we show a guarantee of at least 0.63518 with respect to the expected maximum hidden value. Our algorithms result from combining three basic strategies. One is to stop whenever we see a value larger than the initial n visible numbers. The second one is to stop the first time the last flipped card's value is the largest of the currently n visible numbers on the table. And the third one is similar to the latter, but to stop it additionally requires that the last flipped value is larger than the value on the other side of its card. We apply our results to the prophet secretary problem with unknown distributions, but with access to a single sample from each distribution. In particular, our guarantee improves upon 1 − 1/e for this problem, which is the currently best known guarantee and only works for the i.i.d. prophet inequality with samples.
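As an illustration, the first of the three basic strategies can be written in a few lines; the argument names are hypothetical and this sketch takes the last card's value when no hidden value beats the initial bar.

```python
def stop_above_initial(visible, hidden_in_flip_order):
    """First basic strategy: flip cards in order and stop as soon as a
    hidden value exceeds all n initially visible numbers."""
    bar = max(visible)
    for h in hidden_in_flip_order[:-1]:
        if h > bar:
            return h
    return hidden_in_flip_order[-1]
```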
We study online mechanisms for preemptive scheduling with deadlines, with the goal of maximizing the total value of completed jobs. This problem is fundamental to deadline-aware cloud scheduling, but there are strong lower bounds even for the algorithmic problem without incentive constraints. However, these lower bounds can be circumvented under the natural assumption of deadline slackness, i.e., that there is a guaranteed lower bound s > 1 on the ratio between a job's size and the time window in which it can be executed. In this paper, we construct a truthful scheduling mechanism with a constant competitive ratio, given slackness s > 1. Furthermore, we show that if s is large enough then we can construct a mechanism that also satisfies a commitment property: it can be determined whether or not a job will finish, and the requisite payment if so, well in advance of each job's deadline. This is notable because, in practice, users with strict deadlines may find it unacceptable to discover only very close to their deadline that their job has been rejected.
We study the question of fair clustering under the {\em disparate impact} doctrine, where each protected class must have approximately equal representation in every cluster. We formulate the fair clustering problem under both the k-center and the k-median objectives, and show that even with two protected classes the problem is challenging, as the optimum solution can violate common conventions---for instance a point may no longer be assigned to its nearest cluster center! En route we introduce the concept of fairlets, which are minimal sets that satisfy fair representation while approximately preserving the clustering objective. We show that any fair clustering problem can be decomposed into first finding good fairlets, and then using existing machinery for traditional clustering algorithms. While finding good fairlets can be NP-hard, we proceed to obtain efficient approximation algorithms based on minimum cost flow. We empirically demonstrate the \emph{price of fairness} by quantifying the value of fair clustering on real-world datasets with sensitive attributes.
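To convey the fairlet idea without the paper's min-cost-flow machinery, here is a deliberately naive sketch for the balanced two-color case: greedily pair each red point with its nearest unmatched blue point, so each pair is a fairlet whose representative can then be fed to any standard k-clustering algorithm. It assumes equally many red and blue points, and the greedy pairing is only an illustration, not the paper's construction.

```python
def greedy_fairlets(reds, blues, dist):
    """Pair each red point with its nearest unmatched blue point;
    each (red, blue) pair is a fairlet with perfect color balance.
    Assumes len(reds) == len(blues) and hashable points."""
    unmatched = set(blues)
    fairlets = []
    for r in reds:
        b = min(unmatched, key=lambda x: dist(r, x))
        unmatched.remove(b)
        fairlets.append((r, b))
    return fairlets
```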
An extensive body of recent work studies the welfare guarantees of simple and prevalent combinatorial auction formats, such as selling m items via simultaneous second price auctions (SiSPAs) [1], [2], [3]. These guarantees hold even when the auctions are repeatedly executed and the players use no-regret learning algorithms to choose their actions. Unfortunately, off-the-shelf no-regret learning algorithms for these auctions are computationally inefficient as the number of actions available to the players becomes exponential. We show that this obstacle is inevitable: there are no polynomial-time no-regret learning algorithms for SiSPAs, unless RP ⊇ NP, even when the bidders are unit-demand. Our lower bound raises the question of how good outcomes polynomially-bounded bidders may discover in such auctions. To answer this question, we propose a novel concept of learning in auctions, termed "no-envy learning." This notion is founded upon Walrasian equilibrium, and we show that it is both efficiently implementable and results in approximately optimal welfare, even when the bidders have valuations from the broad class of fractionally subadditive (XOS) valuations (assuming demand oracle access to the valuations) or coverage valuations (even without demand oracles). No-envy learning outcomes are a relaxation of no-regret learning outcomes, which maintain their approximate welfare optimality while endowing them with computational tractability. Our positive and negative results extend to several auction formats that have been studied in the literature via the smoothness paradigm. Our positive results for XOS valuations are enabled by a novel Follow-The-Perturbed-Leader algorithm for settings where the number of experts and states of nature are both infinite, and the payoff function of the learner is non-linear. We show that this algorithm has applications outside of auction settings, establishing significant gains in a recent application of no-regret learning in security games. Our efficient learning result for coverage valuations is based on a novel use of convex rounding schemes and a reduction to online convex optimization.
What does it mean for an algorithm to be biased? In U.S. law, unintentional bias is encoded via disparate impact, which occurs when a selection process has widely different outcomes for different groups, even as it appears to be neutral. This legal determination hinges on a definition of a protected class (ethnicity, gender) and an explicit description of the process.
Budget feasible mechanisms, recently initiated by Singer (FOCS 2010), extend algorithmic mechanism design problems to a realistic setting with a budget constraint. We consider the problem of designing truthful budget feasible mechanisms for monotone submodular functions: We give a randomized mechanism with an approximation ratio of 7.91 (improving on the previous best-known result 233.83), and a deterministic mechanism with an approximation ratio of 8.34. We also study the knapsack problem, which is a special submodular function, give a 2 + √2 approximation deterministic mechanism (improving on the previous best-known result 5), and a 3 approximation randomized mechanism. We provide similar results for an extended knapsack problem with heterogeneous items, where items are divided into groups and one can pick at most one item from each group. Finally we show a lower bound of 1 + √2 for the approximation ratio of deterministic mechanisms and 2 for randomized mechanisms for knapsack, as well as the general monotone submodular functions. Our lower bounds are unconditional, and do not rely on any computational or complexity assumptions.
It is known that greedy methods perform well for maximizing monotone submodular functions. At the same time, such methods perform poorly in the face of non-monotonicity. In this paper, we show - arguably, surprisingly - that invoking the classical greedy algorithm $O(\sqrt{k})$-times leads to the (currently) fastest deterministic algorithm, called Repeated Greedy, for maximizing a general submodular function subject to $k$-independent system constraints. Repeated Greedy achieves a $(1 + O(1/\sqrt{k}))k$-approximation using $O(nr\sqrt{k})$ function evaluations (here, $n$ and $r$ denote the size of the ground set and the maximum size of a feasible solution, respectively). We then show that by a careful sampling procedure, we can run the greedy algorithm only once and obtain the (currently) fastest randomized algorithm, called Sample Greedy, for maximizing a submodular function subject to $k$-extendible system constraints (a subclass of $k$-independent system constraints). Sample Greedy achieves $(k + 3)$-approximation with only $O(nr/k)$ function evaluations. Finally, we derive an almost matching lower bound, and show that no polynomial time algorithm can have an approximation ratio smaller than $k + 1/2 - \varepsilon$. To further support our theoretical results, we compare the performance of Repeated Greedy and Sample Greedy with prior art in a concrete application (movie recommendation). We consistently observe that while Sample Greedy achieves practically the same utility as the best baseline, it performs at least two orders of magnitude faster.
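A skeleton of the Repeated Greedy idea (omitting the paper's pruning and sampling refinements) looks as follows; here `f` is the submodular objective (which must accept the empty list) and `feasible` is a hypothetical independence-system oracle.

```python
def greedy(f, ground, feasible):
    """Classic greedy: repeatedly add the feasible element with the
    largest positive marginal gain."""
    S, candidates = [], set(ground)
    while candidates:
        gains = [(f(S + [e]) - f(S), e) for e in candidates
                 if feasible(S + [e])]
        if not gains:
            break
        gain, e = max(gains, key=lambda t: t[0])
        if gain <= 0:
            break
        S.append(e)
        candidates.remove(e)
    return S

def repeated_greedy(f, ground, feasible, rounds):
    """Rerun greedy on the ground set minus previous solutions and keep
    the best solution found; the paper uses about sqrt(k) rounds."""
    remaining, best = set(ground), []
    for _ in range(rounds):
        S = greedy(f, remaining, feasible)
        if f(S) > f(best):
            best = S
        remaining -= set(S)
        if not remaining:
            break
    return best
```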
In this paper we investigate the problem of measuring end-to-end Incentive Compatibility (IC) regret given black-box access to an auction mechanism. Our goal is to 1) compute an estimate for IC regret in an auction, 2) provide a measure of certainty around the estimate of IC regret, and 3) minimize the time it takes to arrive at an accurate estimate. We consider two main problems, with different informational assumptions: In the advertiser problem the goal is to measure IC regret for some known valuation $v$, while in the more general demand-side platform (DSP) problem we wish to determine the worst-case IC regret over all possible valuations. The problems are naturally phrased in an online learning model and we design algorithms for both problems. We give an online learning algorithm whose error in determining IC regret vanishes as the number of time steps grows, for both the advertiser problem and the DSP problem; the rates depend on the finite set of bids $B$, the number of time steps $T$, and the number $n$ of auctions per time step. For the DSP problem, we also consider stronger IC regret estimation and extend our algorithm to achieve better IC regret error. We validate the theoretical results using simulations with Generalized Second Price (GSP) auctions, which are known to not be incentive compatible and thus have strictly positive IC regret.
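For the advertiser problem, the black-box estimation task can be sketched in a few lines: estimate the expected utility of each bid by repeated auction executions and compare the best bid against bidding the known valuation $v$. The sampler `run_auction` and the assumption that $v$ lies in the finite bid set are illustrative.

```python
def estimate_ic_regret(v, bids, run_auction, rounds):
    """run_auction(bid) executes one auction and returns the realized
    utility for that bid; v must itself be in `bids`. IC regret is the
    gap between the best bid's average utility and truthful bidding."""
    avg = {b: sum(run_auction(b) for _ in range(rounds)) / rounds
           for b in bids}
    return max(avg.values()) - avg[v]
```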
During the last decade, the matroid secretary problem (MSP) became one of the most prominent classes of online selection problems. The interest in MSP is twofold: on the one hand, there are many interesting applications of MSP, and on the other hand, there is strong hope that MSP admits $O(1)$-competitive algorithms, which is the claim of the well-known matroid secretary conjecture. Partially linked to its numerous applications in online auctions, substantial interest arose also in the study of nonlinear versions of MSP, with a focus on the submodular MSP (SMSP). The fact that submodularity captures the property of diminishing returns, a very natural property for valuation functions, is a key reason for the interest in SMSP. So far, $O(1)$-competitive algorithms have been obtained for SMSP over some basic matroid classes. This created some hope that, analogously to the matroid secretary conjecture, one may even obtain $O(1)$-competitive algorithms for SMSP over any matroid. However, up to now, most questions related to SMSP remained open, including whether SMSP may be substantially more difficult than MSP and, more generally, to what extent MSP and SMSP are related. Our goal is to address these points by presenting general black-box reductions from SMSP to MSP. In particular, we show that any $O(1)$-competitive algorithm for MSP, even restricted to a particular matroid class, can be transformed in a black-box way to an $O(1)$-competitive algorithm for SMSP over the same matroid class. This implies that the matroid secretary conjecture is equivalent to the same conjecture for SMSP. Hence, in this sense SMSP is not harder than MSP. Also, to find $O(1)$-competitive algorithms for SMSP over a particular matroid class, it suffices to consider MSP over the same matroid class. Using our reductions we obtain many first and improved $O(1)$-competitive algorithms for SMSP over various matroid classes by leveraging known algorithms for MSP. Moreover, our reductions imply an $O(\log\log({rank}))$-competitive algorithm for SMSP, thus matching the currently best asymptotic algorithm for MSP, and substantially improving on the previously best $O(\log({rank}))$-competitive algorithm for SMSP.
Nodes in real-world networks, such as social, information or technological networks, organize into communities where edges appear with high concentration among the members of the community. Identifying communities in networks has proven to be a challenging task mainly due to a plethora of definitions of a community, intractability of algorithms, issues with evaluation and the lack of a reliable gold-standard ground-truth.
Bilateral trade, a fundamental topic in economics, models the problem of intermediating between two strategic agents, a seller and a buyer, willing to trade a good for which they hold private valuations. Despite the simplicity of this problem, a classical result by Myerson and Satterthwaite (1983) affirms the impossibility of designing a mechanism which is simultaneously efficient, incentive compatible, individually rational, and budget balanced. This impossibility result fostered an intense investigation of meaningful trade-offs between these desired properties. Much work has focused on approximately efficient fixed-price mechanisms, e.g., Blumrosen and Dobzinski (2014; 2016), Colini-Baldeschi et al. (2016), which have been shown to fully characterize strong budget balanced and ex-post individually rational direct revelation mechanisms. All these results, however, either assume some knowledge on the priors of the seller/buyer valuations, or a black box access to some samples of the distributions, as in D{\"u}tting et al. (2021). In this paper, we cast for the first time the bilateral trade problem in a regret minimization framework over rounds of seller/buyer interactions, with no prior knowledge on the private seller/buyer valuations. Our main contribution is a complete characterization of the regret regimes for fixed-price mechanisms with different models of feedback and private valuations, using as benchmark the best fixed price in hindsight. More precisely, we prove the following bounds on the regret: $\bullet$ $\widetilde{\Theta}(\sqrt{T})$ for full-feedback (i.e., direct revelation mechanisms); $\bullet$ $\widetilde{\Theta}(T^{2/3})$ for realistic feedback (i.e., posted-price mechanisms) and independent seller/buyer valuations with bounded densities; $\bullet$ $\Theta(T)$ for realistic feedback and seller/buyer valuations with bounded densities; $\bullet$ $\Theta(T)$ for realistic feedback and independent seller/buyer valuations; $\bullet$ $\Theta(T)$ for the adversarial setting.
We study mechanisms for the allocation of goods among agents, where agents have no incentive to lie about their true values (incentive compatible) and no agent will seek to exchange outcomes with another (envy-free). Mechanisms satisfying each requirement separately have been studied extensively, but there are few results on mechanisms achieving both. We are interested in those allocations for which there exist payments such that the resulting mechanism is simultaneously incentive compatible and envy-free. Cyclic monotonicity is a characterization of incentive compatible allocations, and local efficiency is a characterization of envy-free allocations. We combine the two to give a characterization of allocations which are both incentive compatible and envy-free. We show that even for allocations that admit some payments leading to incentive compatible mechanisms, and other payments leading to envy-free mechanisms, there may not exist any payments for which the mechanism is simultaneously incentive compatible and envy-free. The characterization that we give lets us compute the set of Pareto-optimal mechanisms that trade off envy-freeness for incentive compatibility.
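The computational content of cyclic monotonicity can be made concrete via Rochet's shortest-path construction: IC payments exist iff the allocation graph has no negative cycle, in which case shortest-path distances serve as payments. Below is a hedged sketch for finite type spaces; `types`, `alloc`, and `value` are hypothetical inputs, and the construction is the standard one rather than the paper's combined IC/envy-free characterization.

```python
def ic_payments(types, alloc, value):
    """Rochet-style check for cyclic monotonicity on a finite type space.

    IC payments p must satisfy p[t] - p[s] <= value(t, alloc[t]) -
    value(t, alloc[s]) for all types s, t; such p exist iff the induced
    allocation graph has no negative cycle, and shortest-path distances
    then serve as payments. `alloc` maps a type to an outcome and
    `value(t, o)` is type t's value for outcome o (hypothetical inputs).
    """
    n = len(types)
    # Edge weight s -> t bounds how much more type t may pay than type s.
    w = [[value(types[t], alloc[types[t]]) - value(types[t], alloc[types[s]])
          for t in range(n)] for s in range(n)]
    dist = [0.0] * n  # Bellman-Ford from a virtual source at distance 0
    for _ in range(n):
        updated = False
        for s in range(n):
            for t in range(n):
                if dist[s] + w[s][t] < dist[t] - 1e-12:
                    dist[t] = dist[s] + w[s][t]
                    updated = True
        if not updated:  # converged: the allocation is cyclically monotone
            return dict(zip(types, dist))
    return None  # negative cycle: no IC payments exist

# Two types, two outcomes, a monotone rule: payments exist.
types = ["low", "high"]
alloc = {"low": 0, "high": 1}
value = lambda t, o: {"low": [0, 1], "high": [0, 3]}[t][o]
print(ic_payments(types, alloc, value))  # {'low': -1.0, 'high': 0.0}
```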
We present a general framework for proving polynomial sample complexity bounds for the problem of learning from samples the best auction in a class of "simple" auctions. Our framework captures all of the most prominent examples of "simple" auctions, including anonymous and non-anonymous item and bundle pricings, with either a single or multiple buyers. The technique we propose is to break the analysis of auctions into two natural pieces. First, one shows that the set of allocation rules have large amounts of structure; second, fixing an allocation on a sample, one shows that the set of auctions agreeing with this allocation on that sample have revenue functions with low dimensionality. Our results effectively imply that whenever it's possible to compute a near-optimal simple auction with a known prior, it is also possible to compute such an auction with an unknown prior (given a polynomial number of samples).
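As a toy instance of learning a "simple" auction from samples, the sketch below performs empirical revenue maximization over anonymous posted prices for a single item and a single buyer; the setting and function name are our own illustration, not the paper's framework.

```python
import random

def best_posted_price(samples):
    """Empirical revenue maximization over anonymous posted prices.

    samples: i.i.d. draws of a single buyer's value for one item.
    Candidate prices are the sampled values themselves; we return the
    price maximizing the empirical revenue p * Pr[v >= p].
    """
    best_p, best_rev = 0.0, 0.0
    for p in sorted(set(samples)):
        rev = p * sum(v >= p for v in samples) / len(samples)
        if rev > best_rev:
            best_p, best_rev = p, rev
    return best_p, best_rev

random.seed(0)
p, r = best_posted_price([random.random() for _ in range(1000)])
print(p, r)  # for Uniform[0,1] the optimum is price 1/2 with revenue 1/4
```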
In this paper we consider a mechanism design problem in the context of large-scale crowdsourcing markets such as Amazon's Mechanical Turk, ClickWorker, and CrowdFlower. In these markets, there is a requester who wants to hire workers to accomplish some tasks. Each worker is assumed to give some utility to the requester upon getting hired. Moreover, each worker has a minimum cost that he wants to be paid for getting hired. This minimum cost is assumed to be private information of the workers. The question then is: if the requester has a limited budget, how should one design a direct revelation mechanism that picks the right set of workers to hire in order to maximize the requester's utility? We note that although previous work (Singer (2010), Chen et al. (2011)) has studied this problem, a crucial way in which we deviate from earlier work is the notion of large-scale markets that we introduce in our model. Without the large market assumption, it is known that no mechanism can achieve a competitive ratio better than 0.414 and 0.5 for deterministic and randomized mechanisms respectively (while the best known deterministic and randomized mechanisms achieve approximation ratios of 0.292 and 0.33 respectively). In this paper, we design a budget-feasible mechanism for large markets that achieves a competitive ratio of 1 - 1/e ≃ 0.63. Our mechanism can be seen as a generalization of an alternative way to look at the proportional share mechanism, which is used in all the previous works so far on this problem. Interestingly, we can also show that our mechanism is optimal by showing that no truthful mechanism can achieve a factor better than 1 - 1/e, thus fully resolving this setting. Finally, we consider the more general case of submodular utility functions and give new and improved mechanisms for the case when the market is large.
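For intuition, here is a sketch of a proportional-share-style allocation rule of the kind such mechanisms build on: workers are considered in increasing order of cost per unit utility and added while each one's declared cost is at most her proportional share of the budget. This is our own simplification; the payment rule, which is what makes the real mechanism truthful and budget feasible, is omitted.

```python
def proportional_share_allocation(workers, budget):
    """Allocation rule in the spirit of the proportional share mechanism
    for budget-feasible hiring (a sketch, not the paper's exact mechanism).

    workers: list of (utility, cost) pairs, where costs are the workers'
    declared minimum payments. Workers are added greedily in increasing
    cost-per-utility order while each one's cost stays below her
    proportional share budget * u_i / U of the budget.
    """
    order = sorted(workers, key=lambda w: w[1] / w[0])  # cost per utility
    chosen, total_utility = [], 0.0
    for u, c in order:
        if c <= budget * u / (total_utility + u):
            chosen.append((u, c))
            total_utility += u
        else:
            break
    return chosen

# The two cheapest-per-utility workers fit their proportional shares.
print(proportional_share_allocation([(10, 2), (8, 3), (5, 4), (1, 5)], budget=10))
```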
We study the paradigmatic fair division problem of fairly allocating a divisible good among agents with heterogeneous preferences, commonly known as cake cutting. Classic cake cutting protocols are susceptible to manipulation. Do their strategic outcomes still guarantee fairness? To address this question we adopt a novel algorithmic approach, proposing a concrete computational model and reasoning about the game-theoretic properties of algorithms that operate in this model. Specifically, we show that each protocol in the class of generalized cut and choose (GCC) protocols --- which includes the most important discrete cake cutting protocols --- is guaranteed to have approximate subgame perfect Nash equilibria, or even exact equilibria if the protocol's tie-breaking rule is flexible. We further observe that the (approximate) equilibria of proportional protocols --- which guarantee each of the n agents a 1/n-fraction of the cake --- must be (approximately) proportional, thereby answering the above question in the positive (at least for one common notion of fairness).
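As a concrete member of the GCC class, the following discretized sketch implements two-agent cut-and-choose; the density inputs and grid resolution are illustrative assumptions, not part of the paper's model.

```python
def cut_and_choose(cutter_density, chooser_density, grid=10_000):
    """Two-agent cut-and-choose on [0, 1], the simplest discrete cake
    cutting protocol. Valuations are given as density functions over [0, 1]
    and integrated numerically on a grid (hypothetical inputs).

    The cutter marks the point where her value of the left piece is half
    her total value; the chooser takes whichever piece he values more.
    """
    dx = 1.0 / grid
    cutter_mass = [cutter_density((i + 0.5) * dx) * dx for i in range(grid)]
    half, acc, cut = sum(cutter_mass) / 2.0, 0.0, grid
    for i, m in enumerate(cutter_mass):
        acc += m
        if acc >= half:
            cut = i + 1
            break
    chooser_mass = [chooser_density((i + 0.5) * dx) * dx for i in range(grid)]
    left, right = sum(chooser_mass[:cut]), sum(chooser_mass[cut:])
    return cut * dx, ("left" if left >= right else "right")

# Uniform cutter; chooser who values the right half twice as much.
print(cut_and_choose(lambda t: 1.0, lambda t: 1.0 if t < 0.5 else 2.0))
```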
For revenue and welfare maximization in single-dimensional Bayesian settings, Chawla et al. (STOC'10) recently showed that sequential posted-price mechanisms (SPMs), though simple in form, can perform surprisingly well compared to the optimal mechanisms. In this paper, we give a theoretical explanation of this fact, based on a connection to the notion of correlation gap. Loosely speaking, for auction environments with matroid constraints, we can relate the performance of a mechanism to the expectation of a monotone submodular function over a random set. This random set corresponds to the winner set for the optimal mechanism, which is highly correlated, and to a certain demand set for SPMs, which is independent. The notion of correlation gap of Agrawal et al. (SODA'10) quantifies how much we "lose" in the expectation of the function by ignoring correlation in the random set, and hence bounds our loss in using a certain SPM instead of the optimal mechanism. Furthermore, the correlation gap of a monotone submodular function is known to be small, and it follows that a certain SPM can approximate the optimal mechanism to within a good constant factor. Exploiting this connection, we give a tight analysis of a greedy-based SPM of Chawla et al. for several environments. In particular, we show that it gives an $e/(e-1)$-approximation for matroid environments, asymptotically a $1/(1-1/\sqrt{2\pi k})$-approximation for the important sub-case of $k$-unit auctions, and a $(p+1)$-approximation for environments with $p$-independent set system constraints.
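For concreteness, the sketch below runs an SPM for a $k$-unit auction with a single anonymous price; the greedy SPM analyzed in the paper instead derives buyer-specific prices from the priors, but the sequential take-it-or-leave-it structure is the same.

```python
import random

def spm_k_units(arrival_values, price, k):
    """Sequential posted-price mechanism for a k-unit auction with one
    anonymous price: each arriving buyer gets a take-it-or-leave-it offer
    until the k units are gone. A simplified sketch of the SPM format.
    """
    winners = []
    for i, v in enumerate(arrival_values):
        if len(winners) == k:
            break  # supply exhausted
        if v >= price:
            winners.append(i)
    return winners

# The demand set {i : v_i >= price} is independent across buyers, which is
# exactly where the correlation-gap argument enters.
random.seed(2)
print(spm_k_units([random.random() for _ in range(20)], price=0.75, k=3))
```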
We study a central problem in Algorithmic Mechanism Design: constructing truthful mechanisms for welfare maximization in combinatorial auctions with submodular bidders. Dobzinski, Nisan, and Schapira provided the first mechanism that guarantees a non-trivial approximation ratio of $O(\log^2 m)$ [STOC'06], where $m$ is the number of items. This was subsequently improved to $O(\log m \log\log m)$ [Dobzinski, APPROX'07] and then to $O(\log m)$ [Krysta and Vöcking, ICALP'12]. In this paper we develop the first mechanism that breaks the logarithmic barrier. Specifically, the mechanism provides an approximation ratio of $O(\sqrt{\log m})$. Similarly to previous constructions, our mechanism uses polynomially many value and demand queries, and in fact provides the same approximation ratio for the larger class of XOS (a.k.a. fractionally subadditive) valuations. We also develop a computationally efficient implementation of the mechanism for combinatorial auctions with budget-additive bidders. Although in general computing a demand query is NP-hard for budget-additive valuations, we observe that the specific form of demand queries that our mechanism uses can be efficiently computed when bidders are budget additive.
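To illustrate what a demand oracle computes, here is the easy special case of a plain additive valuation, where the profit-maximizing bundle is simply every item priced below its value; for budget-additive valuations the general problem is NP-hard, which is why the efficient implementation above exploits the specific prices the mechanism queries. The function is a hypothetical illustration.

```python
def additive_demand(values, prices):
    """Demand query for a plain additive valuation: return the bundle S
    maximizing v(S) - p(S), which for additive v is just every item whose
    value exceeds its price. Shown only to illustrate the oracle model.
    """
    return [j for j, (v, p) in enumerate(zip(values, prices)) if v > p]

print(additive_demand([5, 3, 2], [4, 3, 1]))  # items 0 and 2
```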
We consider the design of mechanisms for multi-sided exchanges that interact with strategic players, some of whom have multi-dimensional strategy spaces or are represented by mediators. Players act to optimize their own utilities. The mechanism designer, on the other hand, aims to optimize a social goal, i.e., the gain from trade. As the mediators control the information flow from their players to the mechanism, the mechanism faces strategic behavior not only from the players but also from the mediators: a mediator acts strategically to maximize utility on behalf of the players he represents. In particular, we focus on one example of the above setting which is motivated by the foreseeable future form of online advertising. Online advertising currently supports some of the most important Internet services, including search, social media, and user-generated content sites. To overcome privacy concerns, it has been suggested to introduce user information markets, through information brokers, into the online advertising ecosystem. Such markets give users control over which data get shared in the online advertising exchange. We describe a model for this foreseeable future form of online advertising, and design two mechanisms for the exchange of this model: a deterministic mechanism, which is related to the vast literature on mechanism design through trade reduction and allows players with a multi-dimensional strategy space, and a randomized mechanism, which can handle a more general version of the model.
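The trade-reduction technique referenced above is easiest to see in the classical single-item double auction: sacrifice the least valuable efficient trade so that the remaining traders' prices are set by the reports of non-traders, yielding truthfulness and no budget deficit. A simplified, McAfee-style sketch in our own formulation:

```python
def trade_reduction(buyer_bids, seller_asks):
    """Classical trade-reduction double auction (a simplified sketch of the
    technique, not the paper's multi-sided mechanism).

    Sort buyers descending and sellers ascending, find the number k of
    efficient trades (b_i >= s_i), and execute only the first k - 1 of
    them: trading buyers pay the k-th highest bid and trading sellers
    receive the k-th lowest ask, so no trader's price depends on her own
    report and each trade generates a budget surplus.
    """
    buyers = sorted(buyer_bids, reverse=True)
    sellers = sorted(seller_asks)
    k = 0
    while k < min(len(buyers), len(sellers)) and buyers[k] >= sellers[k]:
        k += 1
    if k <= 1:
        return []  # at most one efficient trade: reducing leaves nothing
    buyer_price, seller_price = buyers[k - 1], sellers[k - 1]
    # (winning bid, winning ask, price the buyer pays, price the seller gets)
    return [(buyers[i], sellers[i], buyer_price, seller_price)
            for i in range(k - 1)]

# Buyers (9, 7) and sellers (1, 3) could trade efficiently; one trade is
# reduced, and the remaining pair trades at prices 7 and 3.
print(trade_reduction([9, 7, 5, 2], [1, 3, 6, 8]))  # [(9, 1, 7, 3)]
```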
The study of mechanisms for multi-sided markets has received growing attention from the research community, motivated by the numerous examples of such markets on the web and in electronic commerce. Many of these examples represent dynamic and uncertain environments, and thus in fact require online mechanisms. Unfortunately, as far as we know, no previously published online mechanism for a multi-sided market (or even for a double-sided market) has managed to (approximately) maximize the gain from trade while guaranteeing desirable economic properties such as incentivizing truthfulness, voluntary participation and avoiding budget deficit. In this work we present the first online mechanism for a multi-sided market which has the above properties. Our mechanism is designed for a market setting suggested by Feldman and Gonen (2016), which is motivated by the foreseeable future form of online advertising. The online nature of our setting motivated us to define a stronger notion of individual rationality, called "continuous individual rationality", capturing the natural requirement that a player should never lose either by participating in the mechanism or by not leaving prematurely. Satisfying the requirements of continuous individual rationality, together with the other economic properties our mechanism guarantees, requires the mechanism to use a novel pricing scheme where users may be paid ongoing increments during the mechanism's execution, up to a pre-known maximum value. As users rarely ever get paid in reality, this pricing scheme is new to mechanism design. Nevertheless, the principle it is based on can be observed in many common real-life scenarios such as executive compensation payments and company acquisition deals. We believe both our new dynamic pricing scheme and our strengthened notion of individual rationality are of independent interest.