Flows over time provide a natural and convenient description for the dynamics of a continuous stream of particles traveling from a source to a sink in a network, allowing one to track the progress of each infinitesimal particle over time. A basic model for the propagation of flow is the so-called fluid queue model, in which the time to traverse an edge is composed of a flow-dependent waiting time in a queue at the entrance of the edge plus a constant travel time after leaving the queue. In a dynamic network routing game each infinitesimal particle is interpreted as a player that seeks to complete its journey in the least possible time. Players are forward-looking and anticipate the congestion and queuing delays induced by others upon arrival at any edge in the network. Equilibrium occurs when each particle travels along a shortest path. This paper is concerned with the study of equilibria in the fluid queue model and provides a constructive proof of existence and uniqueness of equilibria in single origin-destination networks with piecewise constant inflow rate. This is done through a detailed analysis of the underlying static flows obtained as derivatives of a dynamic equilibrium. Furthermore, for multicommodity networks, we give a general nonconstructive proof of existence of equilibria when the inflow rates belong to $L^p$.
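To make the fluid queue dynamics concrete, here is a minimal simulation sketch of a single edge under a piecewise constant inflow rate; the capacity, travel time, and inflow profile are invented values, and the sketch illustrates the model only, not the paper's equilibrium construction.

```python
# Minimal sketch of the fluid queue model on a single edge. All parameter
# values are invented; this illustrates the model, not the paper's
# equilibrium construction.

capacity = 2.0     # per-time-unit queue service rate (assumed)
travel_time = 1.0  # constant travel time after leaving the queue (assumed)

def inflow(t):
    """Piecewise constant inflow rate, as in the single-commodity setting."""
    return 5.0 if t < 3.0 else 1.0

dt, horizon, z = 0.001, 10.0, 0.0  # z is the queue mass at the edge entrance
for k in range(int(horizon / dt)):
    t = k * dt
    # The queue drains at full capacity while nonempty; otherwise flow
    # passes through at rate min(inflow, capacity).
    outflow = capacity if z > 0 else min(inflow(t), capacity)
    z = max(0.0, z + (inflow(t) - outflow) * dt)
    if k % int(1.0 / dt) == 0:
        # A particle entering now waits z/capacity in the queue, then travels.
        print(f"t={t:4.1f}  queue mass={z:6.3f}  "
              f"exit time={t + z / capacity + travel_time:5.2f}")
```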
A central object in optimal stopping theory is the single-choice prophet inequality for independent, identically distributed random variables: given a sequence of random variables $X_1,\dots,X_n$ drawn independently from a distribution $F$, the goal is to choose a stopping time $\tau$ so as to maximize $\alpha$ such that for all distributions $F$ we have $\mathbb{E}[X_\tau] \geq \alpha \cdot \mathbb{E}[\max_t X_t]$. What makes this problem challenging is that the decision whether $\tau=t$ may only depend on the values of the random variables $X_1,\dots,X_t$ and on the distribution $F$. For a long time the best known bound for the problem was $\alpha\geq1-1/e\approx0.632$, but quite recently a tight bound of $\alpha\approx0.745$ was obtained. The case where $F$ is unknown, such that the decision whether $\tau=t$ may depend only on the values of the random variables $X_1,\dots,X_t$, is equally well motivated but has received much less attention. A straightforward guarantee for this case of $\alpha\geq1/e\approx0.368$ can be derived from the solution to the secretary problem, where an arbitrary set of values arrive in random order and the goal is to maximize the probability of selecting the largest value. We show that this bound is in fact tight. We then investigate the case where the stopping time may additionally depend on a limited number of samples from $F$, and show that even with $o(n)$ samples $\alpha\leq1/e$. On the other hand, $n$ samples allow for a significant improvement, while $O(n^2)$ samples are equivalent to knowledge of the distribution: specifically, with $n$ samples $\alpha\geq1-1/e\approx0.632$ and $\alpha\leq\ln(2)\approx0.693$, and with $O(n^2)$ samples $\alpha\geq0.745-\epsilon$ for any $\epsilon>0$.
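The 1/e guarantee from the secretary problem corresponds to a simple rule: observe the first n/e values, then stop at the first value beating everything seen so far. A Monte Carlo sketch of that rule follows; the exponential distribution and the horizon n are arbitrary choices for illustration.

```python
# Monte Carlo sketch of the secretary-based rule that yields the 1/e
# guarantee when F is unknown: observe the first n/e values, then stop at
# the first value exceeding everything seen so far.
import math
import random

def secretary_rule(values):
    cutoff = int(len(values) / math.e)
    best_seen = max(values[:cutoff], default=float("-inf"))
    for x in values[cutoff:]:
        if x > best_seen:
            return x          # stop at the first running maximum after cutoff
    return values[-1]         # forced to accept the last value otherwise

n, trials = 50, 20000
got = prophet = 0.0
for _ in range(trials):
    vals = [random.expovariate(1.0) for _ in range(n)]  # assumed distribution
    got += secretary_rule(vals)
    prophet += max(vals)
print(f"empirical E[X_tau] / E[max_t X_t] ~ {got / prophet:.3f}")
```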
We study coordination mechanisms aiming to minimize the weighted sum of completion times of jobs in the context of selfish scheduling problems. Our goal is to design local policies that achieve a good price of anarchy in the resulting equilibria for unrelated machine scheduling. To obtain these approximation bounds, we introduce a new technique that, while conceptually simple, seems to be quite powerful. The method entails mapping strategy vectors into a carefully chosen inner product space; costs are shown to correspond to the norm in this space, and the Nash condition also has a simple description. With this structure in place, we are able to prove a number of results, as follows. First, we consider Smith's Rule, which orders the jobs on a machine in ascending order of their processing-time-to-weight ratio, and show that it achieves an approximation ratio of 4. We also demonstrate that this is the best possible for deterministic non-preemptive strongly local policies. Since Smith's Rule is always optimal for a given fixed assignment, this may seem unsurprising, but we then show that better approximation ratios can be obtained if either preemption or randomization is allowed.
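On a single machine with a fixed job set, Smith's Rule is just a sort; the following minimal sketch, with invented job data, shows the ordering and the resulting weighted sum of completion times.

```python
# Smith's Rule on a single machine: schedule jobs in ascending order of the
# processing-time-to-weight ratio p_j / w_j, which minimizes the weighted sum
# of completion times for a fixed assignment. Job data is invented.
jobs = [("a", 3.0, 1.0), ("b", 1.0, 2.0), ("c", 2.0, 2.0)]  # (name, p_j, w_j)

t = objective = 0.0
for name, p, w in sorted(jobs, key=lambda job: job[1] / job[2]):
    t += p               # completion time C_j of this job
    objective += w * t   # contribution w_j * C_j
    print(f"job {name}: completes at {t:.1f}, contributes {w * t:.1f}")
print(f"weighted sum of completion times: {objective:.1f}")
```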
Public transit systems in major urban areas usually operate under deficits and therefore require significant subsidies. An important cause of this deficit, particularly in the developing world, is the high fare evasion rate mainly due to an ineffective control policy or the lack of it. In this paper we study new models for optimizing fare inspection strategies in transit networks based on bilevel programming. In the first level, the leader (the network operator) determines probabilities for inspecting passengers at different locations, while in the second level, the followers (the fare-evading passengers) respond by optimizing their routes given the inspection probabilities and travel times. To model the followers’ behavior we study both a nonadaptive variant, in which passengers select a path a priori and continue along it throughout their journey, and an adaptive variant, in which they gain information along the way and use it to update their route. For these problems, which are interesting in their own right, we design exact and approximation algorithms, and we prove a tight bound of 3/4 on the ratio of the optimal cost between adaptive and nonadaptive strategies. For the leader’s optimization problem, we study a fixed-fare and a flexible-fare variant, where ticket prices may or may not be set at the operator’s will. For the latter variant, we design an LP-based approximation algorithm. Finally, employing a local search procedure that shifts inspection probabilities within an initially determined support set, we perform an extensive computational study for all variants of the problem on instances of the Dutch railway and the Amsterdam subway network. This study reveals that our solutions are within 5% of theoretical upper bounds drawn from the LP relaxation. We also derive exact nonlinear programming formulations for all variants of the leader’s problem and use them to obtain exact solutions for small instance sizes. The e-companion is available at https://doi.org/10.1287/opre.2016.1560 .
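As a toy illustration of the followers' side, the sketch below computes a nonadaptive evader's best response in a tiny invented network, under the simplifying assumption that the fine is paid at most once per journey and inspections on distinct edges are independent; this is a simplified stand-in for the paper's model, not its exact formulation.

```python
# Toy sketch of the nonadaptive follower's best response: the evader fixes a
# path minimizing travel time plus expected fine. Network, probabilities,
# and the fine are invented, and the cost model (fine paid at most once,
# independent inspections) is a simplification of the paper's model.
edges = {  # edge -> (travel_time, inspection_probability)
    ("s", "a"): (2.0, 0.1), ("a", "t"): (2.0, 0.1),
    ("s", "b"): (1.0, 0.5), ("b", "t"): (1.0, 0.4),
}
paths = [[("s", "a"), ("a", "t")], [("s", "b"), ("b", "t")]]
fine = 10.0

def expected_cost(path):
    time = sum(edges[e][0] for e in path)
    p_escape = 1.0
    for e in path:
        p_escape *= 1.0 - edges[e][1]        # avoid inspection on every edge
    return time + fine * (1.0 - p_escape)    # fine charged if caught at least once

for path in paths:
    print(path, f"expected cost = {expected_cost(path):.2f}")
print("nonadaptive best response:", min(paths, key=expected_cost))
```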
After a sequence of improvements, Boyd et al. [TSP on cubic and subcubic graphs, Integer Programming and Combinatorial Optimization, Lecture Notes in Comput. Sci. 6655, Springer, Heidelberg, 2011, pp. 65--77] proved that any 2-connected graph whose $n$ vertices all have degree 3, i.e., a cubic 2-connected graph, has a Hamiltonian tour of length at most $(4/3)n$, establishing in particular that the integrality gap of the subtour LP is at most 4/3 for cubic 2-connected graphs and matching the conjectured value of the famous 4/3 conjecture. In this paper we improve upon this result by designing an algorithm that finds a tour of length $(4/3-1/61236)n$, implying that cubic 2-connected graphs are among the few interesting classes of graphs for which the integrality gap of the subtour LP is strictly less than 4/3. Combining this with the previous result, and by considering an even smaller $\epsilon$, we show that the integrality gap of the TSP relaxation is at most $4/3-\epsilon$ even if the graph is not 2-connected (i.e., for cubic connected graphs), implying that the approximability threshold of the TSP in cubic graphs is strictly below 4/3. Finally, using similar techniques, we show, as an additional result, that every Barnette graph admits a tour of length at most $(4/3-1/18)n$.
In the classic prophet inequality, a problem in optimal stopping theory, samples from independent random variables (possibly differently distributed) arrive online. A gambler that knows the distributions, but cannot see the future, must decide at each point in time whether to stop and pick the current sample or to continue and lose that sample forever. The goal of the gambler is to maximize the expected value of what she picks, and the performance measure is the worst case ratio between the expected value the gambler gets and what a prophet, who sees all the realizations in advance, gets. In the late seventies, Krengel and Sucheston, and Garling [16], established that this worst case ratio is a constant and that 1/2 is the best possible such constant. In the last decade the theory of prophet inequalities has resurged as an important problem due to its connections to posted price mechanisms, frequently used in online sales. A particularly interesting variant is the so-called Prophet Secretary problem, in which the only difference is that the samples arrive in a uniformly random order. For this variant several algorithms are known to achieve a constant of 1 − 1/e, and very recently this barrier was slightly improved by Azar et al. [3]. In this paper we derive a way of analyzing multi-threshold strategies, which set a nonincreasing sequence of thresholds to be applied at different times; the gambler stops the first time a sample surpasses the corresponding threshold. Specifically, we consider a class of very robust strategies that we call blind quantile strategies. These constitute a clever generalization of single threshold strategies and consist in fixing a function which is used to define a sequence of thresholds once the instance is revealed. Our main result shows that these strategies can achieve a constant of 0.669 in the Prophet Secretary problem, improving upon the best known result of Azar et al. [3], and even that of Beyhaghi et al. [4], which applies when the gambler can select the order of the samples. The crux of the analysis is a very precise study of the underlying stopping time distribution for the gambler's strategy, inspired by the theory of Schur-convex functions. We further prove that our family of blind strategies cannot lead to a constant better than 0.675. Finally, we prove that no nonadaptive algorithm for the gambler can achieve a constant better than 0.732, which also improves upon a recent result of Azar et al. [3]. Here, a nonadaptive algorithm is an algorithm whose decision to stop can depend on the index of the random variable being sampled, on the value sampled, and on the time, but not on the history that has been observed.
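A multi-threshold strategy is easy to simulate; in the sketch below the nonincreasing threshold schedule is an arbitrary stand-in rather than the paper's optimized blind quantile strategy, and the uniform value distributions are invented.

```python
# Simulation sketch of a multi-threshold strategy for the prophet secretary
# problem. The nonincreasing threshold schedule is an arbitrary stand-in for
# the paper's optimized blind quantile strategy; distributions are invented.
import random

n = 10
scales = [1.0 + i for i in range(n)]  # X_i ~ U[0, s_i], independent, not identical
thresholds = [max(scales) * (1 - (t + 1) / (n + 1)) for t in range(n)]  # nonincreasing

def run_trial():
    vals = [random.uniform(0, s) for s in scales]
    random.shuffle(vals)                  # uniformly random arrival order
    for t, x in enumerate(vals):
        if x >= thresholds[t]:            # stop at the first threshold crossing
            return x, max(vals)
    return 0.0, max(vals)                 # the gambler may end up empty-handed

trials = 20000
got = prophet = 0.0
for _ in range(trials):
    g, m = run_trial()
    got += g
    prophet += m
print(f"empirical gambler/prophet ratio ~ {got / prophet:.3f}")
```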
In the secretary problem we are faced with an online sequence of elements with values. Upon seeing an element we have to make an irrevocable take-it-or-leave-it decision. The goal is to maximize the probability of picking the element of maximum value. The most classic version of the problem is that in which the elements arrive in random order and their values are arbitrary. Here, the optimal algorithm picks the maximum value with probability at least 1/e. However, by varying the available information, new interesting problems arise. For instance, in the full information variant of the secretary problem the values are i.i.d. samples from a known distribution. Naturally, the best possible success probability increases and turns out to be approximately 0.58. Also, the case in which the arrival order is adversarial instead of random leads to interesting variants that have been considered in the literature. In this paper we study both the random order and adversarial order secretary problems with an additional twist. The values are arbitrary, but before starting the online sequence we independently sample each element with a fixed probability p. The sampled elements become our information or history set, and the game is played over the remaining elements. We call these problems the random order secretary problem with p-sampling (ROSp for short) and the adversarial order secretary problem with p-sampling (AOSp for short). Our main result is to obtain best possible algorithms for both problems and all values of p. As p grows to 1, the obtained guarantees converge to the optimal guarantees in the full information case. In the adversarial order setting, the best possible algorithm turns out to be a simple fixed threshold algorithm in which the optimal threshold is a function of p only; therefore, even knowledge of the total number of elements is unnecessary. Proving that this algorithm is optimal involves a novel technique, which boils down to analyzing a related game in a conflict graph over binary sequences. In the random order setting we prove that the best possible algorithm is characterized by a fixed sequence of time thresholds, dictating at which point in time we should start accepting a value that is both a maximum of the online sequence and has a given ranking within the sampled elements. Surprisingly, this sequence of time thresholds arises from a separable and convex optimization problem whose solution is independent of p.
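To illustrate the p-sampling setup, the sketch below runs a natural fixed threshold rule for the adversarial order problem: accept the first online value exceeding the sample maximum. The paper's optimal threshold is a function of p and may differ from this illustrative choice; the arrival order here is simply the given one.

```python
# Sketch of a fixed threshold rule for the secretary problem with p-sampling:
# each element joins the history set independently with probability p, and the
# player accepts the first remaining value exceeding the sample maximum. The
# sample maximum is an illustrative threshold, not the paper's optimal one.
import random

def trial(values, p):
    flags = [random.random() < p for _ in values]  # True -> sampled into history
    sample_max = max((v for v, f in zip(values, flags) if f), default=float("-inf"))
    online = [v for v, f in zip(values, flags) if not f]
    if not online:
        return True                       # nothing left to play over
    for v in online:
        if v > sample_max:
            return v == max(online)       # stopped: did we get the online maximum?
    return False                          # never stopped

values = [random.random() for _ in range(20)]  # arbitrary fixed values
wins = sum(trial(values, 0.5) for _ in range(20000))
print(f"success probability at p=0.5 ~ {wins / 20000:.3f}")
```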
José R. Correa, Andrés Cristi, Boris Epstein, and José A. Soto. The Two-Sided Game of Googol and Sample-Based Prophet Inequalities. Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 2066–2081. https://doi.org/10.1137/1.9781611975994.127
The secretary problem or the game of Googol are classic models for online selection problems that have received significant attention in the last five decades. In this paper we consider a variant of the problem and explore its connections to data-driven online selection. Specifically, we are given n cards with arbitrary nonnegative numbers written on both sides. The cards are randomly placed on n consecutive positions on a table, and for each card, the visible side is also selected at random. The player sees the visible side of all cards and wants to select the card with the maximum hidden value. To this end, the player flips the first card, sees its hidden value and decides whether to pick it or drop it and continue with the next card. We study algorithms for two natural objectives. In the first one, similar to the secretary problem, the player wants to maximize the probability of selecting the maximum hidden value. We show that this can be done with probability at least 0.45292. In the second objective, similar to the prophet inequality, the player wants to maximize the expectation of the selected hidden value. Here we show a guarantee of at least 0.63518 with respect to the expected maximum hidden value. Our algorithms result from combining three basic strategies. One is to stop whenever we see a value larger than the initial n visible numbers. The second one is to stop the first time the last flipped card's value is the largest of the n numbers currently visible on the table. And the third one is similar to the latter but, in order to stop, it additionally requires that the last flipped value be larger than the value on the other side of its card. We apply our results to the prophet secretary problem with unknown distributions, but with access to a single sample from each distribution. In particular, our guarantee improves upon 1 − 1/e for this problem, which is the currently best known guarantee and only works for the i.i.d. prophet inequality with samples.
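The second of the three basic strategies translates directly into code: flip cards in order and stop as soon as the value just revealed is the largest among the n numbers currently visible. The simulation below uses invented card values.

```python
# Simulation of the second basic strategy: flip cards left to right and stop
# the first time the value just revealed is the largest of the n numbers
# currently visible on the table. Card values are invented.
import random

def play(card_values):
    # Randomize positions and which side of each card faces up.
    cards = [tuple(random.sample(c, 2)) for c in card_values]
    random.shuffle(cards)
    visible = [c[0] for c in cards]   # initially visible sides
    hidden = [c[1] for c in cards]    # sides revealed upon flipping
    n = len(cards)
    for i in range(n):
        # After flipping card i, flipped cards show their hidden side.
        on_table = hidden[:i + 1] + visible[i + 1:]
        if hidden[i] == max(on_table):
            return hidden[i], max(hidden)
    return hidden[-1], max(hidden)    # forced to keep the last card

card_values = [(random.random(), random.random()) for _ in range(20)]
trials, wins, got, best = 20000, 0, 0.0, 0.0
for _ in range(trials):
    value, optimum = play(card_values)
    wins += value == optimum
    got += value
    best += optimum
print(f"P[max hidden value] ~ {wins / trials:.3f}, "
      f"E[value]/E[max] ~ {got / best:.3f}")
```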
We take a unifying approach to single selection optimal stopping problems with random arrival order and independent sampling of items. In the problem we consider, a decision maker (DM) initially gets to sample each of $N$ items independently with probability $p$, and can observe the relative rankings of these sampled items. Then, the DM faces the remaining items in an online fashion, observing the relative rankings of all revealed items. While scanning the sequence the DM makes irrevocable stop/continue decisions, and her reward for stopping the sequence facing the item with rank $i$ is $Y_i$. The goal of the DM is to maximize her reward. We start by studying the case in which the values $Y_i$ are known to the DM, and then move to the case in which these values are adversarial. For the former case we are able to recover several classic results in the area, thus giving a unifying framework for single selection optimal stopping. For the latter, we pin down the optimal algorithm, obtaining the optimal competitive ratios for all values of $p$. Funding: This work was partially supported by The Center for Mathematical Modeling at the University of Chile (ANID FB210005), Grant Anillo Information and Computation in Market Design (ANID ACT210005), FONDECYT 1220054 and 1181180, and a Meta Research PhD Fellowship.
A fluid queuing network constitutes one of the simplest models in which to study flow dynamics over a network. In this model we have a single source-sink pair, and each link has a per-time-unit capacity and a transit time. A dynamic equilibrium (or equilibrium flow over time) is a flow pattern over time such that no flow particle has an incentive to unilaterally change its path. Although the model has been around for almost fifty years, only recently have results regarding the existence and characterization of equilibria been obtained. In particular, the long-term behavior remains poorly understood. Our main result in this paper is to show that, under a natural (and obviously necessary) condition on the queuing capacity, a dynamic equilibrium reaches a steady state (after which queue lengths remain constant) in finite time. Previously, it was not even known that queue lengths would remain bounded. The proof is based on the analysis of a rather non-obvious potential function that turns out to be monotone along the evolution of the equilibrium. Furthermore, we show that the steady state is characterized as an optimal solution of a certain linear program. When this program has a unique solution, which occurs generically, the long-term behavior is completely predictable. On the contrary, if the linear program has multiple solutions, the steady state is more difficult to identify, as it depends on the whole temporal evolution of the equilibrium.
Public transit systems in urban areas usually require large state subsidies, primarily due to high fare evasion rates. In this paper, we study new models for optimizing fare inspection strategies in transit networks based on bilevel programming. In the first level, the leader (the network operator) determines probabilities for inspecting passengers at different locations, while in the second level, the followers (the fare-evading passengers) respond by optimizing their routes given the inspection probabilities and travel times. To model the followers' behavior we study both a non-adaptive variant, in which passengers select a path a priori and continue along it throughout their journey, and an adaptive variant, in which they gain information along the way and use it to update their route. For these problems, which are interesting in their own right, we design exact and approximation algorithms, and we prove a tight bound of 3/4 on the ratio of the optimal cost between adaptive and non-adaptive strategies. For the leader's optimization problem, we study a fixed-fare and a flexible-fare variant, where ticket prices may or may not be set at the operator's will. For the latter variant, we design an LP-based approximation algorithm. Finally, using a local search procedure that shifts inspection probabilities within an initially determined support set, we perform an extensive computational study for all variants of the problem on instances of the Dutch railway and the Amsterdam subway network. This study reveals that our solutions are within 95% of theoretical upper bounds drawn from the LP relaxation.
We take a unifying approach to single selection optimal stopping problems with random arrival order and independent sampling of items. In the problem we consider, a decision maker (DM) initially gets to sample each of $N$ items independently with probability $p$, and can observe the relative rankings of these sampled items. Then, the DM faces the remaining items in an online fashion, observing the relative rankings of all revealed items. While scanning the sequence the DM makes irrevocable stop/continue decisions and her reward for stopping the sequence facing the item with rank $i$ is $Y_i$. The goal of the DM is to maximize her reward. We start by studying the case in which the values $Y_i$ are known to the DM, and then move to the case in which these values are adversarial. For the former case, we write the natural linear program that captures the performance of an algorithm, and take its continuous limit. We prove a structural result about this continuous limit, which allows us to reduce the problem to a relatively simple real optimization problem. We establish that the optimal algorithm is given by a sequence of thresholds $t_1\le t_2\le\cdots$ such that the DM should stop if seeing an item with current ranking $i$ after time $t_i$. Additionally we are able to recover several classic results in the area such as those for the secretary problem and the minimum ranking problem. For the adversarial case, we obtain a similar linear program with an additional stochastic dominance constraint. Using the same machinery we are able to pin down the optimal competitive ratios for all values of $p$. Notably, we prove that as $p$ approaches 1, our guarantee converges linearly to 0.745, matching that of the i.i.d.~prophet inequality. Also interesting is the case $p=1/2$, where our bound evaluates to $0.671$, which improves upon the state of the art.
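The structure of the optimal algorithm can be simulated directly: stop on an item of current relative rank $i$ once past time $t_i$. In the sketch below the thresholds are placeholders (the true sequence comes from the continuous optimization), the sampling phase is omitted for brevity, and success is measured for the secretary-type reward of hitting the overall maximum.

```python
# Sketch of a rank-based time-threshold rule: scanning items in random order,
# stop at an item of current relative rank i once past time t_i. The
# thresholds are placeholders; the true t_1 <= t_2 <= ... comes from the
# paper's continuous optimization.
import random

def run(n, time_thresholds):
    vals = list(range(n))
    random.shuffle(vals)
    seen = []
    for k, v in enumerate(vals):
        seen.append(v)
        rank = sorted(seen, reverse=True).index(v) + 1  # 1 = best seen so far
        t = (k + 1) / n                                 # arrival time in (0, 1]
        if rank <= len(time_thresholds) and t >= time_thresholds[rank - 1]:
            return v == n - 1                           # stopped at overall max?
    return False

thresholds = [0.37, 0.60, 0.75]  # placeholder values for t_1, t_2, t_3
trials = 20000
wins = sum(run(100, thresholds) for _ in range(trials))
print(f"P[select the maximum] ~ {wins / trials:.3f}")
```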
A central object in optimal stopping theory is the single-choice prophet inequality for independent, identically distributed random variables: Given a sequence of random variables $X_1,\dots,X_n$ drawn independently from a distribution $F$, the goal is to choose a stopping time $\tau$ so as to maximize $\alpha$ such that for all distributions $F$ we have $\mathbb{E}[X_\tau] \geq \alpha \cdot \mathbb{E}[\max_tX_t]$. What makes this problem challenging is that the decision whether $\tau=t$ may only depend on the values of the random variables $X_1,\dots,X_t$ and on the distribution $F$. For quite some time the best known bound for the problem was $\alpha\geq1-1/e\approx0.632$ [Hill and Kertz, 1982]. Only recently this bound was improved by Abolhassani et al. [2017], and a tight bound of $\alpha\approx0.745$ was obtained by Correa et al. [2017].
The case where $F$ is unknown, such that the decision whether $\tau=t$ may depend only on the values of the first $t$ random variables but not on $F$, is equally well motivated (e.g., [Azar et al., 2014]) but has received much less attention. A straightforward guarantee for this case of $\alpha\geq1/e\approx0.368$ can be derived from the solution to the secretary problem. Our main result is that this bound is tight. Motivated by this impossibility result we investigate the case where the stopping time may additionally depend on a limited number of samples from~$F$. An extension of our main result shows that even with $o(n)$ samples $\alpha\leq 1/e$, so that the interesting case is the one with $\Omega(n)$ samples. Here we show that $n$ samples allow for a significant improvement over the secretary problem, while $O(n^2)$ samples are equivalent to knowledge of the distribution: specifically, with $n$ samples $\alpha\geq1-1/e\approx0.632$ and $\alpha\leq\ln(2)\approx0.693$, and with $O(n^2)$ samples $\alpha\geq0.745-\epsilon$ for any $\epsilon>0$.
A prophet inequality states, for some $\alpha\in[0,1]$, that the expected value achievable by a gambler who sequentially observes random variables $X_1,\dots,X_n$ and selects one of them is at least an $\alpha$ fraction of the maximum value in the sequence. We obtain three distinct improvements for a setting that was first studied by Correa et al. (EC, 2019) and is particularly relevant to modern applications in algorithmic pricing. In this setting, the random variables are i.i.d. from an unknown distribution and the gambler has access to an additional $\beta n$ samples for some $\beta\geq 0$. We first give improved lower bounds on $\alpha$ for a wide range of values of $\beta$; specifically, $\alpha\geq(1+\beta)/e$ when $\beta\leq 1/(e-1)$, which is tight, and $\alpha\geq 0.648$ when $\beta=1$, which improves on a bound of around $0.635$ due to Correa et al. (SODA, 2020). Adding to their practical appeal, specifically in the context of algorithmic pricing, we then show that the new bounds can be obtained even in a streaming model of computation and thus in situations where the use of relevant data is complicated by the sheer amount of data available. We finally establish that the upper bound of $1/e$ for the case without samples is robust to additional information about the distribution, and applies also to sequences of i.i.d. random variables whose distribution is itself drawn, according to a known distribution, from a finite set of known candidate distributions. This implies a tight prophet inequality for exchangeable sequences of random variables, answering a question of Hill and Kertz (Contemporary Mathematics, 1992), but leaves open the possibility of better guarantees when the number of candidate distributions is small, a setting we believe is of strong interest to applications.
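One natural rule in the spirit of the $(1+\beta)/e$ bound treats the samples as an observed prefix and runs a secretary-style cutoff over the combined sequence. The sketch below is a Monte Carlo illustration of this idea with an arbitrary distribution, not necessarily the paper's exact algorithm.

```python
# Monte Carlo sketch of a secretary-style rule in the spirit of the (1+b)/e
# bound: treat the b*n samples as an observed prefix, keep observing until a
# cutoff of (1+b)*n/e total values, then accept the first online value that
# beats the running maximum. Distribution and parameters are arbitrary.
import math
import random

def trial(n, beta):
    samples = [random.expovariate(1.0) for _ in range(int(beta * n))]
    online = [random.expovariate(1.0) for _ in range(n)]
    cutoff = (1 + beta) * n / math.e
    best = max(samples, default=float("-inf"))
    for k, x in enumerate(online):
        if len(samples) + k + 1 > cutoff and x > best:
            return x, max(online)         # accept the first record after cutoff
        best = max(best, x)
    return 0.0, max(online)

n, beta, trials = 50, 0.5, 20000
got = prophet = 0.0
for _ in range(trials):
    g, m = trial(n, beta)
    got += g
    prophet += m
print(f"empirical ratio at beta={beta} ~ {got / prophet:.3f}")
```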
Finding a maximum independent set (MIS) of a given family of axis-parallel rectangles is a basic problem in computational geometry and combinatorics. This problem has attracted significant attention since the sixties, when Wegner conjectured that the corresponding duality gap, i.e., the maximum possible ratio between the maximum independent set and the minimum hitting set (MHS), is bounded by a universal constant. An interesting special case, that may prove useful for tackling the general problem, is the diagonal-intersecting case, in which the given family of rectangles is intersected by a diagonal. Indeed, Chepoi and Felsner recently gave a factor 6 approximation algorithm for MHS in this setting, and showed that the duality gap is between 3/2 and 6. In this paper we improve upon these results. First we show that MIS in diagonal-intersecting families is NP-complete, providing one of the smallest subclasses for which MIS is provably hard. Then, we derive an $O(n^2)$-time algorithm for the maximum weight independent set when, in addition, the rectangles intersect below the diagonal. This improves and extends a classic result of Lubiw, and amounts to obtaining a 2-approximation algorithm for the maximum weight independent set of rectangles intersecting a diagonal. Finally, we prove that for diagonal-intersecting families the duality gap is between 2 and 4. The upper bound, which implies an approximation algorithm of the same factor, follows from a simple combinatorial argument, while the lower bound represents the best known lower bound on the duality gap, even in the general case.
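To see the duality gap concretely, the brute-force sketch below computes MIS and MHS for a tiny invented family of rectangles, restricting candidate hitting points to the grid induced by the rectangle coordinates; it is exponential and for illustration only.

```python
# Brute-force illustration of the MIS/MHS duality gap for axis-parallel
# rectangles (exponential time; toy instances only). Candidate hitting points
# are restricted to the grid induced by the rectangle coordinates, which is
# without loss of generality for half-open rectangles.
from itertools import combinations, product

rects = [(0, 0, 2, 2), (1, 1, 3, 3), (2, 0, 4, 2)]  # (x1, y1, x2, y2), invented

def overlap(a, b):
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def mis():
    for k in range(len(rects), 0, -1):
        for sub in combinations(rects, k):
            if all(not overlap(a, b) for a, b in combinations(sub, 2)):
                return k
    return 0

def mhs():
    xs = {r[0] for r in rects} | {r[2] for r in rects}
    ys = {r[1] for r in rects} | {r[3] for r in rects}
    points = list(product(sorted(xs), sorted(ys)))
    hits = lambda p, r: r[0] <= p[0] < r[2] and r[1] <= p[1] < r[3]
    for k in range(1, len(points) + 1):
        for sub in combinations(points, k):
            if all(any(hits(p, r) for p in sub) for r in rects):
                return k
    return len(points)

print(f"MIS = {mis()}, MHS = {mhs()}, duality gap = {mhs() / mis():.2f}")
```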
The secretary problem is probably the most well-studied optimal stopping problem with many applications in economics and management. In the secretary problem, a decision maker faces an unknown sequence of values, revealed successively, and has to make irrevocable take-it-or-leave-it decisions. Her goal is to select the maximum value in the sequence. Although in the classic secretary problem the values of upcoming elements are entirely unknown, in many realistic situations, the decision maker has access to some information, for example, from past data. In this paper, we take a sampling approach and assume that before starting the sequence, each element is sampled independently with probability p. We study both the adversarial and the random arrival models. Our main result is to obtain the best possible algorithms for both settings and all values of p. As p grows to one, the obtained guarantees converge to the optimal guarantees in the full information case. Notably, we establish that the best possible algorithm in the adversarial order setting is a fixed threshold algorithm. In the random order setting, we characterize the best possible algorithm by a sequence of thresholds, dictating at which point in time we should accept a value. Surprisingly, this sequence is independent of p. We complement our theoretical results with numerical experiments on data of people playing the secretary problem repeatedly. Our results help explain some behavioral issues they raised and indicate that people play a strategy similar to our optimal algorithms from the start onwards, albeit slightly suboptimally. This paper was accepted by Chung Piaw Teo, optimization. Funding: This work was partially funded by the Agencia Nacional de Investigación y Desarrollo Chile [Grants FB210005 (Centro de Modelamiento Matemático), AIM23_0004 (Millenium Institute for Research in Market Imperfections and Public Policy), AFB230002 (Instituto Sistemas Complejos de Ingeniería), ACT210005, and Fondo Nacional de Desarrollo Científico y Tecnológico 1220054], the Alexander von Humboldt Foundation with funds from the German Federal Ministry of Education and Research, and the German Research Foundation within the Research Training Group Advanced Optimization in a Networked Economy [Grant Graduiertenkolleg 2201]. The views expressed in this paper do not necessarily reflect those of the European Central Bank or the Eurosystem. Supplemental Material: The online appendix is available at https://doi.org/10.1287/mnsc.2021.01580 .
After a sequence of improvements, Boyd, Sitters, van der Ster, and Stougie proved that any 2-connected graph whose n vertices all have degree 3, i.e., a cubic 2-connected graph, has a Hamiltonian tour of length at most (4/3)n, establishing in particular that the integrality gap of the subtour LP is at most 4/3 for cubic 2-connected graphs and matching the conjectured value of the famous 4/3 conjecture. In this paper we improve upon this result by designing an algorithm that finds a tour of length (4/3 - 1/61236)n, implying that cubic 2-connected graphs are among the few interesting classes of graphs for which the integrality gap of the subtour LP is strictly less than 4/3. Combining this with the previous result, and by considering an even smaller epsilon, we show that the integrality gap of the TSP relaxation is at most 4/3 - epsilon even if the graph is not 2-connected (i.e., for cubic connected graphs), implying that the approximability threshold of the TSP in cubic graphs is strictly below 4/3. Finally, using similar techniques, we show, as an additional result, that every Barnette graph admits a tour of length at most (4/3 - 1/18)n.
This paper continues the study of equilibria for flows over time in the fluid queueing model recently considered by Koch and Skutella [10]. We provide a constructive proof for the existence and uniqueness of equilibria in the case of a single origin-destination with piecewise constant inflow rates, through a detailed analysis of the static flows obtained as derivatives of a dynamic equilibrium. We also give a nonconstructive existence proof of equilibria when the inflow rates belong to $L^p$ including the extension to multiple origin-destinations.
We study policies aiming to minimize the weighted sum of completion times of jobs in the context of coordination mechanisms for selfish scheduling problems. Our goal is to design local policies that achieve a good price of anarchy in the resulting equilibria for unrelated machine scheduling. To obtain the approximation bounds, we introduce a new technique that, while conceptually simple, seems to be quite powerful. With this method we are able to prove the following results.
First, we consider Smith's Rule, which orders the jobs on a machine in ascending order of their processing-time-to-weight ratio, and show that it achieves an approximation ratio of 4. We also demonstrate that this is the best possible for deterministic non-preemptive strongly local policies. Since Smith's Rule is always optimal for a given assignment, this may seem unsurprising, but we then show that better approximation ratios can be obtained if either preemption or randomization is allowed.
We prove that ProportionalSharing, a preemptive strongly local policy, achieves an approximation ratio of 2.618 for the weighted sum of completion times, and an approximation ratio of 2.5 in the unweighted case. Again, we observe that these bounds are tight. Next, we consider Rand, a natural non-preemptive but randomized policy. We show that it achieves an approximation ratio of at most 2.13; moreover, if the sum of the weighted completion times is negligible compared to the cost of the optimal solution, this improves to $\pi/2$.
Finally, we show that both ProportionalSharing and Rand induce potential games, and thus always have a pure Nash equilibrium (unlike Smith's Rule). This also allows us to design the first combinatorial constant-factor approximation algorithm minimizing weighted completion time for unrelated machine scheduling that achieves a factor of $2+\epsilon$ for any $\epsilon > 0$.
The secretary problem or the game of Googol are classic models for online selection problems that have received significant attention in the last five decades. We consider a variant of the problem and explore its connections to data-driven online selection. Specifically, we are given $n$ cards with arbitrary non-negative numbers written on both sides. The cards are randomly placed on $n$ consecutive positions on a table, and for each card, the visible side is also selected at random. The player sees the visible side of all cards and wants to select the card with the maximum hidden value. To this end, the player flips the first card, sees its hidden value and decides whether to pick it or drop it and continue with the next card.
We study algorithms for two natural objectives. In the first one, as in the secretary problem, the player wants to maximize the probability of selecting the maximum hidden value. We show that this can be done with probability at least $0.45292$. In the second one, similar to the prophet inequality, the player maximizes the expectation of the selected hidden value. We show a guarantee of at least $0.63518$ with respect to the expected maximum hidden value.
Our algorithms result from combining three basic strategies. One is to stop whenever we see a value larger than the initial $n$ visible numbers. The second one is to stop the first time the last flipped card's value is the largest of the currently $n$ visible numbers in the table. And the third one is similar to the latter but it additionally requires that the last flipped value is larger than the value on the other side of its card.
We apply our results to the prophet secretary problem with unknown distributions, but with access to a single sample from each distribution. Our guarantee improves upon $1-1/e$ for this problem, which is the currently best known guarantee and only works for the i.i.d. case.
In the secretary problem we are faced with an online sequence of elements with values. Upon seeing an element we have to make an irrevocable take-it-or-leave-it decision. The goal is to maximize the probability of picking the element of maximum value. The most classic version of the problem is that in which the elements arrive in random order and their values are arbitrary. However, by varying the available information, new interesting problems arise. Also the case in which the arrival order is adversarial instead of random leads to interesting variants that have been considered in the literature. In this paper we study both the random order and adversarial order secretary problems with an additional twist. The values are arbitrary, but before starting the online sequence we independently sample each element with a fixed probability $p$. The sampled elements become our information or history set and the game is played over the remaining elements. We call these problems the random order secretary problem with $p$-sampling (ROS$p$ for short) and the adversarial order secretary problem with $p$-sampling (AOS$p$ for short). Our main result is to obtain best possible algorithms for both problems and all values of $p$. As $p$ grows to 1 the obtained guarantees converge to the optimal guarantees in the full information case. In the adversarial order setting, the best possible algorithm turns out to be a simple fixed threshold algorithm in which the optimal threshold is a function of $p$ only. In the random order setting we prove that the best possible algorithm is characterized by a fixed sequence of time thresholds, dictating at which point in time we should start accepting a value that is both a maximum of the online sequence and has a given ranking within the sampled elements.
In the classic prophet inequality, samples from independent random variables arrive online. A gambler that knows the distributions must decide at each point in time whether to stop and pick the current sample or to continue and lose that sample forever. The goal of the gambler is to maximize the expected value of what she picks, and the performance measure is the worst case ratio between the expected value the gambler gets and what a prophet, who sees all the realizations in advance, gets. In the late seventies, Krengel and Sucheston, and Garling (1977) established that this worst case ratio is a universal constant equal to 1/2. In the last decade the theory of prophet inequalities has resurged as an important problem due to its connections to posted price mechanisms, frequently used in online sales. A very interesting variant is the Prophet Secretary problem, in which the only difference is that the samples arrive in a uniformly random order. For this variant several algorithms achieve a constant of 1-1/e, and very recently this barrier was slightly improved. This paper analyzes strategies that set a nonincreasing sequence of thresholds to be applied at different times. The gambler stops the first time a sample surpasses the corresponding threshold. Specifically, we consider a class of strategies called blind quantile strategies. They consist in fixing a function which is used to define a sequence of thresholds once the instance is revealed. Our main result shows that they can achieve a constant of 0.665, improving upon the best known result of Azar et al. (2018), and on Beyhaghi et al. (2018) (order selection). Our proof analyzes precisely the underlying stopping time distribution, relying on Schur-convexity theory. We further prove that blind strategies cannot achieve better than 0.675. Finally, we prove that no nonadaptive algorithm for the gambler can achieve better than 0.732.
Finding a maximum independent set (MIS) of a given family of axis-parallel rectangles is a basic problem in computational geometry and combinatorics. This problem has attracted significant attention since the sixties, when Wegner conjectured that the corresponding duality gap, i.e., the maximum possible ratio between the maximum independent set and the minimum hitting set (MHS), is bounded by a universal constant. An interesting special case, that may prove useful in tackling the general problem, is the diagonal-intersecting case, in which the given family of rectangles is intersected by a diagonal. Indeed, Chepoi and Felsner recently gave a factor 6 approximation algorithm for MHS in this setting, and showed that the duality gap is between 3/2 and 6. In this paper we improve upon these results. First we show that MIS in diagonal-intersecting families is NP-complete, providing one of the smallest subclasses for which MIS is provably hard. Then, we derive an $O(n^2)$-time algorithm for the maximum weight independent set when, in addition, the rectangles intersect below the diagonal. This improves and extends a classic result of Lubiw, and yields a 2-approximation algorithm for the maximum weight independent set of rectangles intersecting a diagonal. Finally, we prove that for diagonal-intersecting families the duality gap is between 2 and 4. The upper bound, which implies an approximation algorithm of the same factor, follows from a simple combinatorial argument, while the lower bound represents the best known lower bound on the duality gap, even in the general case.
After a sequence of improvements, Boyd, Sitters, van der Ster, and Stougie proved that any 2-connected graph whose n vertices all have degree 3, i.e., a cubic 2-connected graph, has a Hamiltonian tour of length at most (4/3)n, establishing in particular that the integrality gap of the subtour LP is at most 4/3 for cubic 2-connected graphs and matching the conjectured value of the famous 4/3 conjecture. In this paper we improve upon this result by designing an algorithm that finds a tour of length (4/3 - 1/61236)n, implying that cubic 2-connected graphs are among the few interesting classes of graphs for which the integrality gap of the subtour LP is strictly less than 4/3. Combining this with the previous result, and considering an even smaller epsilon, we show that the integrality gap of the TSP relaxation is at most 4/3 - epsilon even if the graph is not 2-connected (i.e., for connected cubic graphs), implying that the approximability threshold of the TSP in cubic graphs is strictly below 4/3. Finally, using similar techniques we show, as an additional result, that every Barnette graph admits a tour of length at most (4/3 - 1/18)n.
We study policies aiming to minimize the weighted sum of completion times of jobs in the context of coordination mechanisms for selfish scheduling problems. Our goal is to design local policies that achieve a good price of anarchy in the resulting equilibria for unrelated machine scheduling. To obtain the approximation bounds, we introduce a new technique that, while conceptually simple, seems to be quite powerful. With this method we are able to prove the following results. First, we consider Smith's Rule, which orders the jobs on a machine in ascending processing time to weight ratio, and show that it achieves an approximation ratio of 4. We also demonstrate that this is the best possible for deterministic non-preemptive strongly local policies. Since Smith's Rule is always optimal for a given assignment, this may seem unsurprising, but we then show that better approximation ratios can be obtained if either preemption or randomization is allowed. We prove that ProportionalSharing, a preemptive strongly local policy, achieves an approximation ratio of 2.618 for the weighted sum of completion times, and an approximation ratio of 2.5 in the unweighted case. Again, we observe that these bounds are tight. Next, we consider Rand, a natural non-preemptive but randomized policy. We show that it achieves an approximation ratio of at most 2.13; moreover, if the sum of the weighted completion times is negligible compared to the cost of the optimal solution, this improves to $\pi/2$. Finally, we show that both ProportionalSharing and Rand induce potential games, and thus always have a pure Nash equilibrium (unlike Smith's Rule). This also allows us to design the first \emph{combinatorial} constant-factor approximation algorithm minimizing weighted completion time for unrelated machine scheduling that achieves a factor of $2+\epsilon$ for any $\epsilon > 0$.
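As a quick illustration of the policy itself (a minimal sketch; the job data and the scoring harness are our own, not from the paper), Smith's Rule on a single machine simply sorts jobs by processing time over weight:

```python
def smith_rule_order(jobs):
    """Smith's Rule on a single machine: sequence jobs in ascending
    processing-time-to-weight ratio. `jobs` holds (processing_time, weight)."""
    return sorted(jobs, key=lambda job: job[0] / job[1])

def weighted_completion_time(sequence):
    """Weighted sum of completion times of a given job sequence."""
    clock, total = 0, 0
    for processing_time, weight in sequence:
        clock += processing_time
        total += weight * clock
    return total

jobs = [(3, 1), (1, 2), (2, 2)]  # hypothetical jobs for illustration
print(weighted_completion_time(smith_rule_order(jobs)))
```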
Public transit systems in urban areas usually require large state subsidies, primarily due to high fare evasion rates. In this paper, we study new models for optimizing fare inspection strategies in transit networks based on bilevel programming. In the first level, the leader (the network operator) determines probabilities for inspecting passengers at different locations, while in the second level, the followers (the fare-evading passengers) respond by optimizing their routes given the inspection probabilities and travel times. To model the followers' behavior we study both a non-adaptive variant, in which passengers select a path a priori and continue along it throughout their journey, and an adaptive variant, in which they gain information along the way and use it to update their route. For these problems, which are interesting in their own right, we design exact and approximation algorithms, and we prove a tight bound of 3/4 on the ratio of the optimal cost between adaptive and non-adaptive strategies. For the leader's optimization problem, we study a fixed-fare and a flexible-fare variant, where ticket prices may or may not be set at the operator's will. For the latter variant, we design an LP-based approximation algorithm. Finally, using a local search procedure that shifts inspection probabilities within an initially determined support set, we perform an extensive computational study for all variants of the problem on instances of the Dutch railway and the Amsterdam subway network. This study reveals that our solutions are within 95% of theoretical upper bounds drawn from the LP relaxation.
In this work we initiate the study of buy-and-sell prophet inequalities. We start by considering what is arguably the most fundamental setting. In this setting the online algorithm observes a sequence of prices one after the other. At each time step, the online algorithm can decide to buy and pay the current price if it does not hold the item already; or it can decide to sell and collect the current price as a reward if it holds the item. We show that for i.i.d. prices a single-threshold online algorithm achieves at least $1/2$ of the expected profit of the optimal offline algorithm, and we prove that this is optimal. For non-i.i.d. prices in random order, where prices are no longer independent, we give a single-threshold online algorithm that achieves at least a $1/16$ fraction of the expected profit of the optimal offline algorithm. We also show that in this setting no single-threshold online algorithm can yield better than a $1/3$ approximation, and thus establish a formal separation from the i.i.d. case. On the other hand, we present a threshold-based online algorithm for this setting that yields a $1/2-o(1)$ approximation. For non-i.i.d. prices in adversarial order, no approximation is possible. We use the results for these base cases to solve a variety of more complex settings. For instance, we show a $1/2-o(1)$ approximation for settings where prices are affiliated and the online algorithm has access to only a single sample. We also extend our upper and lower bounds for the single item case to $k$ items, and thus in particular show that it is impossible to achieve $1-o(1)$ approximations. For the budgeted version, where fractions of an item can be bought and gains can be reinvested, we show a constant-factor approximation to the optimal offline algorithm's growth rate. In a setting with $k$ item types and price streams, we achieve an $\Omega(1/k)$ approximation for the unit-capacity case, which is optimal.
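The single-threshold policy for the base setting can be sketched as follows; this is our own illustrative reading (the threshold value, tie handling, and end-of-horizon behavior are assumptions, not the paper's exact specification):

```python
def single_threshold_profit(prices, tau):
    """Buy when the price is below tau and we hold nothing; sell when the
    price is above tau and we hold the item. Anything still held at the end
    of the sequence is written off (a simplifying assumption of this sketch)."""
    holding, profit = False, 0.0
    for price in prices:
        if not holding and price < tau:
            profit -= price      # buy low
            holding = True
        elif holding and price > tau:
            profit += price      # sell high
            holding = False
    return profit

print(single_threshold_profit([0.2, 0.9, 0.1, 0.8, 0.5], tau=0.5))
```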
We study the classic single-choice prophet inequality problem through a resource augmentation lens. Our goal is to bound the $(1-\varepsilon)$-competition complexity of different types of online algorithms. This metric asks for the smallest $k$ such that the expected value of the online algorithm on $k$ copies of the original instance is at least a $(1-\varepsilon)$-approximation to the expected offline optimum on a single copy. We show that block threshold algorithms, which set one threshold per copy, are optimal and give a tight bound of $k = \Theta(\log \log 1/\varepsilon)$. This shows that block threshold algorithms approach the offline optimum doubly exponentially fast. For single threshold algorithms, we give a tight bound of $k = \Theta(\log 1/\varepsilon)$, establishing an exponential gap between block threshold algorithms and single threshold algorithms. Our model and results pave the way for exploring resource-augmented prophet inequalities in combinatorial settings. In line with this, we present preliminary findings for bipartite matching with one-sided vertex arrivals, as well as in XOS combinatorial auctions. Our results have a natural competition complexity interpretation in mechanism design and pricing applications.
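The competition-complexity metric is easy to probe empirically. The sketch below is entirely our own (the threshold 0.9 is an arbitrary illustrative choice, not the optimal single threshold); it estimates the ratio between a single-threshold algorithm run on $k$ concatenated copies and the offline optimum on one copy:

```python
import random

def online_value(stream, threshold):
    """Single-threshold rule: accept the first value above the threshold."""
    for x in stream:
        if x >= threshold:
            return x
    return 0.0

random.seed(1)
n, k, trials = 20, 3, 20000
threshold = 0.9  # arbitrary illustrative choice, not the optimal threshold
online = offline = 0.0
for _ in range(trials):
    copies = [[random.random() for _ in range(n)] for _ in range(k)]
    online += online_value([x for copy in copies for x in copy], threshold)
    offline += max(copies[0])  # offline optimum on a single copy
print(online / offline)  # a well-chosen threshold drives this toward 1 as k grows
```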
When launching new products, firms face uncertainty about market reception. Online reviews provide valuable information not only to consumers but also to firms, allowing firms to adjust the product characteristics, including its selling price. In this paper, we consider a pricing model with online reviews in which the quality of the product is uncertain, and both the seller and the buyers Bayesianly update their beliefs to make purchasing and pricing decisions. We model the seller's pricing problem as a basic bandit problem and show a close connection with the celebrated Catalan numbers, allowing us to efficiently compute the overall future discounted reward of the seller. With this tool, we analyze and compare the optimal static and dynamic pricing strategies in terms of the probability of effectively learning the quality of the product.
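For reference, the Catalan numbers mentioned above satisfy the standard closed form and recurrence $C_n = \frac{1}{n+1}\binom{2n}{n}$, with $C_0 = 1$ and $C_{n+1} = \sum_{i=0}^{n} C_i\,C_{n-i}$, giving $C_1 = 1$, $C_2 = 2$, $C_3 = 5$, $C_4 = 14, \dots$; precisely how these enter the seller's discounted-reward computation is worked out in the paper.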
Apportionment is the act of distributing the seats of a legislature among political parties (or states) in proportion to their vote shares (or populations). A famous impossibility by Balinski and Young (2001) shows that no apportionment method can be proportional up to one seat (quota) while also responding monotonically to changes in the votes (population monotonicity). Grimmett (2004) proposed to overcome this impossibility by randomizing the apportionment, which can achieve quota as well as perfect proportionality and monotonicity -- at least in terms of the expected number of seats awarded to each party. Still, the correlations between the seats awarded to different parties may exhibit bizarre non-monotonicities. When parties or voters care about joint events, such as whether a coalition of parties reaches a majority, these non-monotonicities can cause paradoxes, including incentives for strategic voting. In this paper, we propose monotonicity axioms ruling out these paradoxes, and study which of them can be satisfied jointly with Grimmett's axioms. Essentially, we require that, if a set of parties all receive more votes, the probability of those parties jointly receiving more seats should increase. Our work draws on a rich literature on unequal probability sampling in statistics (studied as dependent randomized rounding in computer science). Our main result shows that a sampling scheme due to Sampford (1967) satisfies Grimmett's axioms and a notion of higher-order correlation monotonicity.
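Sampford's scheme is short enough to state in full. The sketch below follows the standard description from the sampling literature; applying it to apportionment as in the paper would, roughly, use the parties' fractional seat shares as inclusion probabilities (the precise reduction is the paper's, not ours):

```python
import random

def sampford_sample(pi, rng=random.Random(0)):
    """Sampford's (1967) rejective scheme for fixed-size sampling with
    prescribed inclusion probabilities pi (each strictly between 0 and 1,
    summing to the sample size n): draw one unit with probability
    proportional to pi[i], then n-1 units with probability proportional to
    pi[i]/(1-pi[i]), and accept only if all n drawn units are distinct."""
    n = round(sum(pi))
    odds = [p / (1 - p) for p in pi]
    while True:
        draws = rng.choices(range(len(pi)), weights=pi, k=1) \
              + rng.choices(range(len(pi)), weights=odds, k=n - 1)
        if len(set(draws)) == n:
            return sorted(draws)

# Hypothetical fractional seat shares for three parties and a 2-seat lottery.
print(sampford_sample([0.9, 0.7, 0.4]))
```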
A fundamental economic question is that of designing revenue-maximizing mechanisms in dynamic environments. This paper considers a simple yet compelling market model to tackle this question, where forward-looking buyers arrive at the market over discrete time periods, and a monopolistic seller is endowed with a limited supply of a single good. In the case of i.i.d. and regular valuations for the buyers, Board and Skrzypacz (2016) characterized the optimal mechanism and proved the optimality of posted prices in the continuous-time limit. Our main result considers the limit case of a continuum of buyers, establishing that for arbitrary independent buyers' valuations, posted prices and capacity rationing can implement the optimal anonymous mechanism. Our result departs from the literature in three ways: It does not make any regularity assumptions, it considers the case of general, not necessarily i.i.d., arrivals, and finally, not only posted prices but also capacity rationing takes part in the optimal mechanism. Additionally, if supply is unlimited, we show that the rationing effect vanishes, and the optimal mechanism can be implemented using posted prices only, à la Board (2008).
How to elect the representatives in legislative bodies is a question that every modern democracy has to answer. This design task has to consider various elements so as to fulfill the citizens' expectations and contribute to the maintenance of a healthy democracy. The notion of proportionality, whereby the support of a given idea in the house should be nearly proportional to its support in the general public, lies at the core of this design task. In the last decades, demographic aspects beyond political support have been incorporated by requiring that they, too, are fairly represented in the body, giving rise to a multidimensional version of the apportionment problem. In this work, we provide an axiomatic justification for a recently proposed notion of multidimensional proportionality and extend it to encompass two relevant constraints often used in electoral systems: a threshold on the number of votes that a list needs in order to be eligible, and the election of the most-voted candidate in each district. We then build upon these results to design methods based on multidimensional proportionality. We use the results of the Chilean Constitutional Convention election (May 15-16, 2021) as a testing ground -- where the dimensions are given by political lists, districts, and genders -- and compare the apportionment obtained under each method according to three criteria: proportionality, representativeness, and voting power. While local and global methods exhibit a natural trade-off between local and global proportionality, including the election of most-voted candidates on top of methods based on 3-dimensional proportionality allows us to incorporate both notions while ensuring higher levels of representativeness and a balanced voting power.
The apportionment problem constitutes a fundamental problem in democratic societies: How to distribute a fixed number of seats among a set of states in proportion to the states' populations? This--seemingly simple--task has led to a rich literature and has become well known in the context of the US House of Representatives. In this paper, we connect the design of monotone apportionment methods to classic problems from discrete geometry and combinatorial optimization and explore the extent to which randomization can enhance proportionality. We first focus on the well-studied family of stationary divisor methods, which satisfy the strong population monotonicity property, and show that this family produces only a slightly superlinear number of different outputs as a function of the number of states. While our upper and lower bounds leave a small gap, we show that--surprisingly--closing this gap would solve a long-standing open problem from discrete geometry, known as the complexity of $k$-levels in line arrangements. The main downside of divisor methods is their violation of the quota axiom, i.e., every state should receive $\lfloor q_i\rfloor$ or $\lceil q_i\rceil$ seats, where $q_i$ is the proportional share of the state. As we show that randomizing over divisor methods can only partially overcome this issue, we propose a relaxed version of divisor methods in which the total number of seats may slightly deviate from the house size. By randomizing over them, we can simultaneously satisfy population monotonicity, quota, and ex-ante proportionality. Finally, we turn our attention to quota-compliant methods that are house-monotone, i.e., no state may lose a seat when the house size is increased. We provide a polyhedral characterization based on network flows, which implies a simple description of all ex-ante proportional randomized methods that are house-monotone and quota-compliant.
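For concreteness, a stationary divisor method with divisor sequence $d(a) = a + \Delta$ can be run as a simple highest-quotient loop (a minimal sketch with illustrative vote counts; $\Delta = 1$ recovers Jefferson/D'Hondt and $\Delta = 1/2$ Webster/Sainte-Laguë):

```python
def stationary_divisor(votes, house_size, delta=1.0):
    """Stationary divisor method with divisor sequence d(a) = a + delta
    (delta > 0 here): repeatedly award the next seat to the party/state
    maximizing votes/(seats + delta). delta = 1 is Jefferson/D'Hondt,
    delta = 1/2 is Webster/Sainte-Lague."""
    seats = [0] * len(votes)
    for _ in range(house_size):
        i = max(range(len(votes)), key=lambda j: votes[j] / (seats[j] + delta))
        seats[i] += 1
    return seats

# Illustrative vote counts; D'Hondt awards the 10 seats as [6, 2, 2].
print(stationary_divisor([53000, 24000, 23000], house_size=10))
```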
The secretary problem is probably the most well-studied optimal stopping problem, with many applications in economics and management. In the secretary problem, a decision maker faces an unknown sequence of values, revealed successively, and has to make irrevocable take-it-or-leave-it decisions. Her goal is to select the maximum value in the sequence. Although in the classic secretary problem the values of upcoming elements are entirely unknown, in many realistic situations the decision maker has access to some information, for example, from past data. In this paper, we take a sampling approach and assume that before starting the sequence, each element is sampled independently with probability p. We study both the adversarial and the random arrival models. Our main result is to obtain the best possible algorithms for both settings and all values of p. As p grows to one, the obtained guarantees converge to the optimal guarantees in the full information case. Notably, we establish that the best possible algorithm in the adversarial order setting is a fixed threshold algorithm. In the random order setting, we characterize the best possible algorithm by a sequence of thresholds, dictating at which point in time we should accept a value. Surprisingly, this sequence is independent of p. We complement our theoretical results with numerical experiments on data of people playing the secretary problem repeatedly. Our results help explain some of the behavioral patterns observed in this data and indicate that people play a strategy similar to our optimal algorithms from the start onwards, albeit slightly suboptimally.
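To convey the flavor of the sampling model, here is a small simulation of our own (the exact optimal threshold as a function of p is derived in the paper; the rule below, which simply beats the best sampled value, is a simplified stand-in):

```python
import random

def success_probability(values, p, trials=20000, seed=2):
    """Estimate the probability of picking the maximum when each element is
    first sampled into the history set independently with probability p and
    we then accept the first remaining value beating every sampled one."""
    rng, best, hits = random.Random(seed), max(values), 0
    for _ in range(trials):
        sampled, online = [], []
        for v in values:  # fixed (adversarial) arrival order
            (sampled if rng.random() < p else online).append(v)
        threshold = max(sampled, default=float("-inf"))
        pick = next((v for v in online if v > threshold), None)
        hits += (pick == best)
    return hits / trials

print(success_probability(list(range(1, 21)), p=0.5))
```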
We take a unifying approach to single selection optimal stopping problems with random arrival order and independent sampling of items. In the problem we consider, a decision maker (DM) initially gets to sample each of $N$ items independently with probability $p$, and can observe the relative rankings of these sampled items. Then, the DM faces the remaining items in an online fashion, observing the relative rankings of all revealed items. While scanning the sequence the DM makes irrevocable stop/continue decisions, and her reward for stopping the sequence facing the item with rank $i$ is $Y_i$. The goal of the DM is to maximize her reward. We start by studying the case in which the values $Y_i$ are known to the DM, and then move to the case in which these values are adversarial. For the former case we are able to recover several classic results in the area, thus giving a unifying framework for single selection optimal stopping. For the latter, we pin down the optimal algorithm, obtaining the optimal competitive ratios for all values of $p$.
A fluid queuing network constitutes one of the simplest models in which to study flow dynamics over a network. In this model we have a single source-sink pair and each link has a per-time-unit capacity and a transit time. A dynamic equilibrium (or equilibrium flow over time) is a flow pattern over time such that no flow particle has an incentive to unilaterally change its path. Although the model has been around for almost fifty years, only recently have results regarding existence and characterization of equilibria been obtained. In particular, the long-term behavior remains poorly understood. Our main result in this paper is to show that, under a natural (and obviously necessary) condition on the queuing capacity, a dynamic equilibrium reaches a steady state (after which queue lengths remain constant) in finite time. Previously, it was not even known that queue lengths would remain bounded. The proof is based on the analysis of a rather non-obvious potential function that turns out to be monotone along the evolution of the equilibrium. Furthermore, we show that the steady state is characterized as an optimal solution of a certain linear program. When this program has a unique solution, which occurs generically, the long-term behavior is completely predictable. On the contrary, if the linear program has multiple solutions the steady state is more difficult to identify, as it depends on the whole temporal evolution of the equilibrium.
In the secretary problem we are faced with an online sequence of elements with values. Upon seeing an element we have to make an irrevocable take-it-or-leave-it decision. The goal is to maximize the probability of picking the element of maximum value. The most classic version of the problem is that in which the elements arrive in random order and their values are arbitrary. Here, the optimal algorithm picks the maximum value with probability at least 1/e. However, by varying the available information, new interesting problems arise. For instance, in the full information variant of the secretary problem the values are i.i.d. samples from a known distribution. Naturally, the best possible success probability increases and turns out to be approximately 0.58. Also, the case in which the arrival order is adversarial instead of random leads to interesting variants that have been considered in the literature. In this paper we study both the random order and adversarial order secretary problems with an additional twist. The values are arbitrary, but before starting the online sequence we independently sample each element with a fixed probability p. The sampled elements become our information or history set and the game is played over the remaining elements. We call these problems the random order secretary problem with p-sampling (ROSp for short) and the adversarial order secretary problem with p-sampling (AOSp for short). Our main result is to obtain best possible algorithms for both problems and all values of p. As p grows to 1 the obtained guarantees converge to the optimal guarantees in the full information case. In the adversarial order setting, the best possible algorithm turns out to be a simple fixed threshold algorithm in which the optimal threshold is a function of p only. Therefore, even knowledge of the total number of elements is unnecessary. Proving that this algorithm is optimal involves a novel technique, which boils down to analyzing a related game in a conflict graph over binary sequences. In the random order setting we prove that the best possible algorithm is characterized by a fixed sequence of time thresholds, dictating at which point in time we should start accepting a value that is both a maximum of the online sequence and has a given ranking within the sampled elements. Surprisingly, this sequence of time thresholds arises from a separable and convex optimization problem whose solution is independent of p.
A prophet inequality states, for some $\alpha\in[0,1]$, that the expected value achievable by a gambler who sequentially observes random variables $X_1,\dots,X_n$ and selects one of them is at least an $\alpha$ fraction of the maximum value in the sequence. We obtain three distinct improvements for a setting that was first studied by Correa et al. (EC, 2019) and is particularly relevant to modern applications in algorithmic pricing. In this setting, the random variables are i.i.d. from an unknown distribution and the gambler has access to an additional $\beta n$ samples for some $\beta\geq 0$. We first give improved lower bounds on $\alpha$ for a wide range of values of $\beta$; specifically, $\alpha\geq(1+\beta)/e$ when $\beta\leq 1/(e-1)$, which is tight, and $\alpha\geq 0.648$ when $\beta=1$, which improves on a bound of around $0.635$ due to Correa et al. (SODA, 2020). Adding to their practical appeal, specifically in the context of algorithmic pricing, we then show that the new bounds can be obtained even in a streaming model of computation, and thus in situations where the use of relevant data is complicated by the sheer amount of data available. We finally establish that the upper bound of $1/e$ for the case without samples is robust to additional information about the distribution, and applies also to sequences of i.i.d. random variables whose distribution is itself drawn, according to a known distribution, from a finite set of known candidate distributions. This implies a tight prophet inequality for exchangeable sequences of random variables, answering a question of Hill and Kertz (Contemporary Mathematics, 1992), but leaves open the possibility of better guarantees when the number of candidate distributions is small, a setting we believe is of strong interest to applications.
We take a unifying approach to single selection optimal stopping problems with random arrival order and independent sampling of items. In the problem we consider, a decision maker (DM) initially gets to sample each of $N$ items independently with probability $p$, and can observe the relative rankings of these sampled items. Then, the DM faces the remaining items in an online fashion, observing the relative rankings of all revealed items. While scanning the sequence the DM makes irrevocable stop/continue decisions and her reward for stopping the sequence facing the item with rank $i$ is $Y_i$. The goal of the DM is to maximize her reward. We start by studying the case in which the values $Y_i$ are known to the DM, and then move to the case in which these values are adversarial. For the former case, we write the natural linear program that captures the performance of an algorithm, and take its continuous limit. We prove a structural result about this continuous limit, which allows us to reduce the problem to a relatively simple real optimization problem. We establish that the optimal algorithm is given by a sequence of thresholds $t_1\le t_2\le\cdots$ such that the DM should stop if seeing an item with current ranking $i$ after time $t_i$. Additionally, we are able to recover several classic results in the area, such as those for the secretary problem and the minimum ranking problem. For the adversarial case, we obtain a similar linear program with an additional stochastic dominance constraint. Using the same machinery we are able to pin down the optimal competitive ratios for all values of $p$. Notably, we prove that as $p$ approaches 1, our guarantee converges linearly to 0.745, matching that of the i.i.d. prophet inequality. Also interesting is the case $p=1/2$, where our bound evaluates to $0.671$, which improves upon the state of the art.
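The structure of the optimal algorithm, stop at an item of current rank $i$ only after time $t_i$, can be mimicked directly. In the sketch below the thresholds are illustrative placeholders, since the optimal values come from the paper's continuous optimization problem:

```python
def time_threshold_rule(sequence, t):
    """Stop on the item arriving at relative time s = idx/n if its current
    rank among the items seen so far is i and s > t[i-1]; the thresholds t
    must be nondecreasing. Returns the chosen value, or None if we never stop."""
    n, seen = len(sequence), []
    for idx, v in enumerate(sequence, start=1):
        seen.append(v)
        rank = sorted(seen, reverse=True).index(v) + 1  # current rank of v
        if rank <= len(t) and idx / n > t[rank - 1]:
            return v
    return None

# Placeholder thresholds (illustrative only): accept a current best after
# 37% of the sequence, a current second-best after 60%, and so on.
print(time_threshold_rule([3, 1, 4, 1, 5, 9, 2, 6], t=[0.37, 0.60, 0.80]))
```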
The secretary problem or the game of Googol are classic models for online selection problems that have received significant attention in the last five decades. We consider a variant of the problem and explore its connections to data-driven online selection. Specifically, we are given $n$ cards with arbitrary non-negative numbers written on both sides. The cards are randomly placed on $n$ consecutive positions on a table, and for each card, the visible side is also selected at random. The player sees the visible side of all cards and wants to select the card with the maximum hidden value. To this end, the player flips the first card, sees its hidden value and decides whether to pick it or drop it and continue with the next card.
We study algorithms for two natural objectives. In the first one, as in the secretary problem, the player wants to maximize the probability of selecting the maximum hidden value. We show that this can be done with probability at least $0.45292$. In the second one, similar to the prophet inequality, the player maximizes the expectation of the selected hidden value. We show a guarantee of at least $0.63518$ with respect to the expected maximum hidden value.
Our algorithms result from combining three basic strategies. One is to stop whenever we see a value larger than the initial $n$ visible numbers. The second one is to stop the first time the last flipped card's value is the largest of the currently $n$ visible numbers in the table. And the third one is similar to the latter but it additionally requires that the last flipped value is larger than the value on the other side of its card.
We apply our results to the prophet secretary problem with unknown distributions, but with access to a single sample from each distribution. Our guarantee improves upon $1-1/e$ for this problem, which is the currently best known guarantee and only works for the i.i.d. case.
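A minimal sketch of how such strategies operate (the data layout and this particular OR-combination of the rules are our own illustration, not the paper's exact algorithm):

```python
import random

def flip_and_stop(cards):
    """cards: list of (hidden, visible) pairs in table order. One illustrative
    combination of the rules above: pick the flipped card if its value beats
    all n initially visible numbers, or if it is the largest value currently
    on the table and also beats the other side of its own card."""
    initial_max = max(visible for _, visible in cards)
    on_table = [visible for _, visible in cards]
    for i, (hidden, visible) in enumerate(cards):
        on_table[i] = hidden  # flipping the card reveals its hidden side
        if hidden > initial_max or (hidden == max(on_table) and hidden > visible):
            return hidden
    return cards[-1][0]  # forced to keep the last card

rng = random.Random(3)
print(flip_and_stop([(rng.random(), rng.random()) for _ in range(10)]))
```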
In the classic prophet inequality, a problem in optimal stopping theory, samples from independent random variables (possibly differently distributed) arrive online. A gambler that knows the distributions, but cannot see the future, must decide at each point in time whether to stop and pick the current sample or to continue and lose that sample forever. The goal of the gambler is to maximize the expected value of what she picks and the performance measure is the worst case ratio between the expected value the gambler gets and what a prophet, that sees all the realizations in advance, gets. In the late seventies, Krengel and Sucheston, and Garling [16], established that this worst case ratio is a constant and that 1/2 is the best possible such constant. In the last decade the theory of prophet inequalities has resurged as an important problem due to its connections to posted price mechanisms, frequently used in online sales. A particularly interesting variant is the so-called Prophet Secretary problem, in which the only difference is that the samples arrive in a uniformly random order. For this variant several algorithms are known to achieve a constant of 1 − 1/e and very recently this barrier was slightly improved by Azar et al. [3]. In this paper we derive a way of analyzing multi-threshold strategies that basically set a nonincreasing sequence of thresholds to be applied at different times. The gambler will thus stop the first time a sample surpasses the corresponding threshold. Specifically, we consider a class of very robust strategies that we call blind quantile strategies. These constitute a clever generalization of single threshold strategies and consist in fixing a function which is used to define a sequence of thresholds once the instance is revealed. Our main result shows that these strategies can achieve a constant of 0.669 in the Prophet Secretary problem, improving upon the best known result of Azar et al. [3], and even that of Beyhaghi et al. [4], which works in the case where the gambler can select the order of the samples. The crux is a very precise analysis of the underlying stopping time distribution for the gambler's strategy, inspired by the theory of Schur convex functions. We further prove that our family of blind strategies cannot lead to a constant better than 0.675. Finally, we prove that no nonadaptive algorithm for the gambler can achieve a constant better than 0.732, which also improves upon a recent result of Azar et al. [3]. Here, a nonadaptive algorithm is an algorithm whose decision to stop can depend on the index of the random variable being sampled, on the value sampled, and on the time, but not on the history that has been observed.
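A plausible rendering of a blind quantile strategy, under the assumption that the time-$t$ threshold is calibrated so the maximum exceeds it with probability $q(t)$ for a profile $q$ fixed in advance (the paper's exact calibration may differ):

```python
def blind_thresholds(q, n, F_max_inv):
    """Fix a nondecreasing profile q on [0,1] in advance; once the instance
    is revealed, set the time-t threshold to the value that the maximum
    exceeds with probability q(t), i.e. F_max_inv(1 - q(t)). This yields a
    nonincreasing threshold sequence, as required of a blind strategy."""
    return [F_max_inv(1 - q((i + 1) / n)) for i in range(n)]

def run_gambler(values, thresholds):
    """The gambler stops the first time a sample surpasses its threshold."""
    for v, tau in zip(values, thresholds):
        if v >= tau:
            return v
    return 0.0

# Example: n i.i.d. Uniform(0,1) samples, so F_max(x) = x**n and
# F_max_inv(u) = u**(1/n); the linear profile q(t) = t is illustrative only.
n = 10
ts = blind_thresholds(lambda t: t, n, lambda u: u ** (1 / n))
print([round(x, 3) for x in ts])
```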
The secretary problem and the game of Googol are classic models for online selection problems that have received significant attention in the last five decades. We consider a variant of the problem and explore its connections to data-driven online selection. Specifically, we are given $n$ cards with arbitrary non-negative numbers written on both sides. The cards are randomly placed on $n$ consecutive positions on a table, and for each card, the visible side is also selected at random. The player sees the visible side of all cards and wants to select the card with the maximum hidden value. To this end, the player flips the first card, sees its hidden value and decides whether to pick it or drop it and continue with the next card. We study algorithms for two natural objectives. In the first one, as in the secretary problem, the player wants to maximize the probability of selecting the maximum hidden value. We show that this can be done with probability at least $0.45292$. In the second one, similar to the prophet inequality, the player maximizes the expectation of the selected hidden value. We show a guarantee of at least $0.63518$ with respect to the expected maximum hidden value. Our algorithms result from combining three basic strategies. One is to stop whenever we see a value larger than all of the initial $n$ visible numbers. The second one is to stop the first time the last flipped card's value is the largest of the $n$ numbers currently visible on the table. And the third one is similar to the latter, but additionally requires that the last flipped value is larger than the value on the other side of its card. We apply our results to the prophet secretary problem with unknown distributions, but with access to a single sample from each distribution. Our guarantee improves upon $1-1/e$ for this problem, which is the currently best known guarantee and only works for the i.i.d. case.
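As a concrete illustration, here is a sketch of the third strategy above, under our simplifying assumption that a flipped card stays on the table showing its revealed side; the names fronts and backs are ours.

    import random

    def flip_and_pick(fronts, backs):
        """Accept the card just flipped if its hidden value is the largest of
        the n values currently visible and also beats its own other side."""
        n = len(fronts)
        visible = list(fronts)
        for i in range(n - 1):
            visible[i] = backs[i]        # flipping card i reveals its hidden side
            if visible[i] == max(visible) and backs[i] > fronts[i]:
                return backs[i]
        return backs[-1]                 # forced to take the last card

    rng = random.Random(1)
    n = 10
    fronts, backs = [], []
    for _ in range(n):
        a, b = rng.random(), rng.random()
        show_a = rng.random() < 0.5      # the visible side is chosen at random
        fronts.append(a if show_a else b)
        backs.append(b if show_a else a)
    print(flip_and_pick(fronts, backs), max(backs))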
A central object in optimal stopping theory is the single-choice prophet inequality for independent, identically distributed random variables: Given a sequence of random variables $X_1,\dots,X_n$ drawn independently from a distribution $F$, the goal is to choose a stopping time $\tau$ so as to maximize $\alpha$ such that for all distributions $F$ we have $\mathbb{E}[X_\tau] \geq \alpha \cdot \mathbb{E}[\max_tX_t]$. What makes this problem challenging is that the decision whether $\tau=t$ may only depend on the values of the random variables $X_1,\dots,X_t$ and on the distribution $F$. For quite some time the best known bound for the problem was $\alpha\geq1-1/e\approx0.632$ [Hill and Kertz, 1982]. Only recently this bound was improved by Abolhassani et al. [2017], and a tight bound of $\alpha\approx0.745$ was obtained by Correa et al. [2017].
The case where $F$ is unknown, such that the decision whether $\tau=t$ may depend only on the values of the first $t$ random variables but not on $F$, is equally well motivated (e.g., [Azar et al., 2014]) but has received much less attention. A straightforward guarantee for this case of $\alpha\geq1/e\approx0.368$ can be derived from the solution to the secretary problem. Our main result is that this bound is tight. Motivated by this impossibility result we investigate the case where the stopping time may additionally depend on a limited number of samples from~$F$. An extension of our main result shows that even with $o(n)$ samples $\alpha\leq 1/e$, so that the interesting case is the one with $\Omega(n)$ samples. Here we show that $n$ samples allow for a significant improvement over the secretary problem, while $O(n^2)$ samples are equivalent to knowledge of the distribution: specifically, with $n$ samples $\alpha\geq1-1/e\approx0.632$ and $\alpha\leq\ln(2)\approx0.693$, and with $O(n^2)$ samples $\alpha\geq0.745-\epsilon$ for any $\epsilon>0$.
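The secretary-based rule behind the $\alpha\geq1/e$ guarantee mentioned above is short enough to sketch directly; the exponential test instance is an arbitrary choice for illustration.

    import math
    import numpy as np

    def secretary_rule(values):
        """Observe the first ~n/e values without stopping, then accept the
        first value beating everything seen so far (or the last value)."""
        n = len(values)
        k = max(1, math.floor(n / math.e))   # length of the observation phase
        benchmark = max(values[:k])
        for v in values[k:-1]:
            if v > benchmark:
                return v
        return values[-1]

    rng = np.random.default_rng(42)
    n, trials = 50, 100_000
    got = np.mean([secretary_rule(rng.exponential(size=n)) for _ in range(trials)])
    best = np.mean(rng.exponential(size=(trials, n)).max(axis=1))
    print(f"empirical ratio: {got / best:.3f}")  # worst-case guarantee is 1/e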
In the classic prophet inequality, samples from independent random variables arrive online. A gambler that knows the distributions must decide at each point in time whether to stop and pick the current sample or to continue and lose that sample forever. The goal of the gambler is to maximize the expected value of what she picks and the performance measure is the worst case ratio between the expected value the gambler gets and what a prophet, that sees all the realizations in advance, gets. In the late seventies, Krengel and Sucheston, and Garling (1977) established that this worst case ratio is a universal constant equal to 1/2. In the last decade prophet inequalities have resurged as an important problem due to their connections to posted price mechanisms, frequently used in online sales. A very interesting variant is the Prophet Secretary problem, in which the only difference is that the samples arrive in a uniformly random order. For this variant several algorithms achieve a constant of 1-1/e and very recently this barrier was slightly improved. This paper analyzes strategies that set a nonincreasing sequence of thresholds to be applied at different times. The gambler stops the first time a sample surpasses the corresponding threshold. Specifically we consider a class of strategies called blind quantile strategies. They consist in fixing a function which is used to define a sequence of thresholds once the instance is revealed. Our main result shows that they can achieve a constant of 0.665, improving upon the best known result of Azar et al. (2018), and on Beyhaghi et al. (2018) (order selection). Our proof analyzes precisely the underlying stopping time distribution, relying on Schur-convexity theory. We further prove that blind strategies cannot achieve better than 0.675. Finally we prove that no nonadaptive algorithm for the gambler can achieve better than 0.732.
Public transit systems in major urban areas usually operate under deficits and therefore require significant subsidies. An important cause of this deficit, particularly in the developing world, is the high fare evasion rate mainly due to an ineffective control policy or the lack of it. In this paper we study new models for optimizing fare inspection strategies in transit networks based on bilevel programming. In the first level, the leader (the network operator) determines probabilities for inspecting passengers at different locations, while in the second level, the followers (the fare-evading passengers) respond by optimizing their routes given the inspection probabilities and travel times. To model the followers' behavior we study both a nonadaptive variant, in which passengers select a path a priori and continue along it throughout their journey, and an adaptive variant, in which they gain information along the way and use it to update their route. For these problems, which are interesting in their own right, we design exact and approximation algorithms, and we prove a tight bound of 3/4 on the ratio of the optimal cost between adaptive and nonadaptive strategies. For the leader's optimization problem, we study a fixed-fare and a flexible-fare variant, where ticket prices may or may not be set at the operator's will. For the latter variant, we design an LP-based approximation algorithm. Finally, employing a local search procedure that shifts inspection probabilities within an initially determined support set, we perform an extensive computational study for all variants of the problem on instances of the Dutch railway and the Amsterdam subway network. This study reveals that our solutions are within 5% of theoretical upper bounds drawn from the LP relaxation. We also derive exact nonlinear programming formulations for all variants of the leader's problem and use them to obtain exact solutions for small instance sizes.
After a sequence of improvements, Boyd et al. [TSP on cubic and subcubic graphs, Integer Programming and Combinatorial Optimization, Lecture Notes in Comput. Sci. 6655, Springer, Heidelberg, 2011, pp. 65--77] proved that any 2-connected graph whose $n$ vertices have degree 3, i.e., a cubic 2-connected graph, has a Hamiltonian tour of length at most (4/3)n, establishing in particular that the integrality gap of the subtour LP is at most 4/3 for cubic 2-connected graphs and matching the conjectured value of the famous 4/3 conjecture. In this paper we improve upon this result by designing an algorithm that finds a tour of length (4/3 - 1/61236)n, implying that cubic 2-connected graphs are among the few interesting classes of graphs for which the integrality gap of the subtour LP is strictly less than 4/3. With the previous result, and by considering an even smaller $\epsilon$, we show that the integrality gap of the TSP relaxation is at most $4/3 - \epsilon$ even if the graph is not 2-connected (i.e., for cubic connected graphs), implying that the approximability threshold of the TSP in cubic graphs is strictly below 4/3. Finally, using similar techniques we show, as an additional result, that every Barnette graph admits a tour of length at most (4/3 - 1/18)n.
Public transit systems in urban areas usually require large state subsidies, primarily due to high fare evasion rates. In this paper, we study new models for optimizing fare inspection strategies in transit networks based on bilevel programming. In the first level, the leader (the network operator) determines probabilities for inspecting passengers at different locations, while in the second level, the followers (the fare-evading passengers) respond by optimizing their routes given the inspection probabilities and travel times. To model the followers' behavior we study both a non-adaptive variant, in which passengers select a path a priori and continue along it throughout their journey, and an adaptive variant, in which they gain information along the way and use it to update their route. For these problems, which are interesting in their own right, we design exact and approximation algorithms, and we prove a tight bound of 3/4 on the ratio of the optimal cost between adaptive and non-adaptive strategies. For the leader's optimization problem, we study a fixed-fare and a flexible-fare variant, where ticket prices may or may not be set at the operator's will. For the latter variant, we design an LP-based approximation algorithm. Finally, using a local search procedure that shifts inspection probabilities within an initially determined support set, we perform an extensive computational study for all variants of the problem on instances of the Dutch railway and the Amsterdam subway network. This study reveals that our solutions are within 95% of theoretical upper bounds drawn from the LP relaxation.
This paper continues the study of equilibria for flows over time in the fluid queueing model recently considered by Koch and Skutella [10]. We provide a constructive proof for the existence and uniqueness of equilibria in the case of a single origin-destination with piecewise constant inflow rates, through a detailed analysis of the static flows obtained as derivatives of a dynamic equilibrium. We also give a nonconstructive existence proof of equilibria when the inflow rates belong to $L^p$ including the extension to multiple origin-destinations.
After a sequence of improvements, Boyd, Sitters, van der Ster, and Stougie proved that any 2-connected graph whose n vertices have degree 3, i.e., a cubic 2-connected graph, has a Hamiltonian tour of length at most (4/3)n, establishing in particular that the integrality gap of the subtour LP is at most 4/3 for cubic 2-connected graphs and matching the conjectured value of the famous 4/3 conjecture. In this paper we improve upon this result by designing an algorithm that finds a tour of length (4/3 - 1/61236)n, implying that cubic 2-connected graphs are among the few interesting classes of graphs for which the integrality gap of the subtour LP is strictly less than 4/3. With the previous result, and by considering an even smaller epsilon, we show that the integrality gap of the TSP relaxation is at most 4/3 - epsilon, even if the graph is not 2-connected (i.e., for cubic connected graphs), implying that the approximability threshold of the TSP in cubic graphs is strictly below 4/3. Finally, using similar techniques we show, as an additional result, that every Barnette graph admits a tour of length at most (4/3 - 1/18)n.
Finding a maximum independent set (MIS) of a given family of axis-parallel rectangles is a basic problem in computational geometry and combinatorics. This problem has attracted significant attention since the sixties, when Wegner conjectured that the corresponding duality gap, i.e., the maximum possible ratio between the maximum independent set and the minimum hitting set (MHS), is bounded by a universal constant. An interesting special case, which may prove useful in tackling the general problem, is the diagonal-intersecting case, in which the given family of rectangles is intersected by a diagonal. Indeed, Chepoi and Felsner recently gave a factor 6 approximation algorithm for MHS in this setting, and showed that the duality gap is between 3/2 and 6. In this paper we improve upon these results. First we show that MIS in diagonal-intersecting families is NP-complete, providing a smallest subclass for which MIS is provably hard. Then, we derive an $O(n^2)$-time algorithm for the maximum weight independent set when, in addition, the rectangles intersect below the diagonal. This improves and extends a classic result of Lubiw, and amounts to obtaining a 2-approximation algorithm for the maximum weight independent set of rectangles intersecting a diagonal. Finally, we prove that for diagonal-intersecting families the duality gap is between 2 and 4. The upper bound, which implies an approximation algorithm of the same factor, follows from a simple combinatorial argument, while the lower bound represents the best known lower bound on the duality gap, even in the general case.
We study policies aiming to minimize the weighted sum of completion times of jobs in the context of coordination mechanisms for selfish scheduling problems. Our goal is to design local policies that achieve a good price of anarchy in the resulting equilibria for unrelated machine scheduling. To obtain the approximation bounds, we introduce a new technique that, while conceptually simple, seems to be quite powerful. With this method we are able to prove the following results.
First, we consider Smith's Rule, which orders the jobs on a machine in ascending processing time to weight ratio, and show that it achieves an approximation ratio of 4. We also demonstrate that this is the best possible for deterministic non-preemptive strongly local policies. Since Smith's Rule is always optimal for a given assignment, this may seem unsurprising, but we then show that better approximation ratios can be obtained if either preemption or randomization is allowed.
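For concreteness, a minimal sketch of Smith's Rule on a single machine follows; the job data are illustrative.

    def smiths_rule(jobs):
        """Sequence (processing_time, weight) jobs by ascending p/w ratio and
        return the weighted sum of completion times; this ordering is optimal
        for a fixed set of jobs on one machine."""
        t, total = 0.0, 0.0
        for p, w in sorted(jobs, key=lambda job: job[0] / job[1]):
            t += p              # completion time of this job
            total += w * t
        return total

    # The short job has ratio 1.0, the long heavy job ratio 2.0, so the
    # short job goes first: cost = 1*1 + 2*(1+4) = 11.
    print(smiths_rule([(4.0, 2.0), (1.0, 1.0)]))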
We prove that ProportionalSharing, a preemptive strongly local policy, achieves an approximation ratio of 2.618 for the weighted sum of completion times, and an approximation ratio of 2.5 in the unweighted case. Again, we observe that these bounds are tight. Next, we consider Rand, a natural non-preemptive but randomized policy. We show that it achieves an approximation ratio of at most 2.13; moreover, if the sum of the weighted completion times is negligible compared to the cost of the optimal solution, this improves to $\pi/2$.
Finally, we show that both ProportionalSharing and Rand induce potential games, and thus always have a pure Nash equilibrium (unlike Smith's Rule). This also allows us to design the first \emph{combinatorial} constant-factor approximation algorithm minimizing weighted completion time for unrelated machine scheduling that achieves a factor of $2+\epsilon$ for any $\epsilon > 0$.
Implicitly defined (and easily approximated) universal constants $1.1 < a_n < 1.6, n = 2,3, \cdots$, are found so that if $X_1, X_2, \cdots$ are i.i.d. non-negative random variables and if $T_n$ is the set of stop rules for $X_1, \cdots, X_n$, then $E(\max\{X_1, \cdots, X_n\}) \leq a_n \sup\{EX_t: t \in T_n\}$, and the bound $a_n$ is best possible. Similar universal constants $0 < b_n < \frac{1}{4}$ are found so that if the $\{X_i\}$ are i.i.d. random variables taking values only in $\lbrack a, b\rbrack$, then $E(\max\{X_1, \cdots, X_n\}) \leq \sup\{EX_t: t \in T_n\} + b_n(b - a)$, where again the bound $b_n$ is best possible. In both situations, extremal distributions for which equality is attained (or nearly attained) are given in implicit form.
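Both sides of the inequality defining $a_n$ can be computed exactly for a small discrete distribution: $\sup\{EX_t: t \in T_n\}$ by backward induction and $E(\max)$ from the CDF. The three-point instance below is our own example.

    import numpy as np

    v = np.array([0.0, 1.0, 5.0])    # support of the i.i.d. distribution
    p = np.array([0.6, 0.3, 0.1])    # probabilities
    n = 10

    # Optimal stopping via backward induction: with k draws remaining,
    # accept x iff x >= V_{k-1}, so V_k = E[max(X, V_{k-1})] and V_0 = 0.
    V = 0.0
    for _ in range(n):
        V = float(np.sum(p * np.maximum(v, V)))

    # E[max of n draws]: P(max <= v_i) = F(v_i)^n.
    F = np.cumsum(p)
    pmf_max = F ** n - np.concatenate(([0.0], F[:-1] ** n))
    E_max = float(np.sum(pmf_max * v))

    print(f"prophet {E_max:.4f}, stopper {V:.4f}, ratio {E_max / V:.4f} (<= a_n)")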
The secretary and the prophet inequality problems are central to the field of Stopping Theory. Recently, there has been a lot of work in generalizing these models to multiple items because of their applications in mechanism design. The most important of these generalizations are to matroids and to combinatorial auctions (extends bipartite matching). Kleinberg-Weinberg [33] and Feldman et al. [17] show that for adversarial arrival order of random variables the optimal prophet inequalities give a 1/2-approximation. For many settings, however, it's conceivable that the arrival order is chosen uniformly at random, akin to the secretary problem. For such a random arrival model, we improve upon the 1/2-approximation and obtain (1 - 1/e)-approximation prophet inequalities for both matroids and combinatorial auctions. This also gives improvements to the results of Yan [45] and Esfandiari et al. [15] who worked in the special cases where we can fully control the arrival order or when there is only a single item. Our techniques are threshold-based. We convert our discrete problem into a continuous setting and then give a generic template on how to dynamically adjust these thresholds to lower bound the expected total welfare.
Hill and Kertz studied the prophet inequality on iid distributions [The Annals of Probability 1982]. They proved a theoretical bound of 1 - 1/e on the approximation factor of their algorithm. They conjectured that the best approximation factor for arbitrarily large n is 1/(1+1/e) ≃ 0.731. This conjecture remained open prior to this paper for over 30 years. In this paper we present a threshold-based algorithm for the prophet inequality with n iid distributions. Using a nontrivial and novel approach we show that our algorithm is a 0.738-approximation algorithm. By beating the bound of 1/(1+1/e), this refutes the conjecture of Hill and Kertz. Moreover, we generalize our results to non-uniform distributions and discuss their applications in mechanism design.
Let $X_i \geq 0$ be independent, $i = 1, \cdots, n$, and $X^\ast_n = \max(X_1, \cdots, X_n)$. Let $t(c)$ (respectively $s(c)$) be the threshold stopping rule for $X_1, \cdots, X_n$, defined by $t(c) = \min\{i : X_i \geq c\}$ (respectively $s(c) = \min\{i : X_i > c\}$), and $= n$ if no such $i$ exists. Let $m$ be a median of the distribution of $X^\ast_n$. It is shown that for every $n$ and $\underline{X}$ either $E X^\ast_n \leq 2 E X_{t(m)}$ or $E X^\ast_n \leq 2 E X_{s(m)}$. This improves previously known results [1], [4]. Some results for i.i.d. $X_i$ are also included.
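A quick simulation illustrates the median-threshold rule $t(m)$ and the factor-2 bound; the i.i.d. exponential instance is an assumption for illustration, while the theorem allows arbitrary independent nonnegative $X_i$.

    import numpy as np

    rng = np.random.default_rng(7)
    n, trials = 5, 200_000
    X = rng.exponential(size=(trials, n))
    m = np.quantile(X.max(axis=1), 0.5)      # median of the distribution of X*_n

    # t(m): stop at the first X_i >= m, taking X_n if no value qualifies.
    hit = X >= m
    idx = np.where(hit.any(axis=1), hit.argmax(axis=1), n - 1)
    reward = X[np.arange(trials), idx]

    print(f"E[X*_n] = {X.max(axis=1).mean():.3f}, "
          f"2 E[X_t(m)] = {2 * reward.mean():.3f}")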
The setting of the classic prophet inequality is as follows: a gambler is shown the probability distributions of $n$ independent, non-negative random variables with finite expectations. In their indexed order, a value is drawn from each distribution, and after every draw the gambler may choose to accept the value and end the game, or discard the value permanently and continue the game. What is the best performance that the gambler can achieve in comparison to a prophet who can always choose the highest value? Krengel, Sucheston, and Garling solved this problem in 1978, showing that there exists a strategy for which the gambler can achieve half as much reward as the prophet in expectation. Furthermore, this result is tight. In this work, we consider a setting in which the gambler is allowed much less information. Suppose that the gambler can only take one sample from each of the distributions before playing the game, instead of knowing the full distributions. We provide a simple and intuitive algorithm that recovers the original approximation of $\frac{1}{2}$. Our algorithm works against even an almighty adversary who always chooses a worst-case ordering, rather than the standard offline adversary. The result also has implications for mechanism design -- there is much interest in designing competitive auctions with a finite number of samples from value distributions rather than full distributional knowledge.
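As we read it, the simple algorithm posts a single threshold equal to the largest of the advance samples; here is a sketch under that reading, with an arbitrary family of exponential distributions for illustration.

    import numpy as np

    def single_sample_rule(samples, values):
        """Accept the first online value that weakly beats the largest of the
        advance samples; return 0.0 if no value is accepted."""
        tau = max(samples)
        for v in values:
            if v >= tau:
                return v
        return 0.0

    rng = np.random.default_rng(3)
    n, trials = 10, 100_000
    scales = np.arange(1.0, n + 1.0)         # X_i ~ Exp with scale i (illustrative)
    got = best = 0.0
    for _ in range(trials):
        samples = rng.exponential(scales)    # one sample per distribution
        values = rng.exponential(scales)     # the actual online realizations
        got += single_sample_rule(samples, values)
        best += values.max()
    print(f"empirical ratio: {got / best:.3f}  (guarantee: 1/2)")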
Consider a gambler who observes a sequence of independent, non-negative random numbers and is allowed to stop the sequence at any time, claiming a reward equal to the most recent observation. The famous prophet inequality of Krengel, Sucheston, and Garling asserts that a gambler who knows the distribution of each random variable can achieve at least half as much reward, in expectation, as a "prophet" who knows the sampled values of each random variable and can choose the largest one. We generalize this result to the setting in which the gambler and the prophet are allowed to make more than one selection, subject to a matroid constraint. We show that the gambler can still achieve at least half as much reward as the prophet; this result is the best possible, since it is known that the ratio cannot be improved even in the original prophet inequality, which corresponds to the special case of rank-one matroids. Generalizing the result still further, we show that under an intersection of $p$ matroid constraints, the prophet's reward exceeds the gambler's by a factor of at most $O(p)$, and this factor is also tight.
We present a general framework for approximately reducing the mechanism design problem for multiple agents to single agent subproblems in the context of Bayesian combinatorial auctions. Our framework can be applied to any setting which roughly satisfies the following assumptions: (i) agents' types are distributed independently (not necessarily identically), (ii) objective function is additively separable over the agents, and (iii) there are no interagent constraints except for the supply constraints (i.e., that the total allocation of each item should not exceed the supply). Our framework is general in the sense that it makes no direct assumption about agents' valuations, type distributions, or single agent constraints (e.g., budget, incentive compatibility, etc.). We present two generic multiagent mechanisms which use single agent mechanisms as black boxes. If an $\alpha$-approximate single agent mechanism is available for each agent, and assuming no agent ever demands more than $\frac{1}{k}$ of all units of each item, our generic multiagent mechanisms are $\gamma_{k}\alpha$-approximations of the optimal multiagent mechanism, where $\gamma_{k}$ is a constant which is at least $1-\frac{1}{\sqrt{k+3}}$. As a byproduct of our construction, we present a generalization of prophet inequalities where both gambler and prophet are allowed to pick $k$ numbers each to receive a reward equal to their sum. Finally, we use our framework to obtain multiagent mechanisms with improved approximation factor for several settings from the literature.
We extend the standard online worst-case model to accommodate past experience which is available to the online player in many practical scenarios. We do this by revealing a random sample of the adversarial input to the online player ahead of time. The online player competes with the expected optimal value on the part of the input that arrives online. Our model bridges between existing online stochastic models (e.g., items are drawn i.i.d. from a distribution) and the online worst-case model. We also extend in a similar manner (by revealing a sample) the online random-order model. We study the classical secretary problem in our new models. In the worst-case model we present a simple online algorithm with an optimal competitive ratio for any sample size. In the random-order model, we also give a simple online algorithm with an almost tight competitive ratio for small sample sizes. Interestingly, we prove that for a large enough sample, no algorithm can be simultaneously optimal in both the worst-case and random-order models.
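A natural baseline in this sampled model, sketched below, benchmarks against the best sampled value and accepts the first online element beating it. This illustrates the model only; it is not claimed to be the paper's optimal algorithm.

    import random

    def sample_then_pick(sample, online):
        """Use the best sampled value as a benchmark and accept the first
        online element that beats it, falling back to the last element."""
        benchmark = max(sample, default=float("-inf"))
        for v in online[:-1]:
            if v > benchmark:
                return v
        return online[-1]

    rng = random.Random(0)
    values = [rng.random() for _ in range(100)]
    rng.shuffle(values)
    k = 25                                   # size of the revealed sample
    picked = sample_then_pick(values[:k], values[k:])
    print(picked == max(values[k:]))         # did we pick the best online value?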
We study generalizations of the ``Prophet Inequality'' and ``Secretary Problem'', where the algorithm is restricted to an arbitrary downward-closed set system. For 0,1 values, we give O(n)-competitive algorithms for both problems. This is close to the Omega(n/log n) lower bound due to Babaioff, Immorlica, and Kleinberg. For general values, our results translate to O(log(n) log(r))-competitive algorithms, where r is the cardinality of the largest feasible set. This resolves (up to the O(loglog(n) log(r)) factor) an open question posed to us by Bobby Kleinberg.
We present a general framework for stochastic online maximization problems with combinatorial feasibility constraints. The framework establishes prophet inequalities by constructing price-based online approximation algorithms, a natural extension of threshold algorithms for settings beyond binary selection. Our analysis takes the form of an extension theorem: we derive sufficient conditions on prices when all weights are known in advance, then prove that the resulting approximation guarantees extend directly to stochastic settings. Our framework unifies and simplifies much of the existing literature on prophet inequalities and posted price mechanisms, and is used to derive new and improved results for combinatorial markets (with and without complements), multi-dimensional matroids, and sparse packing problems. Finally, we highlight a surprising connection between the smoothness framework for bounding the price of anarchy of mechanisms and our framework, and show that many smooth mechanisms can be recast as posted price mechanisms with comparable performance guarantees.
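The simplest instance of such a price-based algorithm is the classic single-item rule that posts the static price $p = E[\max_i X_i]/2$, which guarantees the accepted value is at least half the prophet's in expectation. A small simulation sketch with an arbitrary instance:

    import numpy as np

    rng = np.random.default_rng(11)
    n, trials = 8, 200_000
    # X_i ~ Uniform[0, i], an arbitrary non-identical instance.
    X = rng.uniform(0.0, np.arange(1.0, n + 1.0), size=(trials, n))
    p = X.max(axis=1).mean() / 2.0           # Monte Carlo estimate of E[max]/2

    # The item goes to the first buyer whose value meets the price.
    hit = X >= p
    accepted = np.where(hit.any(axis=1),
                        X[np.arange(trials), hit.argmax(axis=1)], 0.0)
    print(f"ratio: {accepted.mean() / X.max(axis=1).mean():.3f}  (guarantee: 0.5)")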
In the design and analysis of revenue-maximizing auctions, auction performance is typically measured with respect to a prior distribution over inputs. The most obvious source for such a distribution is past data. The goal of this paper is to understand how much data is necessary and sufficient to guarantee near-optimal expected revenue.
In the ordinal matroid secretary problem (MSP), candidates do not reveal numerical weights, but the decision maker can still discern if a candidate is better than another. An algorithm is $\alpha$ probability-competitive if every element from the optimum appears with probability $1/\alpha$ in the output. This measure is stronger than the standard utility competitiveness. Our main result is the introduction of a technique based on forbidden sets to design algorithms with strong probability-competitive ratios on many matroid classes. We improve upon the guarantees for almost every matroid class considered in the MSP literature. In particular, we achieve probability-competitive ratios of 4 for graphic matroids and of $3\sqrt{3}\approx 5.19$ for laminar matroids. Additionally, we modify Kleinberg's $1+O(\sqrt{1/\rho})$ utility-competitive algorithm for uniform matroids of rank $\rho$ in order to obtain a $1+O(\sqrt{\log\rho/\rho})$ probability-competitive algorithm. We also contribute algorithms for the ordinal MSP on arbitrary matroids.
For selling a single item to agents with independent but non-identically distributed values, the revenue optimal auction is complex. With respect to it, Hartline and Roughgarden (2009) showed that the approximation factor of the second-price auction with an anonymous reserve is between two and four. We consider the more demanding problem of approximating the revenue of the ex ante relaxation of the auction problem by posting an anonymous price (while supplies last) and prove that their worst-case ratio is e. As a corollary, the upper-bound of anonymous pricing or anonymous reserves versus the optimal auction improves from four to $e$. We conclude that, up to an $e$ factor, discrimination and simultaneity are unimportant for driving revenue in single-item auctions.
Optimal stopping theory is a powerful tool for analyzing scenarios such as online auctions in which we generally require optimizing an objective function over the space of stopping rules for an allocation process under uncertainty. Perhaps the most classic problems of stopping theory are the prophet inequality problem and the secretary problem. The classical prophet inequality states that by choosing the same threshold OPT/2 for every step, one can achieve the tight competitive ratio of 0.5. On the other hand, for the basic secretary problem, the optimal strategy achieves the tight competitive ratio of 1/e. In this paper, we introduce Prophet Secretary, a natural combination of the prophet inequality and the secretary problems. An example motivation for our problem is as follows. Consider a seller that has an item to sell on the market to a set of arriving customers. The seller knows the types of customers that may be interested in the item and he has a price distribution for each type: the price offered by a customer of a type is anticipated to be drawn from the corresponding distribution. However, the customers arrive in a random order. Upon the arrival of a customer, the seller makes an irrevocable decision whether to sell the item at the offered price. We address the question of finding a strategy for selling the item at a high price. We show that by using a uniform threshold one cannot break the 0.5 barrier. However, we show that i) using n distinct non-adaptive thresholds one can obtain a competitive ratio that goes to (1-1/e) as n grows; and ii) no online algorithm can achieve a competitive ratio better than 0.75. Our results improve the (asymptotic) approximation guarantee of single-item sequential posted pricing mechanisms from 0.5 to (1-1/e) when the order of agents (customers) is chosen randomly.
We provide a polynomial time 4/3 approximation algorithm for TSP on metrics arising from the metric completion of cubic 3-edge connected graphs.
We provide prophet inequality algorithms for online weighted matching in general (non-bipartite) graphs, under two well-studied arrival models, namely edge arrival and vertex arrival. The weight of each edge is drawn independently from an a-priori known probability distribution. Under edge arrival, the weight of each edge is revealed upon arrival, and the algorithm decides whether to include it in the matching or not. Under vertex arrival, the weights of all edges from the newly arriving vertex to all previously arrived vertices are revealed, and the algorithm decides which of these edges, if any, to include in the matching. To study these settings, we introduce a novel unified framework of batched prophet inequalities that captures online settings where elements arrive in batches; in particular it captures matching under the two aforementioned arrival models. Our algorithms rely on the construction of suitable online contention resolution schemes (OCRS). We first extend the framework of OCRS to batched-OCRS, we then establish a reduction from batched prophet inequality to batched OCRS, and finally we construct batched OCRSs with selectable ratios of 0.337 and 0.5 for edge and vertex arrival models, respectively. Both results improve the state of the art for the corresponding settings. For vertex arrival, our result is tight. Interestingly, pricing-based prophet inequalities with comparable competitive ratios are unknown.
This paper is devoted, in the main, to proving the asymptotic minimax character of the sample distribution function (d.f.) for estimating an unknown d.f. in $\mathscr{F}$ or $\mathscr{F}_c$ (defined in Section 1) for a wide variety of weight functions. Section 1 contains definitions and a discussion of measurability considerations. Lemma 2 of Section 2 is an essential tool in our proofs and seems to be of interest per se; for example, it implies the convergence of the moment generating function of $G_n$ to that of $G$ (definitions in (2.1)). In Section 3 the asymptotic minimax character is proved for a fundamental class of weight functions which are functions of the maximum deviation between estimating and true d.f. In Section 4 a device (of more general applicability in decision theory) is employed which yields the asymptotic minimax result for a wide class of weight functions of this character as a consequence of the results of Section 3 for weight functions of the fundamental class. In Section 5 the asymptotic minimax character is proved for a class of integrated weight functions. A more general class of weight functions for which the asymptotic minimax character holds is discussed in Section 6. This includes weight functions for which the risk function of the sample d.f. is not a constant over $\mathscr{F}_c.$ Most weight functions of practical interest are included in the considerations of Sections 3 to 6. Section 6 also includes a discussion of multinomial estimation problems for which the asymptotic minimax character of the classical estimator is contained in our results. Finally, Section 7 includes a general discussion of minimization of symmetric convex or monotone functionals of symmetric random elements, with special consideration of the "tied-down" Wiener process, and with a heuristic proof of the results of Sections 3, 4, 5, and much of Section 6.
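For readers who want to see the central object concretely, here is a short sketch (our code) of the sample d.f. and its maximum deviation from the true d.f., the quantity on which the fundamental class of weight functions in Section 3 is based. We take F = U[0,1] for simplicity.

```python
import random

# Maximum deviation sup_t |F_n(t) - F(t)| between the sample d.f. F_n and the
# true d.f. F, here F = U[0,1]. For the step function F_n the supremum is
# attained at the order statistics, which the loop below exploits.
random.seed(0)
xs = sorted(random.random() for _ in range(1000))
n = len(xs)
D = max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))
print(f"sup |F_n - F| = {D:.4f}")   # shrinks like c / sqrt(n) for large n
```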
A crucial problem in modern data science is data-driven algorithm design, where the goal is to choose the best algorithm, or algorithm parameters, for a specific application domain. In practice, we often optimize over a parametric algorithm family, searching for parameters with high performance on a collection of typical problem instances. While effective in practice, these procedures generally have not come with provable guarantees. A recent line of work initiated by a seminal paper of Gupta and Roughgarden (2017) analyzes application-specific algorithm selection from a theoretical perspective. We advance this research direction in several important settings. We provide upper and lower bounds on regret for algorithm selection in online settings, where problems arrive sequentially and we must choose parameters online. We also consider differentially private algorithm selection, where the goal is to find good parameters for a set of problems without divulging too much sensitive information contained therein. We analyze several important parameterized families of algorithms, including SDP-rounding schemes for problems formulated as integer quadratic programs as well as greedy techniques for several canonical subset selection problems. The cost function that measures an algorithm's performance is often a volatile piecewise Lipschitz function of its parameters, since a small change to the parameters can lead to a cascade of different decisions made by the algorithm. We present general techniques for optimizing the sum or average of piecewise Lipschitz functions when the underlying functions satisfy a sufficient and general condition called dispersion. Intuitively, a set of piecewise Lipschitz functions is dispersed if no small region contains many of the functions' discontinuities. Using dispersion, we improve over the best-known online learning regret bounds for a variety of problems, prove regret bounds for problems not previously studied, and provide matching regret lower bounds. In the private optimization setting, we show how to optimize performance while preserving privacy for several important problems, providing matching upper and lower bounds on performance loss due to privacy preservation. Though algorithm selection is our primary motivation, we believe the notion of dispersion may be of independent interest. Therefore, we present our results for the more general problem of optimizing piecewise Lipschitz functions. Finally, we uncover dispersion in domains beyond algorithm selection, namely, auction design and pricing, providing online and privacy guarantees for these problems as well.
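As a companion to the online setting, the following sketch (our code) runs the standard exponentially weighted forecaster over a discretized parameter grid, the kind of learner whose regret dispersion-based analyses control. The grid, learning rate, and toy reward functions are our assumptions, not the paper's.

```python
import math
import random

def exp_weights(param_grid, reward_fns, eta=0.5):
    """Exponentially weighted forecaster over a finite parameter grid.

    reward_fns arrive online; each maps a parameter to a reward in [0, 1]
    (e.g., the quality of a parameterized algorithm on one instance).
    Dispersion controls how little is lost by playing a grid instead of the
    continuum; this sketch shows only the online learner itself."""
    weights = [1.0] * len(param_grid)
    total = 0.0
    for f in reward_fns:
        z = sum(weights)
        p = random.choices(range(len(param_grid)), [w / z for w in weights])[0]
        total += f(param_grid[p])
        weights = [w * math.exp(eta * f(x)) for w, x in zip(weights, param_grid)]
    return total

# Toy stream of piecewise Lipschitz rewards, each with a jump at a random point.
fns = [(lambda c: (lambda x: 1.0 if x >= c else 0.5 * x))(random.random())
       for _ in range(200)]
grid = [i / 100 for i in range(101)]
print(exp_weights(grid, fns))
```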
In a scheduling game, each player owns a job and chooses a machine to execute it. While the social cost is the maximal load over all machines (the makespan), the cost (disutility) of each player is the completion time of its own job. In the game, players may follow selfish strategies to optimize their cost, and therefore their behavior does not necessarily lead the game to an equilibrium. Even when an equilibrium exists, its makespan might be much larger than the social optimum, and this inefficiency is measured by the price of anarchy -- the worst ratio between the makespan of an equilibrium and the optimum. Coordination mechanisms aim to reduce the price of anarchy by designing scheduling policies that specify how jobs assigned to the same machine are to be scheduled. Typically these policies define the schedule according to the processing times announced by the jobs. One may wonder whether there are policies that do not require this knowledge and still provide a good price of anarchy. This would allow the processing times to remain private information and would avoid the problem of truthfulness. In this paper we study these so-called non-clairvoyant policies. In particular, we study the RANDOM policy, which schedules the jobs in a random order without preemption, and the EQUI policy, which schedules the jobs in parallel using time-multiplexing, assigning each job an equal fraction of CPU time.
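Both policies admit closed-form completion times on a single machine, which the sketch below (our code) computes: under RANDOM each other job precedes a given job with probability 1/2, and under EQUI (processor sharing) jobs finish in increasing order of processing time.

```python
def random_policy_expected_completions(p):
    """E[C_i] under RANDOM: job i waits for each other job with prob. 1/2."""
    s = sum(p)
    return [pi + 0.5 * (s - pi) for pi in p]

def equi_completions(p):
    """Completion times under EQUI (processor sharing): with k jobs alive,
    each receives a 1/k fraction of the CPU, so jobs finish in increasing
    order of processing time."""
    order = sorted(range(len(p)), key=lambda i: p[i])
    C = [0.0] * len(p)
    t, prev, alive = 0.0, 0.0, len(p)
    for i in order:
        t += alive * (p[i] - prev)   # time until the next-smallest job ends
        C[i], prev, alive = t, p[i], alive - 1
    return C

p = [1.0, 2.0, 4.0]
print(random_policy_expected_completions(p))   # [4.0, 4.5, 5.5]
print(equi_completions(p))                     # [3.0, 5.0, 7.0]
```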
Clustering points in metric spaces is a long-studied area of research. Clustering has seen a multitude of work both theoretically, in understanding the approximation guarantees possible for many objective functions such as k-median and k-means clustering, and experimentally, in finding the fastest algorithms and seeding procedures for Lloyd's algorithm. The performance of a given clustering algorithm depends on the specific application at hand, and this may not be known up front. For example, a typical instance may vary depending on the application, and different clustering heuristics perform differently depending on the instance. In this paper, we define an infinite family of algorithms generalizing Lloyd's algorithm, with one parameter controlling the initialization procedure and another parameter controlling the local search procedure. This family of algorithms includes the celebrated k-means++ algorithm as well as the classic farthest-first traversal algorithm. We design efficient learning algorithms which receive samples from an application-specific distribution over clustering instances and learn a near-optimal clustering algorithm from the class. We show that the best parameters vary significantly across datasets such as MNIST, CIFAR, and mixtures of Gaussians. Our learned algorithms never perform worse than k-means++, and on some datasets we see significant improvements.
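One standard way to interpolate between the seeding procedures named above is d^α sampling, where the next center is drawn with probability proportional to the α-th power of the distance to the nearest chosen center: α = 0 is uniform seeding, α = 2 is k-means++, and α → ∞ approaches farthest-first traversal. The sketch below (our code; the paper's family also parameterizes the local search step) implements the seeding side only.

```python
import random

def d_alpha_seeding(points, k, alpha):
    """Pick k centers by d^alpha sampling: the next center is chosen with
    probability proportional to (distance to nearest chosen center)^alpha."""
    centers = [random.choice(points)]
    while len(centers) < k:
        d = [min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centers) ** 0.5
             for px, py in points]
        weights = [di ** alpha for di in d]
        if sum(weights) == 0:   # all points already coincide with centers
            centers.append(random.choice(points))
        else:
            centers.append(random.choices(points, weights)[0])
    return centers

# Three well-separated Gaussian blobs; compare seedings for several alphas.
pts = [(random.gauss(mx, 0.2), random.gauss(my, 0.2))
       for mx, my in [(0, 0), (3, 0), (0, 3)] for _ in range(50)]
for a in (0, 2, 10):
    print(a, d_alpha_seeding(pts, 3, a))
```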
The secretary problem and the game of Googol are classic models for online selection problems that have received significant attention in the last five decades. In this paper we consider a variant of the problem and explore its connections to data-driven online selection. Specifically, we are given n cards with arbitrary nonnegative numbers written on both sides. The cards are randomly placed on n consecutive positions on a table, and for each card, the visible side is also selected at random. The player sees the visible side of all cards and wants to select the card with the maximum hidden value. To this end, the player flips the first card, sees its hidden value, and decides whether to pick it or drop it and continue with the next card. We study algorithms for two natural objectives. In the first, as in the secretary problem, the player wants to maximize the probability of selecting the maximum hidden value. We show that this can be done with probability at least 0.45292. In the second objective, as in the prophet inequality, the player wants to maximize the expectation of the selected hidden value. Here we show a guarantee of at least 0.63518 with respect to the expected maximum hidden value. Our algorithms result from combining three basic strategies. The first is to stop whenever we see a value larger than all n initially visible numbers. The second is to stop the first time the last flipped card's value is the largest of the n numbers currently visible on the table. The third is similar to the second, but additionally requires, in order to stop, that the last flipped value be larger than the value on the other side of its card. We apply our results to the prophet secretary problem with unknown distributions, but with access to a single sample from each distribution. In particular, our guarantee improves upon 1 − 1/e for this problem, which is the best previously known guarantee and works only for the i.i.d. prophet inequality with samples.
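The three basic strategies are simple enough to simulate directly. The sketch below (our code and rule names) estimates, for each strategy in isolation, the probability of selecting the maximum hidden value.

```python
import random

def play(cards, rule):
    """Flip cards left to right; stop according to `rule`; the last card is
    taken if no earlier stop. Returns the selected hidden value."""
    visible = [card[0] for card in cards]          # sides shown initially
    initial_max = max(visible)
    for i, (shown, hidden) in enumerate(cards):
        visible[i] = hidden                        # flipping reveals the back
        if i == len(cards) - 1 or rule(hidden, shown, visible, initial_max):
            return hidden

rule1 = lambda h, s, vis, m0: h > m0                    # beat initial maximum
rule2 = lambda h, s, vis, m0: h == max(vis)             # largest on the table
rule3 = lambda h, s, vis, m0: h == max(vis) and h > s   # ...and beats own back

random.seed(1)
n, trials = 10, 20000
for name, rule in [("rule1", rule1), ("rule2", rule2), ("rule3", rule3)]:
    wins = 0
    for _ in range(trials):
        vals = random.sample(range(10 * n), 2 * n)      # distinct numbers
        cards = [tuple(vals[2 * i:2 * i + 2]) for i in range(n)]
        best_hidden = max(c[1] for c in cards)
        wins += play(cards, rule) == best_hidden
    print(name, wins / trials)
```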
We present a general framework for stochastic online maximization problems with combinatorial feasibility constraints. The framework establishes prophet inequalities by constructing price-based online approximation algorithms, a natural extension of threshold algorithms for settings beyond binary selection. Our analysis takes the form of an extension theorem: we derive sufficient conditions on prices when all weights are known in advance, then prove that the resulting approximation guarantees extend directly to stochastic settings. Our framework unifies and simplifies much of the existing literature on prophet inequalities and posted price mechanisms and is used to derive new and improved results for combinatorial markets (with and without complements), multidimensional matroids, and sparse packing problems. Finally, we highlight a surprising connection between the smoothness framework for bounding the price of anarchy of mechanisms and our framework, and show that many smooth mechanisms can be recast as posted price mechanisms with comparable performance guarantees.
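The simplest instance of the price-based approach is a single item sold at the static posted price E[max]/2, which classically guarantees half the prophet's welfare. The sketch below (our names and toy distributions) checks this empirically; it illustrates the pricing viewpoint, not the paper's general combinatorial constructions.

```python
import random
import statistics

def posted_price_welfare(samplers, price, trials=20000):
    """Sell one item at a static posted price to buyers arriving in order;
    realized welfare is the value of the first buyer with value >= price."""
    total = 0.0
    for _ in range(trials):
        for s in samplers:
            v = s()
            if v >= price:
                total += v
                break
    return total / trials

samplers = [random.random, lambda: 2 * random.random(), random.random]
e_max = statistics.mean(max(s() for s in samplers) for _ in range(20000))
w = posted_price_welfare(samplers, e_max / 2)
print(f"welfare {w:.3f} vs prophet {e_max:.3f} (ratio {w / e_max:.3f})")
```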
The Travelling Salesman Problem is one of the most fundamental and most studied problems in approximation algorithms. For more than 30 years, the best algorithm known for general metrics has been Christofides's algorithm, with an approximation factor of 3/2, even though the so-called Held-Karp LP relaxation of the problem is conjectured to have an integrality gap of only 4/3. Very recently, significant progress has been made for the important special case of graphic metrics, first by Oveis Gharan et al., and then by Mömke and Svensson. In this paper, we provide an improved analysis of the approach introduced by Mömke and Svensson, yielding a bound of 13/9 on the approximation factor, as well as a bound of 19/12 + ε for any ε > 0 for the more general Travelling Salesman Path Problem in graphic metrics.
Notes on contributors: Man-Duen Choi was brought up in Hong Kong. He received his Ph.D. degree under the guidance of Chandler Davis at Toronto. He taught at the University of California, Berkeley, from 1973 to 1976, and has worked since then at the University of Toronto. His research has been mainly in operator algebras, operator theory, and polynomial rings. He is particularly interested in examples/counterexamples and two-by-two matrix manipulations.
Prophet Inequalities with Limited Information. Pablo D. Azar, Robert Kleinberg, and S. Matthew Weinberg. Proceedings of the 2014 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 1358-1377. DOI: https://doi.org/10.1137/1.9781611973402.100. In the classical prophet inequality, a gambler observes a sequence of stochastic rewards V1, …, Vn and must decide, for each reward Vi, whether to keep it and stop the game or to forfeit it forever and observe the next value Vi+1. The gambler's goal is to obtain a constant fraction of the expected reward that the optimal offline algorithm would get. Recently, prophet inequalities have been generalized to settings where the gambler can choose k items, and, more generally, where he can choose any independent set in a matroid. However, all the existing algorithms require the gambler to know the distribution from which the rewards V1, …, Vn are drawn. This assumption is very strong. Instead, we work with the much weaker assumption that the gambler knows only a few samples from this distribution. We construct the first single-sample prophet inequalities for many settings of interest, whose guarantees all match the best possible asymptotically, even with full knowledge of the distribution. Specifically, we provide a novel single-sample algorithm for the setting where the gambler can choose any k elements, whose analysis is based on random walks with limited correlation. In addition, we provide a black-box method for converting specific types of solutions to the related secretary problem into single-sample prophet inequalities, and apply it to several existing algorithms. Finally, we provide a constant-sample prophet inequality for constant-degree bipartite matchings. We also apply these results to design the first posted-price and multi-dimensional auction mechanisms with limited information in settings with asymmetric bidders. Connections between prophet inequalities and posted-price mechanisms are already known, but applying the existing framework requires knowledge of the underlying distributions, as well as the so-called "virtual values", even when the underlying prophet inequalities do not. We therefore provide an extension of this framework that bypasses virtual values altogether, allowing our mechanisms to take full advantage of the limited information required by our new prophet inequalities.
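For the single-choice case, a one-sample rule in this spirit is to take one sample from each distribution and accept the first reward that beats the sample maximum, a rule known from later work to guarantee half the prophet's value. A small sketch (our code, with toy distributions):

```python
import random

def single_sample_prophet(dists, trials=20000):
    """Single-choice, single-sample rule: draw one sample from each
    distribution, then accept the first reward exceeding the sample max."""
    got, prophet = 0.0, 0.0
    for _ in range(trials):
        threshold = max(d() for d in dists)   # one sample per distribution
        rewards = [d() for d in dists]
        prophet += max(rewards)
        got += next((v for v in rewards if v >= threshold), 0.0)
    return got / trials, prophet / trials

dists = [random.random, lambda: random.random() ** 0.5, lambda: 2 * random.random()]
alg, opt = single_sample_prophet(dists)
print(f"algorithm {alg:.3f}  prophet {opt:.3f}  ratio {alg / opt:.3f}")
```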
We present a framework for approximating the metric TSP based on a novel use of matchings. Traditionally, matchings have been used to add edges in order to make a given graph Eulerian, whereas our approach also allows for the removal of certain edges, leading to a decreased cost. For the TSP on graphic metrics (graph-TSP), the approach yields a 1.461-approximation algorithm with respect to the Held-Karp lower bound. For graph-TSP restricted to a class of graphs that contains graphs of degree at most three as well as claw-free graphs, we show that the integrality gap of the Held-Karp relaxation matches the conjectured ratio 4/3. The framework allows for generalizations in a natural way and also leads to a 1.586-approximation algorithm for the traveling salesman path problem on graphic metrics where the start and end vertices are prespecified.
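For contrast with the edge-removal idea, the traditional add-a-matching use is the parity-correction step of Christofides's algorithm, sketched below (our code, assuming the networkx library is available): add a minimum-weight perfect matching on the odd-degree vertices of a spanning tree, take an Eulerian circuit of the result, and shortcut it.

```python
import networkx as nx

def christofides_tour(G):
    """Classic use of matchings for metric TSP: make the spanning tree
    Eulerian by *adding* a min-weight perfect matching on its odd-degree
    vertices, then shortcut an Eulerian circuit."""
    T = nx.minimum_spanning_tree(G, weight="weight")
    odd = [v for v, deg in T.degree() if deg % 2 == 1]
    # Min-weight perfect matching on the odd vertices (metric => complete).
    M = nx.min_weight_matching(G.subgraph(odd), weight="weight")
    H = nx.MultiGraph(T)
    H.add_edges_from((u, v, {"weight": G[u][v]["weight"]}) for u, v in M)
    # Every vertex of H now has even degree, so H is Eulerian; shortcut it.
    tour, seen = [], set()
    for u, _ in nx.eulerian_circuit(H):
        if u not in seen:
            seen.add(u)
            tour.append(u)
    return tour

# Four points on a line, as a complete metric graph.
pts = {0: 0.0, 1: 1.0, 2: 2.5, 3: 4.0}
G = nx.complete_graph(4)
for u, v in G.edges():
    G[u][v]["weight"] = abs(pts[u] - pts[v])
print(christofides_tour(G))
```

The framework above replaces this add-only step with one that may also delete edges, which is where the improvement on graphic metrics comes from.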
This paper attaches a name to a natural class of combinatorial problems and points out that the class includes many important special cases. One special case, a simple problem of placing nonoverlapping labels on a rectangular map, is shown to be NP-complete.
A method of improved efficiency is given for updating the mean and variance of weighted sampled data when an additional data value is included in the set. Evidence is presented that the method is stable and at least as accurate as the best existing updating method.
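The flavor of such an update is a single pass that maintains the weight sum, the weighted mean, and the weighted sum of squared deviations. The sketch below (our variable names, following the standard weighted extension of Welford's recurrence, in the spirit of the method discussed above) folds one weighted observation into the running state.

```python
def update(state, x, w):
    """Fold a new value x with weight w into (W, mean, M2), where W is the
    weight sum and M2 the weighted sum of squared deviations from the mean."""
    W, mean, M2 = state
    W_new = W + w
    delta = x - mean
    mean_new = mean + (w / W_new) * delta
    M2_new = M2 + w * delta * (x - mean_new)   # uses old and new mean: stable
    return W_new, mean_new, M2_new

state = (0.0, 0.0, 0.0)
data = [(1.0, 2.0), (2.0, 1.0), (4.0, 3.0)]    # (value, weight) pairs
for x, w in data:
    state = update(state, x, w)
W, mean, M2 = state
print(mean, M2 / W)   # weighted mean 2.667 and weighted variance 1.889
```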
We consider the problem of maximizing the revenue raised from tolls set on the arcs of a transportation network, under the constraint that users are assigned to toll-compatible shortest paths. We first prove that this problem is strongly NP-hard. We then provide a polynomial time algorithm with a worst-case precision guarantee of $\frac{1}{2}\log_2 m_T + 1$, where $m_T$ denotes the number of toll arcs. Finally we show that the approximation is tight with respect to a natural relaxation by constructing a family of instances for which the relaxation gap is reached.
In the Maximum Weight Independent Set of Rectangles (MWISR) problem we are given a set of n axis-parallel rectangles in the 2D plane, and the goal is to select a maximum weight subset of pairwise non-overlapping rectangles. Due to many applications, e.g. in data mining, map labeling and admission control, the problem has received a lot of attention by various research communities. We present the first (1 + ε)-approximation algorithm for the MWISR problem with quasipolynomial running time 2^{poly(log n/ε)}. In contrast, the best known polynomial time approximation algorithms for the problem achieve superconstant approximation ratios of O(log log n) (unweighted case) and O(log n / log log n) (weighted case). Key to our results is a new geometric dynamic program which recursively subdivides the plane into polygons of bounded complexity. We provide the technical tools that are needed to analyze its performance. In particular, we present a method of partitioning the plane into small and simple areas such that the rectangles of an optimal solution are intersected in a very controlled manner. Together with a novel application of the weighted planar graph separator theorem due to Arora et al. [4], this allows us to upper bound our approximation ratio by 1 + ε. Our dynamic program is very general and we believe that it will be useful for other settings. In particular, we show that, when parametrized properly, it provides a polynomial time (1 + ε)-approximation for the special case of the MWISR problem when each rectangle is relatively large in at least one dimension. Key to this analysis is a method to tile the plane in order to approximately describe the topology of these rectangles in an optimal solution. This technique might be a useful insight for designing better polynomial time approximation algorithms or even a PTAS for the MWISR problem. In particular, note that our results imply that the MWISR problem is not APX-hard, unless NP ⊆ DTIME(2^{polylog(n)}).
We introduce fast algorithms for selecting a random sample of n records without replacement from a pool of N records, where the value of N is unknown beforehand. The main result of the paper is the design and analysis of Algorithm Z; it does the sampling in one pass using constant space and in O(n(1 + log(N/n))) expected time, which is optimum, up to a constant factor. Several optimizations are studied that collectively improve the speed of the naive version of the algorithm by an order of magnitude. We give an efficient Pascal-like implementation that incorporates these modifications and that is suitable for general use. Theoretical and empirical results indicate that Algorithm Z outperforms current methods by a significant margin.
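For orientation, the naive one-pass baseline that Algorithm Z accelerates is the classic reservoir method (Algorithm R), sketched below (our code). Algorithm Z's speedup comes from drawing the number of records to skip directly, rather than making a random decision for every record.

```python
import random

def reservoir_sample(stream, n):
    """Algorithm R: keep the first n records; thereafter, replace a uniformly
    random slot with record t with probability n/t. Every record ends up in
    the sample with probability n/N, and N need not be known in advance."""
    reservoir = []
    for t, record in enumerate(stream, start=1):
        if t <= n:
            reservoir.append(record)
        else:
            j = random.randrange(t)      # uniform in {0, ..., t-1}
            if j < n:
                reservoir[j] = record
    return reservoir

print(reservoir_sample(iter(range(10**5)), 5))
```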
Let $S$ be a point set in the plane such that each of its elements is colored either red or blue. A matching of $S$ with rectangles is any set of pairwise-disjoint axis-aligned rectangles such that each rectangle contains exactly two points of $S$. Such a matching is monochromatic if every rectangle contains points of the same color, and is bichromatic if every rectangle contains points of different colors. In this paper we study the following two problems: 1. Find a maximum monochromatic matching of $S$ with rectangles. 2. Find a maximum bichromatic matching of $S$ with rectangles. For each problem we provide a polynomial-time approximation algorithm that constructs a matching with at least $1/4$ of the number of rectangles of an optimal matching. We show that the first problem is $\mathsf{NP}$-hard even if either the matching rectangles are restricted to axis-aligned segments or $S$ is in general position, that is, no two points of $S$ share the same $x$ or $y$ coordinate. We further show that the second problem is also $\mathsf{NP}$-hard, even if $S$ is in general position. These $\mathsf{NP}$-hardness results follow by showing that deciding the existence of a perfect matching is $\mathsf{NP}$-complete in each case. The approximation results are based on a relation of our problems to the problem of finding a maximum independent set in a family of axis-aligned rectangles. This paper extends previous work on matching one-colored points with rectangles and squares, and on matching two-colored points with segments. Furthermore, using our techniques, we prove that it is $\mathsf{NP}$-complete to decide the existence of a perfect matching with rectangles in the case where all points have the same color, solving an open problem of Bereg, Mutsanas, and Wolff [CGTA (2009)].
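The geometric building block behind both problems is deciding which pairs are matchable at all: any axis-aligned rectangle covering two points covers their bounding box, so a pair can be matched exclusively iff no third point lies in that (closed) box. A small sketch (our code) computing these compatibility pairs; the approximation algorithms themselves do considerably more on top of this.

```python
from itertools import combinations

def matchable_pairs(points):
    """Pairs (p, q) that some axis-aligned rectangle can contain exclusively:
    the pair is matchable iff no third point lies in their bounding box."""
    def box_empty(p, q):
        x_lo, x_hi = sorted((p[0], q[0]))
        y_lo, y_hi = sorted((p[1], q[1]))
        return all(not (x_lo <= r[0] <= x_hi and y_lo <= r[1] <= y_hi)
                   for r in points if r not in (p, q))
    return [(p, q) for p, q in combinations(points, 2) if box_empty(p, q)]

pts = [(0, 0), (1, 2), (2, 1), (3, 3)]
print(matchable_pairs(pts))   # (0,0)-(3,3) is excluded: its box covers others
```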
The intuition that profit is optimized by maximizing marginal revenue is a guiding principle in microeconomics. In the classical auction theory for agents with quasi-linear utility and single-dimensional preferences, BR89 show that the optimal auction of M81 is in fact optimizing marginal revenue. In particular, Myerson's virtual values are exactly the derivative of an appropriate revenue curve. This paper considers mechanism design in environments where the agents have multi-dimensional and non-linear preferences. Understanding good auctions for these environments is considered to be the main challenge in Bayesian optimal mechanism design. In these environments maximizing marginal revenue may not be optimal, and furthermore, there is sometimes no direct way to implement the marginal revenue maximization mechanism. Our contributions are threefold: we characterize the settings for which marginal revenue maximization is optimal (by identifying an important condition that we call revenue linearity), we give simple procedures for implementing marginal revenue maximization in general, and we show that marginal revenue maximization is approximately optimal. Our approximation factor smoothly degrades in a term that quantifies how far the environment is from an ideal one (i.e., where marginal revenue maximization is optimal). Because the marginal revenue mechanism is optimal for well-studied single-dimensional agents, our generalization immediately extends many approximation results for single-dimensional agents to more general preferences. Finally, one of the biggest open questions in Bayesian algorithmic mechanism design is in developing methodologies that are not brute-force in the size of the agent type space (usually exponential in the dimension for multi-dimensional agents). Our methods identify a subproblem that, e.g., for unit-demand agents with values drawn from product distributions, enables approximation mechanisms that are polynomial in the dimension.
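The single-dimensional fact the paper builds on (BR89's reading of M81) is easy to verify numerically: the virtual value φ(v) = v − (1 − F(v))/f(v) is the derivative of the revenue curve R(q) = q·F⁻¹(1 − q). A sketch (our code) for values uniform on [0, 1]:

```python
# For v ~ U[0,1]: F(v) = v and f(v) = 1, so Myerson's virtual value is
# phi(v) = v - (1 - F(v)) / f(v) = 2v - 1. The revenue curve for selling
# with ex-ante probability q is R(q) = q * F^{-1}(1 - q) = q * (1 - q),
# and R'(q) = 1 - 2q equals phi(v) at the value v = F^{-1}(1 - q) = 1 - q.
def phi(v):              # virtual value for U[0,1]
    return 2 * v - 1

def revenue_curve(q):    # R(q) = q times the price that sells w.p. q
    return q * (1 - q)

eps = 1e-6
for q in (0.2, 0.5, 0.8):
    dR = (revenue_curve(q + eps) - revenue_curve(q - eps)) / (2 * eps)
    print(q, round(dR, 4), phi(1 - q))   # numerical R'(q) matches phi(1 - q)
```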