
All published works (11)

We study the online facility location problem with uniform facility costs in the random-order model. Meyerson's algorithm [FOCS'01] is arguably the most natural and simple online algorithm for the problem with several advantages and appealing properties. Its analysis in the random-order model is one of the cornerstones of random-order analysis beyond the secretary problem. Meyerson's algorithm was shown to be (asymptotically) optimal in the standard worst-case adversarial-order model and 8-competitive in the random-order model. While this bound in the random-order model is the long-standing state-of-the-art, it is not known to be tight, and the true competitive-ratio of Meyerson's algorithm remained an open question for more than two decades. We resolve this question and prove tight bounds on the competitive-ratio of Meyerson's algorithm in the random-order model, showing that it is exactly 4-competitive. Following our tight analysis, we introduce a generic parameterized version of Meyerson's algorithm that retains all the advantages of the original version. We show that the best algorithm in this family is exactly 3-competitive. On the other hand, we show that no online algorithm for this problem can achieve a competitive-ratio better than 2. Finally, we prove that the algorithms in this family are robust to partial adversarial arrival orders.
Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA). Online Weighted Matching with a Sample. Haim Kaplan, David Naori, and Danny Raz. pp. 1247-1272. DOI: https://doi.org/10.1137/1.9781611977073.52. Abstract: We study the greedy-based online algorithm for edge-weighted matching with (one-sided) vertex arrivals in bipartite graphs, and edge arrivals in general graphs. This algorithm was first studied more than a decade ago by Korula and Pál for the bipartite case in the random-order model. While the weighted bipartite matching problem is solved in the random-order model, this is not the case in recent and exciting online models in which the online player is provided with a sample, and the arrival order is adversarial. The greedy-based algorithm is arguably the most natural and practical algorithm to be applied in these models. Despite its simplicity and appeal, and despite being studied in multiple works, the greedy-based algorithm was not fully understood in any of the studied online models, and its actual performance remained an open question for more than a decade. We provide a thorough analysis of the greedy-based algorithm in several online models. For vertex arrivals in bipartite graphs, we characterize the exact competitive-ratio of this algorithm in the random-order model, for any arrival order of the vertices subsequent to the sampling phase (adversarial and random orders in particular). We use it to derive a tight analysis in the recent adversarial-order model with a sample (AOS model) for any sample size, providing the first result in this model beyond the simple secretary problem. Then, we generalize and strengthen the black-box method of converting results in the random-order model to single-sample prophet inequalities, and use it to derive the state-of-the-art single-sample prophet inequality for the problem. Finally, we use our new techniques to analyze the greedy-based algorithm for edge arrivals in general graphs and derive results in all the mentioned online models. In this case as well, we improve upon the state-of-the-art single-sample prophet inequality. Published: 2022. eISBN: 978-1-61197-707-3.
We study the online facility location problem with uniform facility costs in the random-order model. Meyerson's algorithm [FOCS'01] is arguably the most natural and simple online algorithm for the problem with several advantages and appealing properties. Its analysis in the random-order model is one of the cornerstones of random-order analysis beyond the secretary problem. Meyerson's algorithm was shown to be (asymptotically) optimal in the standard worst-case adversarial-order model and $8$-competitive in the random-order model. While this bound in the random-order model is the long-standing state-of-the-art, it is not known to be tight, and the true competitive-ratio of Meyerson's algorithm remained an open question for more than two decades. We resolve this question and prove tight bounds on the competitive-ratio of Meyerson's algorithm in the random-order model, showing that it is exactly $4$-competitive. Following our tight analysis, we introduce a generic parameterized version of Meyerson's algorithm that retains all the advantages of the original version. We show that the best algorithm in this family is exactly $3$-competitive. On the other hand, we show that no online algorithm for this problem can achieve a competitive-ratio better than $2$. Finally, we prove that the algorithms in this family are robust to partial adversarial arrival orders.
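The rule at the heart of Meyerson's algorithm fits in a few lines. Below is a minimal Python sketch of it, assuming a uniform facility cost f and an arbitrary metric dist; the alpha knob is a hypothetical stand-in for the kind of parameter the abstract's parameterized family tunes (the exact parameterization is not given here), with alpha=1 recovering the classic rule.

import math
import random

def meyerson_ofl(clients, f, dist, alpha=1.0):
    """Minimal sketch of Meyerson's online facility location rule.

    clients: client locations in arrival order.
    f: uniform facility-opening cost.
    dist: metric distance function.
    alpha: hypothetical scaling knob; alpha=1 is the classic rule.
    Returns the total (opening + connection) cost."""
    facilities = []
    total = 0.0
    for x in clients:
        # Distance from x to the nearest currently open facility.
        d = min((dist(x, y) for y in facilities), default=math.inf)
        # Open a facility at x with probability min(alpha * d / f, 1);
        # otherwise connect x to its nearest open facility at cost d.
        if random.random() < min(alpha * d / f, 1.0):
            facilities.append(x)
            total += f
        else:
            total += d
    return total

# Example on the line; a uniformly shuffled arrival order matches the
# random-order model studied above.
pts = [random.uniform(0, 100) for _ in range(50)]
print(meyerson_ofl(pts, f=10.0, dist=lambda a, b: abs(a - b)))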
We study the classical online bipartite matching problem: One side of the graph is known and vertices of the other side arrive online. It is well known that when the graph is edge-weighted, and vertices arrive in an adversarial order, no online algorithm has a nontrivial competitive-ratio. To bypass this hurdle we modify the rules such that the adversary still picks the graph but has to reveal a random part (say half) of it to the player. The remaining part is given to the player in an adversarial order. This models practical scenarios in which the online algorithm has some history to learn from. This way of modeling a history was formalized recently by the authors (SODA 20) and was called the AOS model (for Adversarial Online with a Sample). It allows developing online algorithms for the secretary problem that compete even when the secretaries arrive in an adversarial order. Here we use the same model to attack the much more challenging matching problem. We analyze a natural algorithmic framework that decides how to match an arriving vertex $v$ by applying an offline matching algorithm to $v$ and the sample. We get roughly $1/4$ of the maximum weight by applying the offline greedy matching algorithm to the sample and $v$. Our analysis ties the performance of this algorithm to the performance of the offline greedy matching on the online part and we also prove that it is tight. Surprisingly, when replacing greedy with an optimal algorithm for maximum matching, no constant competitive-ratio can be guaranteed when the size of the sample is comparable to the size of the online part. However, when the sample is quadratic in the size of the online part, we do get a competitive-ratio of $1/e$.
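The framework described in the abstract above is easy to sketch: on each arrival, rerun offline greedy matching on the sample plus the arriving vertex v, and commit to v's tentative partner only if that offline vertex is still free. The Python sketch below is a hedged illustration of that framework, not the authors' exact procedure; the helper names (greedy_matching, match_online_vertex) are mine.

def greedy_matching(edges):
    """Offline greedy: scan edges by non-increasing weight and take an
    edge whenever both endpoints are still free. edges: (w, u, v) triples.
    Returns a dict mapping each matched vertex to its partner."""
    match = {}
    for w, u, v in sorted(edges, key=lambda e: e[0], reverse=True):
        if u not in match and v not in match:
            match[u], match[v] = v, u
    return match

def match_online_vertex(v, v_edges, sample_edges, taken):
    """One online step of the framework sketched above: run offline
    greedy on the sample plus v, then copy v's tentative partner if that
    offline vertex has not already been taken by an earlier arrival.

    v_edges: (w, u) pairs from v to offline vertices u.
    sample_edges: (w, u, s) edges between offline and sampled vertices.
    taken: set of offline vertices already matched to online arrivals."""
    tentative = greedy_matching(list(sample_edges) +
                                [(w, u, v) for w, u in v_edges])
    u = tentative.get(v)
    if u is not None and u not in taken:
        taken.add(u)
        return u   # match v to u, gaining the weight of edge (u, v)
    return None    # v stays unmatched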
We study the greedy-based online algorithm for edge-weighted matching with (one-sided) vertex arrivals in bipartite graphs, and edge arrivals in general graphs. This algorithm was first studied more than a decade ago by Korula and Pál for the bipartite case in the random-order model. While the weighted bipartite matching problem is solved in the random-order model, this is not the case in recent and exciting online models in which the online player is provided with a sample, and the arrival order is adversarial. The greedy-based algorithm is arguably the most natural and practical algorithm to be applied in these models. Despite its simplicity and appeal, and despite being studied in multiple works, the greedy-based algorithm was not fully understood in any of the studied online models, and its actual performance remained an open question for more than a decade. We provide a thorough analysis of the greedy-based algorithm in several online models. For vertex arrivals in bipartite graphs, we characterize the exact competitive-ratio of this algorithm in the random-order model, for any arrival order of the vertices subsequent to the sampling phase (adversarial and random orders in particular). We use it to derive a tight analysis in the recent adversarial-order model with a sample (AOS model) for any sample size, providing the first result in this model beyond the simple secretary problem. Then, we generalize and strengthen the black-box method of converting results in the random-order model to single-sample prophet inequalities, and use it to derive the state-of-the-art single-sample prophet inequality for the problem. Finally, we use our new techniques to analyze the greedy-based algorithm for edge arrivals in general graphs and derive results in all the mentioned online models. In this case as well, we improve upon the state-of-the-art single-sample prophet inequality.
We study the greedy-based online algorithm for edge-weighted matching with (one-sided) vertex arrivals in bipartite graphs, and edge arrivals in general graphs. This algorithm was first studied more than a decade ago by Korula and Pál for the bipartite case in the random-order model. While the weighted bipartite matching problem is solved in the random-order model, this is not the case in recent and exciting online models in which the online player is provided with a sample, and the arrival order is adversarial. The greedy-based algorithm is arguably the most natural and practical algorithm to be applied in these models. Despite its simplicity and appeal, and despite being studied in multiple works, the greedy-based algorithm was not fully understood in any of the studied online models, and its actual performance remained an open question for more than a decade. We provide a thorough analysis of the greedy-based algorithm in several online models. For vertex arrivals in bipartite graphs, we characterize the exact competitive-ratio of this algorithm in the random-order model, for any arrival order of the vertices subsequent to the sampling phase (adversarial and random orders in particular). We use it to derive a tight analysis in the recent adversarial-order model with a sample (AOS model) for any sample size, providing the first result in this model beyond the simple secretary problem. Then, we generalize and strengthen the black-box method of converting results in the random-order model to single-sample prophet inequalities, and use it to derive the state-of-the-art single-sample prophet inequality for the problem. Finally, we use our new techniques to analyze the greedy-based algorithm for edge arrivals in general graphs and derive results in all the mentioned online models. In this case as well, we improve upon the state-of-the-art single-sample prophet inequality.
We present a simple unsupervised approach for answer identification in organizational group chat. In recent years, organizational group chat is on the rise, enabling asynchronous text-based collaboration between co-workers in different locations and time zones. Finding answers to questions is often critical for work efficiency. However, group chat is characterized by intertwined conversations and 'always on' availability, making it hard for users to pinpoint answers to questions they care about in real-time or to search for answers in retrospect. In addition, structural and lexical characteristics differ between chat groups, making it hard to find a 'one model fits all' approach. Our Kernel Density Estimation (KDE) based clustering approach, termed Ans-Chat, implicitly learns discussion patterns as a means for answer identification, thus eliminating the need for channel-specific tagging. Empirical evaluation shows that this solution outperforms other approaches.
Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA). Competitive Analysis with a Sample and the Secretary Problem. Haim Kaplan, David Naori, and Danny Raz. pp. 2082-2095. DOI: https://doi.org/10.1137/1.9781611975994.128. Abstract: We extend the standard online worst-case model to accommodate past experience which is available to the online player in many practical scenarios. We do this by revealing a random sample of the adversarial input to the online player ahead of time. The online player competes with the expected optimal value on the part of the input that arrives online. Our model bridges between existing online stochastic models (e.g., items are drawn i.i.d. from a distribution) and the online worst-case model. We also extend in a similar manner (by revealing a sample) the online random-order model. We study the classical secretary problem in our new models. In the worst-case model we present a simple online algorithm with an optimal competitive-ratio for any sample size. In the random-order model, we also give a simple online algorithm with an almost tight competitive-ratio for small sample sizes. Interestingly, we prove that for a large enough sample, no algorithm can be simultaneously optimal in both the worst-case and random-order models. Published: 2020. eISBN: 978-1-61197-599-4.
We extend the standard online worst-case model to accommodate past experience which is available to the online player in many practical scenarios. We do this by revealing a random sample of the adversarial input to the online player ahead of time. The online player competes with the expected optimal value on the part of the input that arrives online. Our model bridges between existing online stochastic models (e.g., items are drawn i.i.d. from a distribution) and the online worst-case model. We also extend in a similar manner (by revealing a sample) the online random-order model. We study the classical secretary problem in our new models. In the worst-case model we present a simple online algorithm with an optimal competitive-ratio for any sample size. In the random-order model, we also give a simple online algorithm with an almost tight competitive-ratio for small sample sizes. Interestingly, we prove that for a large enough sample, no algorithm can be simultaneously optimal in both the worst-case and random-order models.
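To make the sampled setting concrete, here is a natural baseline in Python for the secretary problem with a sample: reveal a random p-fraction of the adversarial input, then accept the first online value beating the sample maximum. This is a sketch of the model's mechanics, not necessarily the paper's optimal rule.

import random

def secretary_with_sample(values, p=0.5):
    """Baseline for the sampled secretary setting above: a random
    p-fraction of the adversarial input is revealed as a sample, and we
    accept the first online value exceeding the sample maximum."""
    values = list(values)
    random.shuffle(values)            # which elements are sampled is uniformly random
    k = int(p * len(values))
    sample, online = values[:k], values[k:]
    threshold = max(sample, default=float("-inf"))
    for v in online:                  # in the AOS model this order is adversarial
        if v > threshold:
            return v                  # irrevocable accept
    return None                       # never accepted; the maximum was in the sample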
We study online multidimensional variants of the generalized assignment problem which are used to model prominent real-world applications, such as the assignment of virtual machines with multiple resource requirements to physical infrastructure in cloud computing. These problems can be seen as an extension of the well-known secretary problem and thus the standard online worst-case model cannot provide any performance guarantee. The prevailing model in this case is the random-order model, which provides a useful, realistic and robust alternative. Using this model, we study the $d$-dimensional generalized assignment problem, where we introduce a novel technique that achieves an $O(d)$-competitive algorithm and prove a matching lower bound of $\Omega(d)$. Furthermore, our algorithm improves upon the best-known competitive-ratio for the online (one-dimensional) generalized assignment problem and the online knapsack problem.
We extend the standard online worst-case model to accommodate past experience which is available to the online player in many practical scenarios. We do this by revealing a random sample of the adversarial input to the online player ahead of time. The online player competes with the expected optimal value on the part of the input that arrives online. Our model bridges between existing online stochastic models (e.g., items are drawn i.i.d. from a distribution) and the online worst-case model. We also extend in a similar manner (by revealing a sample) the online random-order model. We study the classical secretary problem in our new models. In the worst-case model we present a simple online algorithm with an optimal competitive-ratio for any sample size. In the random-order model, we also give a simple online algorithm with an almost tight competitive-ratio for small sample sizes. Interestingly, we prove that for a large enough sample, no algorithm can be simultaneously optimal in both the worst-case and random-order models.

Common Coauthors

Commonly Cited References

A central object in optimal stopping theory is the single-choice prophet inequality for independent, identically distributed random variables: given a sequence of random variables X_1, ..., X_n drawn independently from a distribution F, the goal is to choose a stopping time τ so as to maximize α such that for all distributions F we have E[X_τ] ≥ α·E[max_t X_t]. What makes this problem challenging is that the decision whether τ=t may only depend on the values of the random variables X_1, ..., X_t and on the distribution F. For a long time the best known bound for the problem had been α≥1-1/e≅0.632, but quite recently a tight bound of α≅0.745 was obtained. The case where F is unknown, such that the decision whether τ=t may depend only on the values of the random variables X_1, ..., X_t, is equally well motivated but has received much less attention. A straightforward guarantee for this case of α≥1/e≅0.368 can be derived from the solution to the secretary problem, where an arbitrary set of values arrive in random order and the goal is to maximize the probability of selecting the largest value. We show that this bound is in fact tight. We then investigate the case where the stopping time may additionally depend on a limited number of samples from F, and show that even with o(n) samples one cannot guarantee more than α=1/e. On the other hand, n samples allow for a significant improvement, while O(n²) samples are equivalent to knowledge of the distribution: specifically, with n samples α≥1-1/e≅0.632 and α≥ln(2)≅0.693, and with O(n²) samples α≥0.745-ε for any ε>0.
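The simplest member of the sample-based family of stopping rules is easy to write down; the sketch below uses the largest sample as a threshold and stops at the first online value exceeding it. The stronger guarantees quoted above come from more refined rules; this is only the natural starting point.

def max_sample_stopping(samples, xs):
    """Simplest sample-based stopping rule: threshold at the largest
    sample and stop at the first X_t exceeding it. Assumes xs is
    non-empty; if the threshold is never exceeded, we are forced to
    take the last value."""
    threshold = max(samples)
    for i, x in enumerate(xs):
        if x > threshold:
            return i, x              # stopping time tau = i
    return len(xs) - 1, xs[-1]       # forced to take the last value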
We study packing linear programs (LPs) in an online model where the columns are presented to the algorithm in random order. This natural problem was investigated in various recent studies motivated, e.g., by online ad allocations and yield management, where rows correspond to resources and columns to requests specifying demands for resources. Our main contribution is a $1-O(\sqrt{(\log d)/B})$-competitive online algorithm. Here $d$ denotes the column sparsity, i.e., the maximum number of resources that occur in a single column, and $B$ denotes the capacity ratio, i.e., the ratio between the capacity of a resource and the maximum demand for this resource. In other words, we achieve a $(1-\epsilon)$-approximation if the capacity ratio satisfies $B=\Omega(\frac{\log d}{\epsilon^2})$, which is known to be the best possible for any (randomized) online algorithm. Our result improves exponentially on previous work with respect to the capacity ratio. In contrast to existing results on packing LP problems, our algorithm does not use dual prices to guide the allocation of resources over time. Instead, the algorithm simply solves, for each request, a scaled version of the partially known primal program and randomly rounds the obtained fractional solution to obtain an integral allocation for this request. We show that this simple algorithmic technique is not restricted to packing LPs with large capacity ratio of order $\Omega(\log d)$, but also yields close-to-optimal competitive ratios if the capacity ratio is bounded by a constant. In particular, we prove a competitive ratio of $\Omega(d^{-1/(B-1)})$ for any $B\geq 2$. In addition, we show that our approach can be combined with VCG payments to obtain an incentive-compatible $(1-\epsilon)$-competitive mechanism for packing LPs with $B=\Omega(\frac{\log m}{\epsilon^2})$, where $m$ is the number of constraints. Finally, we apply our technique to the generalized assignment problem, for which we obtain the first online algorithm with competitive ratio $O(1)$.
Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA). Competitive Analysis with a Sample and the Secretary Problem. Haim Kaplan, David Naori, and Danny Raz. pp. 2082-2095. DOI: https://doi.org/10.1137/1.9781611975994.128. Abstract: We extend the standard online worst-case model to accommodate past experience which is available to the online player in many practical scenarios. We do this by revealing a random sample of the adversarial input to the online player ahead of time. The online player competes with the expected optimal value on the part of the input that arrives online. Our model bridges between existing online stochastic models (e.g., items are drawn i.i.d. from a distribution) and the online worst-case model. We also extend in a similar manner (by revealing a sample) the online random-order model. We study the classical secretary problem in our new models. In the worst-case model we present a simple online algorithm with an optimal competitive-ratio for any sample size. In the random-order model, we also give a simple online algorithm with an almost tight competitive-ratio for small sample sizes. Interestingly, we prove that for a large enough sample, no algorithm can be simultaneously optimal in both the worst-case and random-order models. Published: 2020. eISBN: 978-1-61197-599-4.
We study the following vertex-weighted online bipartite matching problem: G(U, V, E) is a bipartite graph. The vertices in U have weights and are known ahead of time, while the vertices in V arrive online in an arbitrary order and have to be matched upon arrival. The goal is to maximize the sum of weights of the matched vertices in U. When all the weights are equal, this reduces to the classic online bipartite matching problem for which Karp, Vazirani and Vazirani gave an optimal (1−1/e)-competitive algorithm in their seminal work [10]. Our main result is an optimal (1−1/e)-competitive randomized algorithm for general vertex weights. We use random perturbations of weights by appropriately chosen multiplicative factors. Our solution constitutes the first known generalization of the algorithm in [10] in this model and provides new insights into the role of randomization in online allocation problems. It also effectively solves the problem of online budgeted allocations [14] in the case when an agent makes the same bid for any desired item, even if the bid is comparable to his budget, complementing the results of [14, 3] which apply when the bids are much smaller than the budgets.
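The random multiplicative perturbation mentioned above admits a compact sketch. In Aggarwal et al.'s algorithm, to the best of my reading, each offline vertex u draws x_u uniformly from [0,1] and competes with perturbed weight w_u·(1 − e^{x_u−1}); the Python sketch below assumes that form.

import math
import random

def perturbed_greedy(offline_weights, arrivals):
    """Sketch of the perturbed-greedy rule described above: each offline
    vertex u draws x_u ~ U[0,1] once, and every arriving vertex is
    matched to its free neighbor with the largest perturbed weight
    w_u * (1 - e^{x_u - 1}).

    offline_weights: {u: w_u}. arrivals: neighbor lists, in arrival order.
    Returns the total (unperturbed) weight of matched offline vertices."""
    x = {u: random.random() for u in offline_weights}
    free = set(offline_weights)
    total = 0.0
    for neighbors in arrivals:
        candidates = [u for u in neighbors if u in free]
        if not candidates:
            continue                          # this arrival stays unmatched
        u = max(candidates,
                key=lambda u: offline_weights[u] * (1 - math.exp(x[u] - 1)))
        free.remove(u)
        total += offline_weights[u]           # the objective counts true weights
    return total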
We study packing LPs in an online model where the columns are presented to the algorithm in random order. This natural problem was investigated in various recent studies motivated, e.g., by online ad allocations and yield management, where rows correspond to resources and columns to requests specifying demands for resources. Our main contribution is a 1 − O(√((log d)/B))-competitive online algorithm. Here d denotes the column sparsity, i.e., the maximum number of resources that occur in a single column, and B denotes the capacity ratio, i.e., the ratio between the capacity of a resource and the maximum demand for this resource. In other words, we achieve a (1 − ε)-approximation if the capacity ratio satisfies B = Ω((log d)/ε²), which is known to be best possible for any (randomized) online algorithm.
Web discussion forums are used by millions of people worldwide to share information belonging to a variety of domains such as automotive vehicles, pets, sports, etc. They typically contain posts that fall into different categories such as problem, solution, feedback, spam, etc. Automatic identification of these categories can aid information retrieval that is tailored for specific user requirements. Previously, a number of supervised methods have attempted to solve this problem; however, these depend on the availability of abundant training data. A few existing unsupervised and semi-supervised approaches are either focused on identifying a single category or do not report category-specific performance. In contrast, this work proposes unsupervised and semi-supervised methods that require no or minimal training data to achieve this objective without compromising on performance. A fine-grained analysis is also carried out to discuss their limitations. The proposed methods are based on sequence models (specifically, Hidden Markov Models) that can model language for each category using word and part-of-speech probability distributions, and manually specified features. Empirical evaluations across domains demonstrate that the proposed methods are better suited for this task than existing ones.
The need for deeply understanding when algorithms work (or not) has never been greater.
A natural optimization model that formulates many online resource allocation and revenue management problems is the online linear program (LP), in which the constraint matrix is revealed column by column along with the corresponding objective coefficient. In such a model, a decision variable has to be set each time a column is revealed without observing the future inputs, and the goal is to maximize the overall objective function. In this paper, we provide a near-optimal algorithm for this general class of online problems under the assumption of random order of arrival and some mild conditions on the size of the LP right-hand-side input. Specifically, our learning-based algorithm works by dynamically updating a threshold price vector at geometric time intervals, where the dual prices learned from the revealed columns in the previous period are used to determine the sequential decisions in the current period. Due to the feature of dynamic learning, the competitiveness of our algorithm improves over the past study of the same problem. We also present a worst-case example showing that the performance of our algorithm is near-optimal.
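A one-resource toy version conveys the dynamic-learning schedule described above. The real algorithm re-solves the partial LP at geometric checkpoints to obtain a dual price vector; the sketch below replaces the LP with the obvious single-resource surrogate (the marginal value density of the best prefix of seen columns), so it illustrates the schedule, not the paper's algorithm. All names are mine, and sizes are assumed positive.

def learned_price(seen, cap_share):
    """Single-resource stand-in for an LP dual price: the value density at
    which the best seen items exactly exhaust the prorated capacity."""
    budget = cap_share
    for value, size in sorted(seen, key=lambda it: it[0] / it[1], reverse=True):
        if size > budget:
            return value / size
        budget -= size
    return 0.0                          # everything seen fits: resource not scarce

def dynamic_pricing_knapsack(items, capacity):
    """Toy illustration of dynamic price learning for one resource.
    items: (value, size) pairs in (random) arrival order."""
    n = len(items)
    seen, used, total = [], 0.0, 0.0
    price = float("inf")                # accept nothing before the first checkpoint
    for i, (value, size) in enumerate(items):
        # Accept only if the value beats the current price of the
        # consumed capacity and the capacity constraint is respected.
        if value > price * size and used + size <= capacity:
            used += size
            total += value
        seen.append((value, size))
        if ((i + 1) & i) == 0:          # i+1 is a power of two: geometric checkpoint
            price = learned_price(seen, capacity * (i + 1) / n)
    return total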
In this paper we study three previously unstudied variants of the online Facility Location problem, considering an intrinsic scenario when the clients and facilities are not only allowed to arrive to the system, but they can also depart at any moment. We begin with the study of a natural fully-dynamic online uncapacitated model where clients can be both added and removed. When a client arrives, it has to be assigned either to an existing facility or to a new facility opened at the client's location. However, when a client who has been also one of the open facilities is to be removed, then our model has to allow to reconnect all clients that have been connected to that removed facility. In this model, we present an optimal O(log n_act / log log n_act)-competitive algorithm, where n_act is the number of active clients at the end of the input sequence. Next, we turn our attention to the capacitated Facility Location problem. We first note that if no deletions are allowed, then one can achieve an optimal competitive ratio of O(log n / log log n), where n is the length of the sequence. However, when deletions are allowed, the capacitated version of the problem is significantly more challenging than the uncapacitated one. We show that still, using a more sophisticated algorithmic approach, one can obtain an online O(log m + log c log n)-competitive algorithm for the capacitated Facility Location problem in the fully dynamic model, where m is the number of points in the input metric and c is the capacity of any open facility.
We consider the problem of online load balancing under ℓp-norms: sequential jobs need to be assigned to one of the machines and the goal is to minimize the ℓp-norm of the machine loads. This generalizes the classical problem of scheduling for makespan minimization (the case ℓ∞) and has been thoroughly studied. However, despite the recent push for beyond worst-case analyses, no such results are known for this problem. In this paper we provide algorithms with simultaneous guarantees for the worst-case model as well as for the random-order (i.e. secretary) model, where an arbitrary set of jobs comes in random order. First, we show that the greedy algorithm (with restart), known to have an optimal O(p) worst-case guarantee, also has a (typically) improved random-order guarantee. However, the behavior of this algorithm in the random-order model degrades with p. We then propose algorithm SimultaneousLB that has simultaneously optimal guarantees (within constants) in both the worst-case and random-order models. In particular, the random-order guarantee of SimultaneousLB improves as p increases. One of the main components is a new algorithm with improved regret for Online Linear Optimization (OLO) over the non-negative vectors in the ℓq ball. Interestingly, this OLO algorithm is also used to prove a purely probabilistic inequality that controls the correlations arising in the random-order model, a common source of difficulty for the analysis. Another important component used in both SimultaneousLB and our OLO algorithm is a smoothing of the ℓp-norm that may be of independent interest. This smoothness property allows us to see algorithm SimultaneousLB as essentially a greedy one in the worst-case model and as a primal-dual one in the random-order model, which is instrumental for its simultaneous guarantees.
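The greedy rule analyzed above is the classic one: place each job on the machine where it increases the ℓp norm of the load vector the least. A bare-bones Python sketch follows, without the paper's restart step.

def greedy_lp_load_balancing(jobs, m, p):
    """Greedy for l_p load balancing: each arriving job goes to the
    machine minimizing the increase of sum(load^p), which is equivalent
    to minimizing the resulting l_p norm since all other loads are fixed.
    The restart trick mentioned in the abstract is omitted.

    jobs: job sizes in arrival order. m: number of machines."""
    loads = [0.0] * m
    assignment = []
    for size in jobs:
        # Marginal increase of sum(load^p) if the job goes on machine i.
        i = min(range(m),
                key=lambda i: (loads[i] + size) ** p - loads[i] ** p)
        loads[i] += size
        assignment.append(i)
    return loads, assignment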
We study online multidimensional variants of the generalized assignment problem which are used to model prominent real-world applications, such as the assignment of virtual machines with multiple resource requirements to physical infrastructure in cloud computing. These problems can be seen as an extension of the well-known secretary problem and thus the standard online worst-case model cannot provide any performance guarantee. The prevailing model in this case is the random-order model, which provides a useful, realistic and robust alternative. Using this model, we study the $d$-dimensional generalized assignment problem, where we introduce a novel technique that achieves an $O(d)$-competitive algorithm and prove a matching lower bound of $\Omega(d)$. Furthermore, our algorithm improves upon the best-known competitive-ratio for the online (one-dimensional) generalized assignment problem and the online knapsack problem.
The secretary problem became one of the most prominent online selection problems due to its numerous applications in online mechanism design. The task is to select a maximum weight subset of elements subject to given constraints, where elements arrive one-by-one in random order, revealing a weight upon arrival. The decision whether to select an element has to be taken immediately after its arrival. The different applications that map to the secretary problem ask for different constraint families to be handled. The most prominent ones are matroid constraints, which both capture many relevant settings and admit strongly competitive secretary algorithms. However, dealing with more involved constraints proved to be much more difficult, and strong algorithms are known only for a few specific settings. In this paper, we present a general framework for dealing with the secretary problem over the intersection of several matroids. This framework allows us to combine and exploit the large set of matroid secretary algorithms known in the literature. As one consequence, we get constant-competitive secretary algorithms over the intersection of any constant number of matroids whose corresponding (single-)matroid secretary problems are currently known to have a constant-competitive algorithm. Moreover, we show that our results extend to submodular objectives. MSC codes: matroid secretary problem, matroid intersection, online algorithms.
We introduce a fully online model of maximum cardinality matching in which all vertices arrive online. On the arrival of a vertex, its incident edges to previously-arrived vertices are revealed. Each vertex has a deadline that is after all its neighbors' arrivals. If a vertex remains unmatched until its deadline, the algorithm must then irrevocably either match it to an unmatched neighbor, or leave it unmatched. The model generalizes the existing one-sided online model and is motivated by applications including ride-sharing platforms, real-estate agencies, etc. We show that the Ranking algorithm by Karp et al. (STOC 1990) is 0.5211-competitive in our fully online model for general graphs. Our analysis brings a novel charging mechanism into the randomized primal-dual technique by Devanur et al. (SODA 2013), allowing a vertex other than the two endpoints of a matched edge to share the gain. To our knowledge, this is the first analysis of Ranking that beats 0.5 on general graphs in an online matching problem, a first step towards solving the open problem by Karp et al. (STOC 1990) about the optimality of Ranking on general graphs. If the graph is bipartite, we show that the competitive ratio of Ranking is between 0.5541 and 0.5671. Finally, we prove that the fully online model is strictly harder than the previous model, as no online algorithm can be 0.6317 < 1−1/e-competitive in our model even for bipartite graphs.
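The fully online model and the Ranking rule translate directly into code. Below is a Python sketch under my own event encoding (arrivals carry edges to earlier vertices; a deadline event forces a decision): each vertex draws a uniform rank on arrival, and a vertex that reaches its deadline unmatched grabs its free neighbor of smallest rank.

import random

def fully_online_ranking(events):
    """Sketch of Ranking in the fully online model described above.

    events: a sequence of ("arrive", v, neighbors_among_earlier) and
    ("deadline", v) tuples; the model guarantees v's deadline comes
    after all of v's neighbors have arrived.
    Returns the matching as a dict {vertex: partner}."""
    rank, adj, match = {}, {}, {}
    for event in events:
        if event[0] == "arrive":
            _, v, neighbors = event
            rank[v] = random.random()              # rank drawn once, on arrival
            adj[v] = set(neighbors)
            for u in neighbors:
                adj[u].add(v)
        else:                                      # ("deadline", v): decide irrevocably
            _, v = event
            if v in match:
                continue
            free = [u for u in adj[v] if u not in match]
            if free:
                u = min(free, key=rank.__getitem__)  # smallest rank = highest priority
                match[v], match[u] = u, v
    return match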
Huang et al. (STOC 2018) introduced the fully online matching problem, a generalization of the classic online bipartite matching problem in that it allows all vertices to arrive online and considers general graphs. They showed that the Ranking algorithm by Karp et al. (STOC 1990) is strictly better than 0.5-competitive and that the problem is strictly harder than the online bipartite matching problem in that no algorithm can be (1 − 1/e)-competitive. This paper pins down two tight competitive ratios of classic algorithms for the fully online matching problem. For the fractional version of the problem, we show that a natural instantiation of the water-filling algorithm is [MATH HERE]-competitive, together with a matching hardness result. Interestingly, our hardness result applies to arbitrary algorithms in the edge-arrival models of the online matching problem, improving the state-of-the-art [MATH HERE] upper bound. For integral algorithms, we show a tight competitive ratio of ≈ 0.567 for the Ranking algorithm on bipartite graphs, matching a hardness result by Huang et al. (STOC 2018).
The online matching problem was introduced by Karp, Vazirani and Vazirani nearly three decades ago. In that seminal work, they studied this problem in bipartite graphs with vertices arriving only on one side, and presented optimal deterministic and randomized algorithms for this setting. In comparison, more general arrival models, such as edge arrivals and general vertex arrivals, have proven more challenging, and positive results are known only for various relaxations of the problem. In particular, even the basic question of whether randomization allows one to beat the trivially-optimal deterministic competitive ratio of 1/2 for either of these models was open. In this paper, we resolve this question for both these natural arrival models, and show the following. 1) For edge arrivals, randomization does not help: no randomized algorithm is better than 1/2-competitive. 2) For general vertex arrivals, randomization helps: there exists a randomized (1/2+Ω(1))-competitive online matching algorithm.
The secretary problem or the game of Googol are classic models for online selection problems that have received significant attention in the last five decades. In this paper we consider a variant of the problem and explore its connections to data-driven online selection. Specifically, we are given n cards with arbitrary nonnegative numbers written on both sides. The cards are randomly placed on n consecutive positions on a table, and for each card, the visible side is also selected at random. The player sees the visible side of all cards and wants to select the card with the maximum hidden value. To this end, the player flips the first card, sees its hidden value and decides whether to pick it or drop it and continue with the next card. We study algorithms for two natural objectives. In the first one, similar to the secretary problem, the player wants to maximize the probability of selecting the maximum hidden value. We show that this can be done with probability at least 0.45292. In the second objective, similar to the prophet inequality, the player wants to maximize the expectation of the selected hidden value. Here we show a guarantee of at least 0.63518 with respect to the expected maximum hidden value. Our algorithms result from combining three basic strategies. One is to stop whenever we see a value larger than the initial n visible numbers. The second one is to stop the first time the last flipped card's value is the largest of the currently n visible numbers on the table. The third one is similar to the latter, but additionally requires that the last flipped value is larger than the value on the other side of its card. We apply our results to the prophet secretary problem with unknown distributions, but with access to a single sample from each distribution. In particular, our guarantee improves upon 1 − 1/e for this problem, which is the currently best known guarantee and only works for the i.i.d. prophet inequality with samples.
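The three basic strategies above are concrete enough to simulate. The Python sketch below plays one round under each rule; the encoding of cards as (visible, hidden) pairs and the fallback of keeping the last card are my own simulation choices, and the numbers are assumed distinct.

def flip_game(cards, rule):
    """Play the card game above with one of the three basic stopping
    rules. cards: (visible, hidden) pairs, already in random order with
    random sides up; all values assumed distinct. Returns the hidden
    value of the card we keep."""
    visible = [v for v, _ in cards]
    for i, (v, h) in enumerate(cards):
        # After flipping card i, the table shows the hidden values of the
        # first i+1 cards and the visible sides of the remaining cards.
        table = [c[1] for c in cards[:i + 1]] + visible[i + 1:]
        if rule == 1 and h > max(visible):            # beats every initial visible value
            return h
        if rule == 2 and h == max(table):             # currently the largest on the table
            return h
        if rule == 3 and h == max(table) and h > v:   # ...and beats its own other side
            return h
    return cards[-1][1]   # no rule fired: keep the last card rather than nothing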
The random-order or secretary model is one of the most popular beyond-worst-case models for online algorithms. While it avoids the pessimism of the traditional adversarial model, in practice we cannot expect the input to be presented in perfectly random order. This has motivated research on ``best of both worlds'' (algorithms with good performance on both purely stochastic and purely adversarial inputs), or even better, on inputs that are a mix of both stochastic and adversarial parts. Unfortunately the latter seems much harder to achieve and very few results of this type are known. Towards advancing our understanding of designing such robust algorithms, we propose a random-order model with bursts of adversarial time steps. The assumption of burstiness of unexpected patterns is reasonable in many contexts, since changes (e.g. a spike in demand for a good) are often triggered by a common external event. We then consider the Knapsack Secretary problem in this model: there is a knapsack of size $k$ (e.g., available quantity of a good), and in each of the $n$ time steps an item comes with its value and size in $[0,1]$ and the algorithm needs to make an irrevocable decision whether to accept or reject the item. We design an algorithm that gives an approximation of $1 - \tilde{O}(\Gamma/k)$ when the adversarial time steps can be covered by $\Gamma \ge \sqrt{k}$ intervals of size $\tilde{O}(\frac{n}{k})$. In particular, setting $\Gamma = \sqrt{k}$ gives a $(1 - O(\frac{\ln^2 k}{\sqrt{k}}))$-approximation that is resistant to up to a $\frac{\ln^2 k}{\sqrt{k}}$-fraction of the items being adversarial, which is almost optimal even in the absence of adversarial items. Also, setting $\Gamma = \tilde{\Omega}(k)$ gives a constant approximation that is resistant to up to a constant fraction of items being adversarial.
A prophet inequality states, for some $\alpha\in[0,1]$, that the expected value achievable by a gambler who sequentially observes random variables $X_1,\dots,X_n$ and selects one of them is at least an $\alpha$ fraction of the maximum value in the sequence. We obtain three distinct improvements for a setting that was first studied by Correa et al. (EC, 2019) and is particularly relevant to modern applications in algorithmic pricing. In this setting, the random variables are i.i.d. from an unknown distribution and the gambler has access to an additional $\beta n$ samples for some $\beta\geq 0$. We first give improved lower bounds on $\alpha$ for a wide range of values of $\beta$; specifically, $\alpha\geq(1+\beta)/e$ when $\beta\leq 1/(e-1)$, which is tight, and $\alpha\geq 0.648$ when $\beta=1$, which improves on a bound of around $0.635$ due to Correa et al. (SODA, 2020). Adding to their practical appeal, specifically in the context of algorithmic pricing, we then show that the new bounds can be obtained even in a streaming model of computation and thus in situations where the use of relevant data is complicated by the sheer amount of data available. We finally establish that the upper bound of $1/e$ for the case without samples is robust to additional information about the distribution, and applies also to sequences of i.i.d. random variables whose distribution is itself drawn, according to a known distribution, from a finite set of known candidate distributions. This implies a tight prophet inequality for exchangeable sequences of random variables, answering a question of Hill and Kertz (Contemporary Mathematics, 1992), but leaves open the possibility of better guarantees when the number of candidate distributions is small, a setting we believe is of strong interest to applications.
We provide online algorithms for secretary matching in general weighted graphs, under the well-studied models of vertex and edge arrivals. In both models, edges are associated with arbitrary weights that are unknown from the outset, and are revealed online. Under vertex arrival, vertices arrive online in a uniformly random order; upon the arrival of a vertex $v$, the weights of edges from $v$ to all previously arriving vertices are revealed, and the algorithm decides which of these edges, if any, to include in the matching. Under edge arrival, edges arrive online in a uniformly random order; upon the arrival of an edge $e$, its weight is revealed, and the algorithm decides whether to include it in the matching or not. We provide a $5/12$-competitive algorithm for vertex arrival, and show it is tight. For edge arrival, we provide a $1/4$-competitive algorithm. Both results improve upon state-of-the-art bounds for the corresponding settings. Interestingly, for vertex arrival, secretary matching in general graphs outperforms secretary matching in bipartite graphs with 1-sided arrival, where $1/e$ is the best possible guarantee.
Nearly thirty years ago, Bar-Noy, Motwani and Naor [IPL'92] conjectured that an online (1 + o(1))Δ-edge-coloring algorithm exists for n-node graphs of maximum degree Δ = ω(log n). This conjecture remains open in general, though it was recently proven for bipartite graphs under one-sided vertex arrivals by Cohen et al. [FOCS'19]. In a similar vein, we study edge coloring under widely-studied relaxations of the online model. Our main result is in the random-order online model. For this model, known results fall short of the Bar-Noy et al. conjecture, either in the degree bound [Aggarwal et al. FOCS'03], or in the number of colors used [Bahmani et al. SODA'10]. We achieve the best of both worlds, thus resolving the Bar-Noy et al. conjecture in the affirmative for this model. Our second result is in the adversarial online (and dynamic) model with recourse. A recent algorithm of Duan et al. [SODA'19] yields a (1 + ε)Δ-edge-coloring with poly(log n/ε) recourse. We achieve the same with poly(1/ε) recourse, thus removing all dependence on n. Underlying our results is one common offline algorithm, which we show how to implement in these two online models. Our algorithm, based on the Rödl Nibble Method, is an adaptation of the distributed algorithm of Dubhashi et al. [TCS'98]. The Nibble Method has proven successful for distributed edge coloring. We display its usefulness in the context of online algorithms.
In the secretary problem we are faced with an online sequence of elements with values. Upon seeing an element we have to make an irrevocable take-it-or-leave-it decision. The goal is to maximize the probability of picking the element of maximum value. The most classic version of the problem is that in which the elements arrive in random order and their values are arbitrary. Here, the optimal algorithm picks the maximum value with probability at least 1/e. However, by varying the available information, new interesting problems arise. For instance, in the full information variant of the secretary problem the values are i.i.d. samples from a known distribution. Naturally, the best possible success probability increases and turns out to be approximately 0.58. Also, the case in which the arrival order is adversarial instead of random leads to interesting variants that have been considered in the literature. In this paper we study both the random order and adversarial order secretary problems with an additional twist. The values are arbitrary, but before starting the online sequence we independently sample each element with a fixed probability p. The sampled elements become our information or history set and the game is played over the remaining elements. We call these problems the random order secretary problem with p-sampling (ROSp for short) and the adversarial order secretary problem with p-sampling (AOSp for short). Our main result is to obtain best possible algorithms for both problems and all values of p. As p grows to 1 the obtained guarantees converge to the optimal guarantees in the full information case. In the adversarial order setting, the best possible algorithm turns out to be a simple fixed threshold algorithm in which the optimal threshold is a function of p only. Therefore, even knowledge of the total number of elements is unnecessary. Proving that this algorithm is optimal involves a novel technique, which boils down to analyzing a related game in a conflict graph over binary sequences. In the random order setting we prove that the best possible algorithm is characterized by a fixed sequence of time thresholds, dictating at which point in time we should start accepting a value that is both a maximum of the online sequence and has a given ranking within the sampled elements. Surprisingly, this sequence of time thresholds arises from a separable and convex optimization problem whose solution is independent of p.
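One way to express the fixed-threshold structure mentioned above is rank-based, as in the skeleton below: accept the first online value exceeding the k-th largest sampled value. Whether this matches the paper's exact rule is not claimed here; the paper derives the optimal threshold as a function of p, which is not reproduced, so k is left as a free parameter.

import heapq

def aosp_fixed_threshold(sample, online, k):
    """Skeleton of a rank-based fixed-threshold rule for the AOSp
    setting above: accept the first online value exceeding the k-th
    largest sampled value. k is a free parameter standing in for the
    paper's p-dependent optimal choice."""
    if k <= 0 or k > len(sample):
        return None                            # degenerate parameter choice
    threshold = heapq.nlargest(k, sample)[-1]  # k-th largest sampled value
    for v in online:                           # adversarial order is fine here
        if v > threshold:
            return v                           # irrevocable accept
    return None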
Online bipartite matching is one of the most fundamental problems in the online algorithms literature. Karp, Vazirani, and Vazirani (STOC 1990) introduced an elegant algorithm for the unweighted bipartite matching problem that achieves an optimal competitive ratio of 1 − 1/e. Aggarwal et al. (SODA 2011) later generalized their algorithm and analysis to the vertex-weighted case. Little is known, however, about the most general edge-weighted problem aside from the trivial 1/2-competitive greedy algorithm. In this paper, we present the first online algorithm that breaks the long-standing 1/2 barrier and achieves a competitive ratio of at least 0.5086. In light of the hardness result of Kapralov, Post, and Vondrák (SODA 2013) that restricts beating a 1/2 competitive ratio for the more general problem of monotone submodular welfare maximization, our result can be seen as strong evidence that edge-weighted bipartite matching is strictly easier than submodular welfare maximization in the online setting. The main ingredient in our online matching algorithm is a novel subroutine called online correlated selection (OCS), which takes a sequence of pairs of vertices as input and selects one vertex from each pair. Instead of using a fresh random bit to choose a vertex from each pair, the OCS negatively correlates decisions across different pairs and provides a quantitative measure on the level of correlation. We believe our OCS technique is of independent interest and will find further applications in other online optimization problems.
The study of the prophet inequality problem in the limited information regime was initiated by Azar et al. [SODA'14] in the pursuit of prior-independent posted-price mechanisms. As they show, $O(1)$-competitive policies are achievable using only a single sample from the distribution of each agent. A notable portion of their results relies on reducing the design of single-sample prophet inequalities (SSPIs) to that of order-oblivious secretary (OOS) policies. The above reduction comes at the cost of not fully utilizing the available samples. However, to date, this is essentially the only method for proving SSPIs for many combinatorial sets. Very recently, Rubinstein et al. [ITCS'20] gave a surprisingly simple algorithm which achieves the optimal competitive ratio for the single-choice SSPI problem, a result which is unobtainable going through the reduction to secretary problems. Motivated by this discrepancy, we study the competitiveness of simple SSPI policies directly, without appealing to results from the OOS literature. In this direction, we first develop a framework for analyzing policies against a greedy-like prophet solution. Using this framework, we obtain the first SSPI for general (non-bipartite) matching environments, as well as improved competitive ratios for transversal and truncated partition matroids. Second, motivated by the observation that many OOS policies for matroids decompose the problem into independent rank-$1$ instances, we provide a meta-theorem which applies to any matroid satisfying this partition property. Leveraging the recent results by Rubinstein et al., we obtain improved competitive guarantees (most by a factor of $2$) for a number of matroids captured by the reduction of Azar et al. Finally, we discuss applications of our SSPIs to the design of mechanisms for multi-dimensional limited information settings with improved revenue and welfare guarantees.
We provide nearly optimal algorithms for online facility location (OFL) with predictions. In OFL, $n$ demand points arrive in order and the algorithm must irrevocably assign each demand point to an open facility upon its arrival. The objective is to minimize the total connection costs from demand points to assigned facilities plus the facility opening cost. We further assume the algorithm is additionally given for each demand point $x_i$ a natural prediction $f_{x_i}^{\mathrm{pred}}$ which is supposed to be the facility $f_{x_i}^{\mathrm{opt}}$ that serves $x_i$ in the offline optimal solution. Our main result is an $O(\min\{\log {\frac{n\eta_\infty}{\mathrm{OPT}}}, \log{n} \})$-competitive algorithm where $\eta_\infty$ is the maximum prediction error (i.e., the distance between $f_{x_i}^{\mathrm{pred}}$ and $f_{x_i}^{\mathrm{opt}}$). Our algorithm overcomes the fundamental $\Omega(\frac{\log n}{\log \log n})$ lower bound of OFL (without predictions) when $\eta_\infty$ is small, and it still maintains an $O(\log n)$ ratio even when $\eta_\infty$ is unbounded. Furthermore, our theoretical analysis is supported by empirical evaluations for the tradeoffs between $\eta_\infty$ and the competitive ratio on various real datasets of different types.
Online Graph Algorithms with Predictions. Yossi Azar, Debmalya Panigrahi, and Noam Touitou. Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 35-66. https://doi.org/10.1137/1.9781611977073.3
Online algorithms with predictions is a popular and elegant framework for bypassing pessimistic lower bounds in competitive analysis. In this model, online algorithms are supplied with predictions about the future, and the goal is for the competitive ratio to interpolate smoothly between the best offline and online bounds as a function of the prediction error. In this paper, we study online graph problems with predictions. Our contributions are the following. The first question is how to define prediction error. For graph/metric problems, there can be two types of error: locations that are not predicted, and locations that are predicted but where the predicted and actual locations do not coincide exactly. We design a novel definition of prediction error, called metric error with outliers, that simultaneously captures both types of error and thereby generalizes previous definitions that capture only one of the two. We then give a general framework for obtaining online algorithms with predictions that combines, in a "black box" fashion, existing online and offline algorithms, under certain technical conditions. To the best of our knowledge, this is the first general-purpose tool for obtaining online algorithms with predictions. Using our framework, we obtain tight bounds on the competitive ratio of several classical graph problems as a function of metric error with outliers: Steiner tree, Steiner forest, priority Steiner tree/forest, and uncapacitated/capacitated facility location. Neither the definition of metric error with outliers nor the general framework for combining offline and online algorithms is specific to the problems considered in this paper; we hope that they will be useful for future work on other problems in this domain.
Robust Secretary and Prophet Algorithms for Packing Integer Programs. C.J. Argue, Anupam Gupta, Marco Molinaro, and Sahil Singla. Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 1273-1297. https://doi.org/10.1137/1.9781611977073.53
We study the problem of solving Packing Integer Programs (PIPs) in the online setting, where columns in $[0,1]^d$ of the constraint matrix are revealed sequentially, and the goal is to pick a subset of the columns that sums to at most $B$ in each coordinate while maximizing the objective. Excellent results are known in the secretary setting, where the columns are adversarially chosen but presented in a uniformly random order. However, these existing algorithms are susceptible to adversarial attacks: they try to "learn" characteristics of a good solution but tend to over-fit to the model, so a small number of adversarial corruptions can cause the algorithm to fail. In this paper, we give the first robust algorithms for Packing Integer Programs, specifically in the recently proposed Byzantine Secretary framework [BGSZ20]. Our techniques are based on a two-level use of online learning: first to robustly learn an approximation to the optimal value, and then to use this robust estimate to pick a good solution. These techniques are general, and we use them to design robust algorithms for PIPs in the prophet model as well, specifically in the Prophet-with-Augmentations framework [ISW20]. We also improve known results in the Byzantine Secretary framework: we make the non-constructive results algorithmic and improve the existing bounds for single-item and matroid constraints.
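To fix the model (this is emphatically not the robust algorithm of Argue et al.), here is a minimal sketch of the online PIP setting with a naive fixed-threshold rule; the density threshold `tau` is a hypothetical parameter that a real algorithm would learn online rather than fix in advance.

```python
import numpy as np

def online_pip_threshold(columns, values, B, tau):
    """Naive fixed-threshold baseline for online Packing Integer Programs.

    Columns in [0,1]^d arrive one by one with their values; B is the
    length-d budget vector. We accept a column if its value-per-unit-size
    beats the threshold tau and it still fits within B in every coordinate.
    Returns the indices picked and the total value collected.
    """
    load = np.zeros(len(B))
    picked, total = [], 0.0
    for j, (col, val) in enumerate(zip(columns, values)):
        col = np.asarray(col, dtype=float)
        fits = np.all(load + col <= B)
        dense = val >= tau * col.sum()  # value per unit of consumed budget
        if fits and dense:
            load += col
            picked.append(j)
            total += val
    return picked, total
```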
Following the research agenda initiated by Munoz & Vassilvitskii [1] and Lykouris & Vassilvitskii [2] on learning-augmented online algorithms for classical online optimization problems, in this work we consider the Online Facility Location problem under this framework. In Online Facility Location (OFL), demands arrive one by one in a metric space and must be (irrevocably) assigned to an open facility upon arrival, without any knowledge of future demands. We present an online algorithm for OFL that exploits potentially imperfect predictions of the locations of the optimal facilities. We prove that the competitive ratio decreases smoothly from sublogarithmic in the number of demands to constant as the error, i.e., the total distance of the predicted locations to the optimal facility locations, decreases towards zero. We complement our analysis with a matching lower bound establishing that the dependence of the algorithm's competitive ratio on the error is optimal, up to constant factors. Finally, we evaluate our algorithm on real-world data and compare our learning-augmented approach with the current best online algorithm for the problem.
Implicitly defined (and easily approximated) universal constants $1.1 < a_n < 1.6$, $n = 2, 3, \ldots$, are found so that if $X_1, X_2, \ldots$ are i.i.d. non-negative random variables and if $T_n$ is the set of stop rules for $X_1, \ldots, X_n$, then $E(\max\{X_1, \ldots, X_n\}) \leq a_n \sup\{EX_t : t \in T_n\}$, and the bound $a_n$ is best possible. Similar universal constants $0 < b_n < \frac{1}{4}$ are found so that if the $\{X_i\}$ are i.i.d. random variables taking values only in $[a, b]$, then $E(\max\{X_1, \ldots, X_n\}) \leq \sup\{EX_t : t \in T_n\} + b_n(b - a)$, where again the bound $b_n$ is best possible. In both situations, extremal distributions for which equality is attained (or nearly attained) are given in implicit form.
For a number of problems in the theory of online algorithms, it is known that the assumption that elements arrive in uniformly random order enables the design of algorithms with much better performance guarantees than under worst-case assumptions. The quintessential example of this phenomenon is the secretary problem, in which an algorithm attempts to stop a sequence at the moment it observes the maximum value in the sequence. As is well known, if the sequence is presented in uniformly random order, there is an algorithm that succeeds with probability $1/e$, whereas no non-trivial performance guarantee is possible if the elements arrive in worst-case order.
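The classical $1/e$ guarantee comes from the standard stopping rule: observe the first $\lfloor n/e \rfloor$ values without stopping, then stop at the first value that beats everything seen so far. A minimal sketch:

```python
import math
import random

def secretary_rule(values):
    """Classical 1/e rule for the secretary problem.

    Observe the first floor(n/e) values without stopping, record their
    maximum as a benchmark, then stop at the first later value that beats
    it (falling back to the last value if none does). Returns the index
    at which we stop.
    """
    n = len(values)
    k = math.floor(n / math.e)
    benchmark = max(values[:k], default=-math.inf)
    for i in range(k, n):
        if values[i] > benchmark:
            return i
    return n - 1  # forced to take the last element

# With values in uniformly random order, this stops at the overall
# maximum with probability about 1/e ≈ 0.368 for large n.
if __name__ == "__main__":
    n, trials, wins = 50, 10_000, 0
    for _ in range(trials):
        vals = random.sample(range(10 * n), n)  # distinct values, random order
        wins += vals[secretary_rule(vals)] == max(vals)
    print(f"success frequency: {wins / trials:.3f}")
```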
Summary: A broadly applicable algorithm for computing maximum likelihood estimates from incomplete data is presented at various levels of generality. Theory showing the monotone behaviour of the likelihood and convergence of the algorithm is derived. Many examples are sketched, including missing-value situations; applications to grouped, censored, or truncated data; finite mixture models; variance component estimation; hyperparameter estimation; iteratively reweighted least squares; and factor analysis.
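To make the iteration concrete, here is a minimal sketch of the algorithm (widely known as EM) for a two-component univariate Gaussian mixture, one of the finite-mixture examples mentioned above; the starting values are arbitrary choices for illustration.

```python
import numpy as np

def em_gaussian_mixture(x, iters=100):
    """EM for a two-component univariate Gaussian mixture.

    E-step: compute the posterior responsibility of component 1 for each
    observation. M-step: re-estimate the weight, means, and variances
    from the responsibility-weighted data. Each iteration cannot decrease
    the observed-data likelihood (the monotonicity property noted above).
    """
    x = np.asarray(x, dtype=float)
    # arbitrary starting values for illustration
    w, mu1, mu2, s1, s2 = 0.5, x.min(), x.max(), x.std(), x.std()

    def pdf(x, mu, s):
        return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

    for _ in range(iters):
        # E-step: responsibilities for component 1
        p1, p2 = w * pdf(x, mu1, s1), (1 - w) * pdf(x, mu2, s2)
        r = p1 / (p1 + p2)
        # M-step: weighted maximum likelihood updates
        w = r.mean()
        mu1, mu2 = np.average(x, weights=r), np.average(x, weights=1 - r)
        s1 = np.sqrt(np.average((x - mu1) ** 2, weights=r))
        s2 = np.sqrt(np.average((x - mu2) ** 2, weights=1 - r))
    return w, (mu1, s1), (mu2, s2)
```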