
All published works (25)

Mobile edge computing offers a myriad of opportunities to innovate and introduce novel applications, thereby enhancing user experiences considerably. A critical issue extensively investigated in this domain is the efficient deployment of Service Function Chains (SFCs) across the physical network, spanning from the edge to the cloud. This problem is known to be NP-hard. As a result of its practical importance, there is significant interest in the development of high-quality sub-optimal solutions. In this paper, we consider this problem and propose a novel near-optimal heuristic that is extremely efficient and scalable. We compare our solution to the state-of-the-art heuristic and to the theoretical optimum. In our large-scale evaluations, we use realistic topologies previously reported in the literature. We demonstrate that the execution time of our solution grows slowly as the number of Virtual Network Function (VNF) forwarding graph embedding requests grows, and that it handles one million requests in slightly more than 20 seconds on a physical topology with 100 nodes and 150 edges.
We study the online facility location problem with uniform facility costs in the random-order model. Meyerson's algorithm [FOCS'01] is arguably the most natural and simple online algorithm for the problem, with several advantages and appealing properties. Its analysis in the random-order model is one of the cornerstones of random-order analysis beyond the secretary problem. Meyerson's algorithm was shown to be (asymptotically) optimal in the standard worst-case adversarial-order model and 8-competitive in the random-order model. While this bound in the random-order model is the long-standing state-of-the-art, it is not known to be tight, and the true competitive-ratio of Meyerson's algorithm remained an open question for more than two decades. We resolve this question and prove tight bounds on the competitive-ratio of Meyerson's algorithm in the random-order model, showing that it is exactly 4-competitive. Following our tight analysis, we introduce a generic parameterized version of Meyerson's algorithm that retains all the advantages of the original version. We show that the best algorithm in this family is exactly 3-competitive. On the other hand, we show that no online algorithm for this problem can achieve a competitive-ratio better than 2. Finally, we prove that the algorithms in this family are robust to partial adversarial arrival orders.
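To make the algorithm under analysis concrete, here is a minimal Python sketch of Meyerson's rule, assuming the abstract's setting of a uniform facility cost f (the metric dist and the input encoding are assumptions of this illustration): each arriving client opens a new facility at its own location with probability min(d/f, 1), where d is its distance to the nearest open facility, and otherwise connects to that nearest facility.

```python
import random

def meyerson_online_facility_location(clients, f, dist):
    """Minimal sketch of Meyerson's online algorithm [FOCS'01] for
    facility location with uniform facility cost f.

    clients: points in arrival order
    f:       uniform cost of opening a facility
    dist:    metric distance function between two points
    """
    facilities = []
    total_cost = 0.0
    for c in clients:
        if not facilities:
            # The first client always opens a facility at its location.
            facilities.append(c)
            total_cost += f
            continue
        d = min(dist(c, x) for x in facilities)  # nearest open facility
        # Open a new facility at c with probability min(d / f, 1);
        # otherwise connect c to its nearest open facility.
        if random.random() < min(d / f, 1.0):
            facilities.append(c)
            total_cost += f
        else:
            total_cost += d
    return facilities, total_cost
```

The paper's generic parameterized family tunes this opening rule while keeping its structure; the tight analysis shows the rule above is exactly 4-competitive when clients arrive in uniformly random order.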
Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA). Online Weighted Matching with a Sample. Haim Kaplan, David Naori, and Danny Raz. pp. 1247-1272. DOI: https://doi.org/10.1137/1.9781611977073.52. Abstract: We study the greedy-based online algorithm for edge-weighted matching with (one-sided) vertex arrivals in bipartite graphs, and edge arrivals in general graphs. This algorithm was first studied more than a decade ago by Korula and Pál for the bipartite case in the random-order model. While the weighted bipartite matching problem is solved in the random-order model, this is not the case in recent and exciting online models in which the online player is provided with a sample, and the arrival order is adversarial. The greedy-based algorithm is arguably the most natural and practical algorithm to be applied in these models. Despite its simplicity and appeal, and despite being studied in multiple works, the greedy-based algorithm was not fully understood in any of the studied online models, and its actual performance remained an open question for more than a decade. We provide a thorough analysis of the greedy-based algorithm in several online models. For vertex arrivals in bipartite graphs, we characterize the exact competitive-ratio of this algorithm in the random-order model, for any arrival order of the vertices subsequent to the sampling phase (adversarial and random orders in particular). We use it to derive tight analysis in the recent adversarial-order model with a sample (AOS model) for any sample size, providing the first result in this model beyond the simple secretary problem. Then, we generalize and strengthen the black box method of converting results in the random-order model to single-sample prophet inequalities, and use it to derive the state-of-the-art single-sample prophet inequality for the problem. Finally, we use our new techniques to analyze the greedy-based algorithm for edge arrivals in general graphs and derive results in all the mentioned online models. In this case as well, we improve upon the state-of-the-art single-sample prophet inequality. Published: 2022. eISBN: 978-1-61197-707-3.
We study the online facility location problem with uniform facility costs in the random-order model. Meyerson's algorithm [FOCS'01] is arguably the most natural and simple online algorithm for the problem, with several advantages and appealing properties. Its analysis in the random-order model is one of the cornerstones of random-order analysis beyond the secretary problem. Meyerson's algorithm was shown to be (asymptotically) optimal in the standard worst-case adversarial-order model and $8$-competitive in the random-order model. While this bound in the random-order model is the long-standing state-of-the-art, it is not known to be tight, and the true competitive-ratio of Meyerson's algorithm remained an open question for more than two decades. We resolve this question and prove tight bounds on the competitive-ratio of Meyerson's algorithm in the random-order model, showing that it is exactly $4$-competitive. Following our tight analysis, we introduce a generic parameterized version of Meyerson's algorithm that retains all the advantages of the original version. We show that the best algorithm in this family is exactly $3$-competitive. On the other hand, we show that no online algorithm for this problem can achieve a competitive-ratio better than $2$. Finally, we prove that the algorithms in this family are robust to partial adversarial arrival orders.
The world is dynamic and changes over time; thus any optimization problem used to model real-life problems must address this dynamic nature, taking into account the cost of changes to a solution over time. The multistage model was introduced with this goal in mind. In this model we are given a series of instances of an optimization problem, corresponding to different times, and a solution is provided for each instance. The goal of obtaining near-optimal solutions for each instance on the one hand, while maintaining similar solutions for consecutive time units on the other, is quantified and integrated into the objective function. In this paper we consider the Generalized Multistage $d$-Knapsack problem, a generalization of the multistage variants of the Multiple Knapsack problem, as well as the $d$-Dimensional Knapsack problem. We present a PTAS for Generalized Multistage $d$-Knapsack.
We study the classical online bipartite matching problem: One side of the graph is known and vertices of the other side arrive online. It is well known that when the graph is edge-weighted, and vertices arrive in an adversarial order, no online algorithm has a nontrivial competitive-ratio. To bypass this hurdle we modify the rules such that the adversary still picks the graph but has to reveal a random part (say half) of it to the player. The remaining part is given to the player in an adversarial order. This models practical scenarios in which the online algorithm has some history to learn from. This way of modeling a history was formalized recently by the authors (SODA 20) and was called the AOS model (for Adversarial Online with a Sample). It allows developing online algorithms for the secretary problem that compete even when the secretaries arrive in an adversarial order. Here we use the same model to attack the much more challenging matching problem. We analyze a natural algorithmic framework that decides how to match an arriving vertex $v$ by applying an offline matching algorithm to $v$ and the sample. We get roughly $1/4$ of the maximum weight by applying the offline greedy matching algorithm to the sample and $v$. Our analysis ties the performance of this algorithm to the performance of the offline greedy matching on the online part and we also prove that it is tight. Surprisingly, when replacing greedy with an optimal algorithm for maximum matching, no constant competitive-ratio can be guaranteed when the size of the sample is comparable to the size of the online part. However, when the sample is quadratic in the size of the online part, we do get a competitive-ratio of $1/e$.
We study the greedy-based online algorithm for edge-weighted matching with (one-sided) vertex arrivals in bipartite graphs, and edge arrivals in general graphs. This algorithm was first studied more than a decade ago by Korula and Pál for the bipartite case in the random-order model. While the weighted bipartite matching problem is solved in the random-order model, this is not the case in recent and exciting online models in which the online player is provided with a sample, and the arrival order is adversarial. The greedy-based algorithm is arguably the most natural and practical algorithm to be applied in these models. Despite its simplicity and appeal, and despite being studied in multiple works, the greedy-based algorithm was not fully understood in any of the studied online models, and its actual performance remained an open question for more than a decade. We provide a thorough analysis of the greedy-based algorithm in several online models. For vertex arrivals in bipartite graphs, we characterize the exact competitive-ratio of this algorithm in the random-order model, for any arrival order of the vertices subsequent to the sampling phase (adversarial and random orders in particular). We use it to derive tight analysis in the recent adversarial-order model with a sample (AOS model) for any sample size, providing the first result in this model beyond the simple secretary problem. Then, we generalize and strengthen the black box method of converting results in the random-order model to single-sample prophet inequalities, and use it to derive the state-of-the-art single-sample prophet inequality for the problem. Finally, we use our new techniques to analyze the greedy-based algorithm for edge arrivals in general graphs and derive results in all the mentioned online models. In this case as well, we improve upon the state-of-the-art single-sample prophet inequality.
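For concreteness, the following is a minimal sketch of the greedy-based rule in the one-sided vertex-arrival setting (the graph encoding below is an assumption of this illustration): each arriving online vertex is matched to the currently unmatched offline neighbor of maximum positive edge weight, if one exists.

```python
def greedy_online_matching(offline, arrivals, weight):
    """Sketch of the greedy-based online algorithm for edge-weighted
    bipartite matching with one-sided vertex arrivals.

    offline:  collection of offline vertices, known in advance
    arrivals: online vertices in arrival order
    weight:   weight(u, v) -> edge weight, or 0 if (u, v) is not an edge
    """
    matched = {}   # offline vertex -> online vertex it is matched to
    total = 0.0
    for v in arrivals:
        free = [u for u in offline if u not in matched]
        if not free:
            break  # every offline vertex is matched; nothing more to do
        u = max(free, key=lambda u: weight(u, v))  # heaviest available edge
        if weight(u, v) > 0:
            matched[u] = v
            total += weight(u, v)
    return matched, total
```

The same greedy rule is what the sample-augmented models above apply after the sampling phase; only the analysis, not the algorithm, changes across the models.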
Distributed cloud networking builds on network functions virtualization (NFV) and software defined networking (SDN) to enable the deployment of network services in the form of elastic virtual network functions (VNFs) instantiated over general purpose servers at distributed cloud locations. We address the design of fast approximation algorithms for the NFV service distribution problem (NSDP), whose goal is to determine the placement of VNFs, the routing of service flows, and the associated allocation of cloud and network resources that satisfy client demands with minimum cost. We show that in the case of load-proportional costs, the resulting fractional NSDP can be formulated as a multi-commodity-chain flow problem on a cloud-augmented graph, and design a queue-length based algorithm, named QNSD, that provides an O(ε) approximation in time O(1/ε). We then address the case in which resource costs are a function of the integer number of allocated resources and design a variation of QNSD that effectively pushes for flow consolidation into a limited number of active resources to minimize overall cloud network cost.
The world is dynamic and changes over time; thus any optimization problem used to model real-life problems must address this dynamic nature, taking into account the cost of changes to a solution over time. The multistage model was introduced with this goal in mind. In this model we are given a series of instances of an optimization problem, corresponding to different times, and a solution is provided for each instance. The goal of obtaining near-optimal solutions for each instance on the one hand, while maintaining similar solutions for consecutive time units on the other, is quantified and integrated into the objective function. In this paper we consider the Generalized Multistage $d$-Knapsack problem, a generalization of the multistage variants of the Multiple Knapsack problem, as well as the $d$-Dimensional Knapsack problem. We present a PTAS for Generalized Multistage $d$-Knapsack.
We study the greedy-based online algorithm for edge-weighted matching with (one-sided) vertex arrivals in bipartite graphs, and edge arrivals in general graphs. This algorithm was first studied more than a decade ago by Korula and Pál for the bipartite case in the random-order model. While the weighted bipartite matching problem is solved in the random-order model, this is not the case in recent and exciting online models in which the online player is provided with a sample, and the arrival order is adversarial. The greedy-based algorithm is arguably the most natural and practical algorithm to be applied in these models. Despite its simplicity and appeal, and despite being studied in multiple works, the greedy-based algorithm was not fully understood in any of the studied online models, and its actual performance remained an open question for more than a decade. We provide a thorough analysis of the greedy-based algorithm in several online models. For vertex arrivals in bipartite graphs, we characterize the exact competitive-ratio of this algorithm in the random-order model, for any arrival order of the vertices subsequent to the sampling phase (adversarial and random orders in particular). We use it to derive tight analysis in the recent adversarial-order model with a sample (AOS model) for any sample size, providing the first result in this model beyond the simple secretary problem. Then, we generalize and strengthen the black box method of converting results in the random-order model to single-sample prophet inequalities, and use it to derive the state-of-the-art single-sample prophet inequality for the problem. Finally, we use our new techniques to analyze the greedy-based algorithm for edge arrivals in general graphs and derive results in all the mentioned online models. In this case as well, we improve upon the state-of-the-art single-sample prophet inequality.
Network-wide traffic analytics are often needed for various network monitoring tasks. These measurements are often performed by collecting samples at network switches, which are then sent to the controller for aggregation. However, performing such analytics without "overcounting" flows or packets that traverse multiple measurement switches is challenging. Therefore, existing solutions often simplify the problem by making assumptions on the routing or measurement switch placement. We introduce AROMA, a measurement infrastructure that generates a uniform sample of packets and flows regardless of the topology, workload and routing. Therefore, AROMA can be deployed in many settings, and can also work in the data plane using programmable PISA switches. The AROMA infrastructure includes controller algorithms that approximate a variety of essential measurement tasks while providing formal accuracy guarantees. Using extensive simulations on real-world network traces, we show that our algorithms are competitively accurate compared to the best existing solutions despite the fact that they make no assumptions on the underlying network or the placement of measurement switches.
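As background on why a coordinated sample avoids overcounting, here is a hedged sketch of hash-based consistent sampling, a standard building block in this setting (an illustration of the general principle, not necessarily AROMA's exact mechanism): every switch hashes the same packet or flow identifier and samples it exactly when the hash falls below the sampling rate, so a packet crossing several measurement switches is sampled either everywhere or nowhere, and is never counted twice.

```python
import hashlib

def consistent_sample(identifier: bytes, rate: float) -> bool:
    """Hash-based consistent sampling: every switch applying this function
    to the same identifier reaches the same decision, so packets that
    traverse multiple measurement switches are never double-counted.

    identifier: packet or flow key (e.g., a serialized 5-tuple)
    rate:       target sampling probability in [0, 1]
    """
    h = hashlib.sha256(identifier).digest()
    # Map the first 8 bytes of the hash to a uniform value in [0, 1).
    u = int.from_bytes(h[:8], "big") / 2**64
    return u < rate

# Two switches make the identical decision for the same flow key.
key = b"10.0.0.1:443->10.0.0.2:51513/TCP"
assert consistent_sample(key, 0.01) == consistent_sample(key, 0.01)
```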
We study the problem of maximizing a monotone submodular function subject to a Multiple Knapsack constraint (SMKP). The input is a set $I$ of items, each associated with a non-negative weight, and a set of bins, each having a capacity. Also, we are given a submodular, monotone and non-negative function $f$ over subsets of the items. The objective is to find a subset of items $A \subseteq I$ and a packing of the items in the bins, such that $f(A)$ is maximized. SMKP is a natural extension of both Multiple Knapsack and the problem of monotone submodular maximization subject to a knapsack constraint. Our main result is a nearly optimal polynomial time $(1-e^{-1}-\varepsilon)$-approximation algorithm for the problem, for any $\varepsilon>0$. Our algorithm relies on a refined analysis of techniques for constrained submodular optimization combined with sophisticated application of tools used in the development of approximation schemes for packing problems.
Network-wide traffic analytics are often needed for various network monitoring tasks. These measurements are often performed by collecting samples at network switches, which are then sent to the controller for aggregation. However, performing such analytics without "overcounting" flows or packets that traverse multiple measurement switches is challenging. Therefore, existing solutions often simplify the problem by making assumptions on the routing or measurement switch placement. We introduce AROMA, a measurement infrastructure that generates a uniform sample of packets and flows regardless of the topology, workload and routing. Therefore, AROMA can be deployed in many settings, and can also work in the data plane using programmable PISA switches. The AROMA infrastructure includes controller algorithms that approximate a variety of essential measurement tasks while providing formal accuracy guarantees. Using extensive simulations on real-world network traces, we show that our algorithms are competitively accurate compared to the best existing solutions despite the fact that they make no assumptions on the underlying network or the placement of measurement switches.
We study the problem of maximizing a monotone submodular function subject to a Multiple Knapsack constraint. The input is a set $I$ of items, each having a non-negative weight, and a set of bins of arbitrary capacities. Also, we are given a submodular, monotone and non-negative function $f$ over subsets of the items. The objective is to find a packing of a subset of items $A \subseteq I$ in the bins such that $f(A)$ is maximized. Our main result is an almost optimal polynomial time $(1-e^{-1}-\varepsilon)$-approximation algorithm for the problem, for any $\varepsilon>0$. The algorithm relies on a structuring technique which converts a general multiple knapsack constraint to a constraint in which the bins are partitioned into groups of exponentially increasing cardinalities, each consisting of bins of uniform capacity. We derive the result by combining structuring with a refined analysis of techniques for submodular optimization subject to knapsack constraints.
Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA). Competitive Analysis with a Sample and the Secretary Problem. Haim Kaplan, David Naori, and Danny Raz. pp. 2082-2095. DOI: https://doi.org/10.1137/1.9781611975994.128. Abstract: We extend the standard online worst-case model to accommodate past experience which is available to the online player in many practical scenarios. We do this by revealing a random sample of the adversarial input to the online player ahead of time. The online player competes with the expected optimal value on the part of the input that arrives online. Our model bridges between existing online stochastic models (e.g., items are drawn i.i.d. from a distribution) and the online worst-case model. We also extend in a similar manner (by revealing a sample) the online random-order model. We study the classical secretary problem in our new models. In the worst-case model we present a simple online algorithm with optimal competitive-ratio for any sample size. In the random-order model, we also give a simple online algorithm with an almost tight competitive-ratio for small sample sizes. Interestingly, we prove that for a large enough sample, no algorithm can be simultaneously optimal both in the worst-case and random-order models. Published: 2020. eISBN: 978-1-61197-599-4.
We extend the standard online worst-case model to accommodate past experience which is available to the online player in many practical scenarios. We do this by revealing a random sample of the adversarial input to the online player ahead of time. The online player competes with the expected optimal value on the part of the input that arrives online. Our model bridges between existing online stochastic models (e.g., items are drawn i.i.d. from a distribution) and the online worst-case model. We also extend in a similar manner (by revealing a sample) the online random-order model. We study the classical secretary problem in our new models. In the worst-case model we present a simple online algorithm with optimal competitive-ratio for any sample size. In the random-order model, we also give a simple online algorithm with an almost tight competitive-ratio for small sample sizes. Interestingly, we prove that for a large enough sample, no algorithm can be simultaneously optimal both in the worst-case and random-order models.
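To make the sample-augmented models concrete, here is a minimal sketch of the natural maximum-threshold rule for the secretary problem with a sample (an illustration of the model; the paper's optimal algorithms and their thresholds may differ): remember the best sampled value, then irrevocably accept the first online candidate that beats it.

```python
def secretary_with_sample(sample, online):
    """Sketch of a sample-threshold rule for the secretary problem:
    accept the first online value exceeding the best sampled value.

    sample: values revealed to the player ahead of time
    online: values arriving one by one (adversarial or random order)
    Returns the accepted value, or None if nothing beats the threshold.
    """
    threshold = max(sample, default=float("-inf"))
    for x in online:
        if x > threshold:
            return x   # irrevocable acceptance
    return None
```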
We study online multidimensional variants of the generalized assignment problem which are used to model prominent real-world applications, such as the assignment of virtual machines with multiple resource requirements to physical infrastructure in cloud computing. These problems can be seen as an extension of the well-known secretary problem, and thus the standard online worst-case model cannot provide any performance guarantee. The prevailing model in this case is the random-order model, which provides a useful, realistic and robust alternative. Using this model, we study the $d$-dimensional generalized assignment problem, where we introduce a novel technique that achieves an $O(d)$-competitive algorithm and prove a matching lower bound of $\Omega(d)$. Furthermore, our algorithm improves upon the best-known competitive-ratio for the online (one-dimensional) generalized assignment problem and the online knapsack problem.
We extend the standard online worst-case model to accommodate past experience which is available to the online player in many practical scenarios. We do this by revealing a random sample of the adversarial input to the online player ahead of time. The online player competes with the expected optimal value on the part of the input that arrives online. Our model bridges between existing online stochastic models (e.g., items are drawn i.i.d. from a distribution) and the online worst-case model. We also extend in a similar manner (by revealing a sample) the online random-order model. We study the classical secretary problem in our new models. In the worst-case model we present a simple online algorithm with optimal competitive-ratio for any sample size. In the random-order model, we also give a simple online algorithm with an almost tight competitive-ratio for small sample sizes. Interestingly, we prove that for a large enough sample, no algorithm can be simultaneously optimal both in the worst-case and random-order models.
Distributed cloud networking builds on network functions virtualization (NFV) and software defined networking (SDN) to enable the deployment of network services in the form of elastic virtual network functions (VNFs) instantiated over general purpose servers at distributed cloud locations. We address the design of fast approximation algorithms for the NFV service distribution problem (NSDP), whose goal is to determine the placement of VNFs, the routing of service flows, and the associated allocation of cloud and network resources that satisfy client demands with minimum cost. We show that in the case of load-proportional costs, the resulting fractional NSDP can be formulated as a multi-commodity-chain flow problem on a cloud-augmented graph, and design a queue-length based algorithm, named QNSD, that provides an O(ε) approximation in time O(1/ε). We then address the case in which resource costs are a function of the integer number of allocated resources and design a variation of QNSD that effectively pushes for flow consolidation into a limited number of active resources to minimize overall cloud network cost.
5G networks are expected to be more dynamic and chaotic in their structure than current networks. With the advent of Network Function Virtualization (NFV), Network Functions (NF) will no longer be tightly coupled with the hardware they are running on, which poses new challenges in network management. Noisy neighbor is a term commonly used to describe situations in NFV infrastructure where an application experiences degradation in performance due to the fact that some of the resources it needs are occupied by other applications in the same cloud node. These situations cannot be easily identified using straightforward approaches, which calls for the use of sophisticated methods for NFV infrastructure management. In this paper we demonstrate how Machine Learning (ML) techniques can be used to identify such events. Through experiments using data collected on a real NFV infrastructure, we show that standard models for automated classification can detect the noisy neighbor phenomenon with an accuracy of more than 90% in a simple scenario.
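As an illustration of the kind of "standard model for automated classification" the paper evaluates, the sketch below trains an off-the-shelf classifier on labeled resource-metric windows; the features, labels and data here are hypothetical placeholders, not the paper's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix: one row per time window of node metrics,
# e.g. [cpu_util, cache_misses, mem_bandwidth, net_throughput].
rng = np.random.default_rng(0)
X = rng.random((1000, 4))
# Stand-in "noisy neighbor" label for the sake of a runnable example.
y = (X[:, 1] + X[:, 2] > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```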
Resource allocation for cloud services is a complex task due to the diversity of the services and the dynamic workloads. One way to address this is by overprovisioning, which results in high cost due to the unutilized resources. A much more economical approach, relying on the stochastic nature of the demand, is to allocate just the right amount of resources and use additional, more expensive mechanisms in case of overflow situations where demand exceeds the capacity. In this paper we study this approach and show, both by comprehensive analysis for independent normally distributed demands and by simulation on synthetic data, that it is significantly better than currently deployed methods.
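A hedged worked example of the idea: when demands are independent and normally distributed, the aggregate demand is normal with mean Σμ_i and standard deviation sqrt(Σσ_i²), so provisioning capacity μ + zσ caps the overflow probability at the normal tail beyond z, and statistical multiplexing makes this far cheaper than provisioning each service for its own tail.

```python
from math import sqrt
from statistics import NormalDist

def capacity_for_overflow_prob(means, stds, overflow_prob):
    """Capacity such that independent Normal(mean_i, std_i) demands jointly
    exceed it with probability at most overflow_prob (illustrative sketch)."""
    mu = sum(means)
    sigma = sqrt(sum(s * s for s in stds))      # independence assumption
    z = NormalDist().inv_cdf(1.0 - overflow_prob)
    return mu + z * sigma

# Ten services, each with mean 10 and std 3, 1% overflow target:
print(capacity_for_overflow_prob([10] * 10, [3] * 10, 0.01))
# ~ 100 + 2.33 * 9.49 = ~122, versus ~170 when each service is
# provisioned for its own 1% tail in isolation (10 * (10 + 2.33 * 3)).
```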
Most discovery systems for silent failures work in two phases: a continuous monitoring phase that detects presence of failures through probe packets and a localization phase that pinpoints the faulty element(s). This separation is important because localization requires significantly more resources than detection and should be initiated only when a fault is present. We focus on improving the efficiency of the detection phase, where the goal is to balance the overhead with the cost associated with longer failure detection times. We formulate a general model which unifies the treatment of probe scheduling mechanisms, stochastic or deterministic, and different cost objectives - minimizing average detection time (SUM) or worst-case detection time (MAX). We then focus on two classes of schedules. Memoryless schedules are a subclass of stochastic schedules which is simple and suitable for distributed deployment. We show that the optimal memoryless schedulers can be efficiently computed by convex programs (for SUM objectives) or linear programs (for MAX objectives), and, surprisingly perhaps, are guaranteed to have expected detection times that are not too far off the (NP-hard) stochastic optima. Deterministic schedules allow us to bound the maximum (rather than expected) cost of undetected faults, but like stochastic schedules, are NP-hard to optimize. We develop novel efficient deterministic schedulers with provable approximation ratios. An extensive simulation study on real networks demonstrates significant performance gains of our memoryless and deterministic schedulers over previous approaches. Our unified treatment also facilitates a clear comparison between different objectives and scheduling mechanisms.
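To illustrate why good memoryless schedules reduce to a convex program, consider a simplified SUM setting (an illustrative assumption, not the paper's full model): exactly one element is probed per time step, element i is chosen with probability p_i, and an undetected fault at element i costs c_i per time unit, so its expected detection time is 1/p_i. The optimal rates then have a closed form:

```latex
\min_{p \ge 0} \; \sum_i \frac{c_i}{p_i}
\quad \text{s.t.} \quad \sum_i p_i = 1
\;\;\Longrightarrow\;\;
\frac{c_i}{p_i^2} = \lambda \;\text{ for all } i
\;\;\Longrightarrow\;\;
p_i^* = \frac{\sqrt{c_i}}{\sum_j \sqrt{c_j}} .
```

The paper's convex programs handle the general probing model; this special case simply exhibits the square-root weighting that such solutions display.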
Most discovery systems for silent failures work in two phases: a continuous monitoring phase that detects presence of failures through probe packets and a localization phase that pinpoints the faulty element(s). This separation is important because localization requires significantly more resources than detection and should be initiated only when a fault is present. We focus on improving the efficiency of the detection phase, where the goal is to balance the overhead with the cost associated with longer failure detection times. We formulate a general model which unifies the treatment of probe scheduling mechanisms, stochastic or deterministic, and different cost objectives - minimizing average detection time (SUM) or worst-case detection time (MAX). We then focus on two classes of schedules. Memoryless schedules are a subclass of stochastic schedules which is simple and suitable for distributed deployment. We show that the optimal memoryless schedulers can be efficiently computed by convex programs (for SUM objectives) or linear programs (for MAX objectives), and, surprisingly perhaps, are guaranteed to have expected detection times that are not too far off the (NP-hard) stochastic optima. Deterministic schedules allow us to bound the maximum (rather than expected) cost of undetected faults, but like stochastic schedules, are NP-hard to optimize. We develop novel efficient deterministic schedulers with provable approximation ratios. An extensive simulation study on real networks demonstrates significant performance gains of our memoryless and deterministic schedulers over previous approaches. Our unified treatment also facilitates a clear comparison between different objectives and scheduling mechanisms.

Commonly Cited References

Distributed cloud networking builds on network functions virtualization (NFV) and software defined networking (SDN) to enable the deployment of network services in the form of elastic virtual network functions (VNFs) instantiated over general purpose servers at distributed cloud locations. We address the design of fast approximation algorithms for the NFV service distribution problem (NSDP), whose goal is to determine the placement of VNFs, the routing of service flows, and the associated allocation of cloud and network resources that satisfy client demands with minimum cost. We show that in the case of load-proportional costs, the resulting fractional NSDP can be formulated as a multi-commodity-chain flow problem on a cloud-augmented graph, and design a queue-length based algorithm, named QNSD, that provides an O(ε) approximation in time O(1/ε). We then address the case in which resource costs are a function of the integer number of allocated resources and design a variation of QNSD that effectively pushes for flow consolidation into a limited number of active resources to minimize overall cloud network cost.
Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA). Competitive Analysis with a Sample and the Secretary Problem. Haim Kaplan, David Naori, and Danny Raz. pp. 2082-2095. DOI: https://doi.org/10.1137/1.9781611975994.128. Abstract: We extend the standard online worst-case model to accommodate past experience which is available to the online player in many practical scenarios. We do this by revealing a random sample of the adversarial input to the online player ahead of time. The online player competes with the expected optimal value on the part of the input that arrives online. Our model bridges between existing online stochastic models (e.g., items are drawn i.i.d. from a distribution) and the online worst-case model. We also extend in a similar manner (by revealing a sample) the online random-order model. We study the classical secretary problem in our new models. In the worst-case model we present a simple online algorithm with optimal competitive-ratio for any sample size. In the random-order model, we also give a simple online algorithm with an almost tight competitive-ratio for small sample sizes. Interestingly, we prove that for a large enough sample, no algorithm can be simultaneously optimal both in the worst-case and random-order models. Published: 2020. eISBN: 978-1-61197-599-4.
A number of recent results on optimization problems involving submodular functions have made use of the multilinear relaxation of the problem. These results hold typically in the value oracle model, … A number of recent results on optimization problems involving submodular functions have made use of the multilinear relaxation of the problem. These results hold typically in the value oracle model, where the objective function is accessible via a black box returning $f(S)$ for a given $S$. We present a general approach to deriving inapproximability results in the value oracle model, based on the notion of symmetry gap. Our main result is that for any fixed instance that exhibits a certain symmetry gap in its multilinear relaxation, there is a naturally related class of instances for which a better approximation factor than the symmetry gap would require exponentially many oracle queries. This unifies several known hardness results for submodular maximization, e.g., the optimality of $(1-1/e)$-approximation for monotone submodular maximization under a cardinality constraint and the impossibility of $(\frac12+\epsilon)$-approximation for unconstrained (nonmonotone) submodular maximization. As a new application, we consider the problem of maximizing a nonmonotone submodular function over the bases of a matroid. A $(\frac16-o(1))$-approximation has been developed for this problem, assuming that the matroid contains two disjoint bases. We show that the best approximation one can achieve is indeed related to packings of bases in the matroid. Specifically, for any $k \geq 2$, there is a class of matroids of fractional base packing number $\nu = \frac{k}{k-1}$ such that any algorithm achieving a better than $(1-\frac{1}{\nu}) = \frac{1}{k}$-approximation for this class would require exponentially many value queries. In particular, there is no constant factor approximation for maximizing a nonmonotone submodular function over the bases of a general matroid. On the positive side, we present a $\frac12 (1-\frac{1}{\nu}-o(1))$-approximation algorithm assuming fractional base packing number at least $\nu$, where $\nu \in (1,2]$. We also present an improved $0.309$-approximation for maximization of a nonmonotone submodular function subject to a matroid independence constraint, improving the previously known factor of $\frac14-\epsilon$. For this problem, we obtain a hardness of $(\frac12 + \epsilon)$-approximation for any fixed $\epsilon>0$.
Motivated by several applications, we consider the problem of randomly rounding a fractional solution in a matroid (base) polytope to an integral one. We consider the pipage rounding technique and also present a new technique, randomized swap rounding. Our main technical results are concentration bounds for functions of random variables arising from these rounding techniques. We prove Chernoff-type concentration bounds for linear functions of random variables arising from both techniques, and also a lower-tail exponential bound for monotone submodular functions of variables arising from randomized swap rounding. The following are examples of our applications: (1) We give a $(1-1/e-\epsilon)$-approximation algorithm for the problem of maximizing a monotone submodular function subject to 1 matroid and $k$ linear constraints, for any constant $k$ and $\epsilon>0$. (2) We present a result on minimax packing problems that involve a matroid base constraint. We give an $O(\log m/\log\log m)$-approximation for the general problem $\min\{\lambda : x \in \{0,1\}^N, x \in B(M), Ax \le \lambda b\}$, where $m$ is the number of packing constraints. (3) We generalize the continuous greedy algorithm to problems involving multiple submodular functions, and use it to find a $(1-1/e-\epsilon)$-approximate pareto set for the problem of maximizing a constant number of monotone submodular functions subject to a matroid constraint. An example is the Submodular Welfare Problem, where we are looking for an approximate pareto set with respect to individual players' utilities.
We study packing linear programs (LPs) in an online model where the columns are presented to the algorithm in random order. This natural problem was investigated in various recent studies motivated, e.g., by online ad allocations and yield management, where rows correspond to resources and columns to requests specifying demands for resources. Our main contribution is a $1-O(\sqrt{(\log d)/B})$-competitive online algorithm. Here $d$ denotes the column sparsity, i.e., the maximum number of resources that occur in a single column, and $B$ denotes the capacity ratio, i.e., the ratio between the capacity of a resource and the maximum demand for this resource. In other words, we achieve a $(1-\epsilon)$-approximation if the capacity ratio satisfies $B=\Omega(\frac{\log d}{\epsilon^2})$, which is known to be the best possible for any (randomized) online algorithm. Our result improves exponentially on previous work with respect to the capacity ratio. In contrast to existing results on packing LP problems, our algorithm does not use dual prices to guide the allocation of resources over time. Instead, the algorithm simply solves, for each request, a scaled version of the partially known primal program and randomly rounds the obtained fractional solution to obtain an integral allocation for this request. We show that this simple algorithmic technique is not restricted to packing LPs with large capacity ratio of order $\Omega(\log d)$, but also yields close-to-optimal competitive ratios if the capacity ratio is bounded by a constant. In particular, we prove a bound of $\Omega(d^{-1/(B-1)})$ on the competitive ratio for any $B\geq 2$. In addition, we show that our approach can be combined with VCG payments to obtain an incentive-compatible $(1-\epsilon)$-competitive mechanism for packing LPs with $B=\Omega(\frac{\log m}{\epsilon^2})$, where $m$ is the number of constraints. Finally, we apply our technique to the generalized assignment problem, for which we obtain the first online algorithm with competitive ratio $O(1)$.
A central object in optimal stopping theory is the single-choice prophet inequality for independent, identically distributed random variables: given a sequence of random variables $X_1, \ldots, X_n$ drawn independently from a distribution $F$, the goal is to choose a stopping time $\tau$ so as to maximize $\alpha$ such that for all distributions $F$ we have $E[X_\tau] \ge \alpha \cdot E[\max_t X_t]$. What makes this problem challenging is that the decision whether $\tau=t$ may only depend on the values of the random variables $X_1, \ldots, X_t$ and on the distribution $F$. For a long time the best known bound for the problem had been $\alpha \ge 1-1/e \approx 0.632$, but quite recently a tight bound of $\alpha \approx 0.745$ was obtained. The case where $F$ is unknown, such that the decision whether $\tau=t$ may depend only on the values of the random variables $X_1, \ldots, X_t$, is equally well motivated but has received much less attention. A straightforward guarantee for this case of $\alpha \ge 1/e \approx 0.368$ can be derived from the solution to the secretary problem, where an arbitrary set of values arrive in random order and the goal is to maximize the probability of selecting the largest value. We show that this bound is in fact tight. We then investigate the case where the stopping time may additionally depend on a limited number of samples from $F$, and show that even with $o(n)$ samples $\alpha \le 1/e$. On the other hand, $n$ samples allow for a significant improvement, while $O(n^2)$ samples are equivalent to knowledge of the distribution: specifically, with $n$ samples $\alpha \ge 1-1/e \approx 0.632$ and $\alpha \le \ln(2) \approx 0.693$, and with $O(n^2)$ samples $\alpha \ge 0.745-\epsilon$ for any $\epsilon>0$.
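A hedged way to make the quantity $\alpha$ tangible: the simulation below estimates $E[X_\tau]/E[\max_t X_t]$ for the simple rule "set the threshold to the maximum of $n$ samples, accept the first online value above it" under one example distribution. It illustrates the performance measure being studied, not the paper's optimal stopping rules.

```python
import random

def simulate_alpha(n, trials=20_000, draw=lambda: random.expovariate(1.0)):
    """Monte Carlo estimate of E[X_tau] / E[max_t X_t] for the rule:
    threshold = max of n fresh samples from F; accept the first online
    value exceeding it (value 0 if nothing is accepted)."""
    accepted, prophet = 0.0, 0.0
    for _ in range(trials):
        threshold = max(draw() for _ in range(n))   # n samples from F
        online = [draw() for _ in range(n)]         # the online sequence
        accepted += next((x for x in online if x > threshold), 0.0)
        prophet += max(online)
    return accepted / prophet

print(simulate_alpha(50))  # empirical alpha for this simple rule
```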
Polynomial-time approximation scheme for data broadcast. Claire Kenyon, Nicolas Schabanel, and Neal Young. STOC '00: Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing, May 2000, pp. 659-666. DOI: https://doi.org/10.1145/335305.335398.
This paper considers time-average stochastic optimization, where a time-average decision vector, an average of decision vectors chosen in every time step from a time-varying (possibly non-convex) set, minimizes a convex objective function and satisfies convex constraints. This formulation has applications in networking and operations research. In general, time-average stochastic optimization can be solved by a Lyapunov optimization technique. This paper shows that the technique exhibits a transient phase and a steady-state phase. When the problem has a unique vector of Lagrange multipliers, the convergence time can be improved. By starting the time average in the steady state, the convergence times become O(1/ε) under a locally-polyhedral assumption and O(1/ε^{1.5}) under a locally-non-polyhedral assumption, where ε denotes the proximity to the optimal objective cost.
We study the following vertex-weighted online bipartite matching problem: G(U, V, E) is a bipartite graph. The vertices in U have weights and are known ahead of time, while the vertices in V arrive online in an arbitrary order and have to be matched upon arrival. The goal is to maximize the sum of weights of the matched vertices in U. When all the weights are equal, this reduces to the classic online bipartite matching problem for which Karp, Vazirani and Vazirani gave an optimal (1−1/e)-competitive algorithm in their seminal work [10]. Our main result is an optimal (1−1/e)-competitive randomized algorithm for general vertex weights. We use random perturbations of weights by appropriately chosen multiplicative factors. Our solution constitutes the first known generalization of the algorithm in [10] in this model and provides new insights into the role of randomization in online allocation problems. It also effectively solves the problem of online budgeted allocations [14] in the case when an agent makes the same bid for any desired item, even if the bid is comparable to his budget, complementing the results of [14, 3] which apply when the bids are much smaller than the budgets.
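A minimal sketch of the perturbation idea described above: each offline weight is perturbed once up front by a multiplicative factor, and greedy then runs on the perturbed weights. The specific factor 1 − e^(x−1), with x uniform in [0, 1], is the choice commonly used in this line of work; treat it here as an assumption of the illustration rather than the paper's stated formula.

```python
import math
import random

def perturbed_greedy_matching(offline_weights, arrivals, neighbors):
    """Sketch of vertex-weighted online bipartite matching via random
    multiplicative perturbations of the offline weights.

    offline_weights: dict u -> weight of offline vertex u
    arrivals:        online vertices in arrival order
    neighbors:       dict v -> iterable of offline neighbors of v
    """
    # One random perturbation factor per offline vertex, drawn up front.
    perturb = {u: 1 - math.exp(random.random() - 1) for u in offline_weights}
    matched = {}   # offline vertex -> online vertex
    for v in arrivals:
        free = [u for u in neighbors.get(v, []) if u not in matched]
        if free:
            # Greedy on the perturbed weights decides the match.
            u = max(free, key=lambda u: offline_weights[u] * perturb[u])
            matched[u] = v
    return matched
```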
Most discovery systems for silent failures work in two phases: a continuous monitoring phase that detects presence of failures through probe packets and a localization phase that pinpoints the faulty element(s). This separation is important because localization requires significantly more resources than detection and should be initiated only when a fault is present. We focus on improving the efficiency of the detection phase, where the goal is to balance the overhead with the cost associated with longer failure detection times. We formulate a general model which unifies the treatment of probe scheduling mechanisms, stochastic or deterministic, and different cost objectives - minimizing average detection time (SUM) or worst-case detection time (MAX). We then focus on two classes of schedules. Memoryless schedules are a subclass of stochastic schedules which is simple and suitable for distributed deployment. We show that the optimal memoryless schedulers can be efficiently computed by convex programs (for SUM objectives) or linear programs (for MAX objectives), and, surprisingly perhaps, are guaranteed to have expected detection times that are not too far off the (NP-hard) stochastic optima. Deterministic schedules allow us to bound the maximum (rather than expected) cost of undetected faults, but like stochastic schedules, are NP-hard to optimize. We develop novel efficient deterministic schedulers with provable approximation ratios. An extensive simulation study on real networks demonstrates significant performance gains of our memoryless and deterministic schedulers over previous approaches. Our unified treatment also facilitates a clear comparison between different objectives and scheduling mechanisms.
Network management protocols often require timely and meaningful insight about per flow network traffic. This paper introduces Randomized Admission Policy (RAP) - a novel algorithm for the frequency and top-k estimation problems, which are fundamental in network monitoring. We demonstrate space reductions compared to the alternatives by a factor of up to 32 on real packet traces and up to 128 on heavy-tailed workloads. For top-k identification, RAP exhibits memory savings by a factor of between 4 and 64 depending on the workloads' skewness. These empirical results are backed by formal analysis, indicating the asymptotic space improvement of our probabilistic admission approach. Additionally, we present d-Way RAP, a hardware friendly variant of RAP that empirically maintains its space and accuracy benefits.
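A hedged sketch of the probabilistic admission idea behind RAP (a simplification for illustration; the paper's exact policy and analysis are richer): counters for monitored flows live in a bounded table, and a packet from an unmonitored flow takes over a minimal entry with probability 1/(min_count + 1).

```python
import random

class RAPSketch:
    """Sketch of a Randomized Admission Policy counter table: a packet
    from an unmonitored flow evicts a minimal entry with probability
    1/(min_count + 1), inheriting count min_count + 1 (an illustrative
    simplification of the admission rule described above)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.counters = {}  # flow id -> estimated count

    def add(self, flow):
        if flow in self.counters:
            self.counters[flow] += 1
        elif len(self.counters) < self.capacity:
            self.counters[flow] = 1
        else:
            victim = min(self.counters, key=self.counters.get)
            m = self.counters[victim]
            # Probabilistic admission: the new flow takes over the
            # minimal entry with probability 1/(m + 1).
            if random.random() < 1.0 / (m + 1):
                del self.counters[victim]
                self.counters[flow] = m + 1

    def estimate(self, flow):
        return self.counters.get(flow, 0)
```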
Identifying the "heavy hitter" flows or flows with large traffic volumes in the data plane is important for several applications e.g., flow-size aware routing, DoS detection, and traffic engineering. However, … Identifying the "heavy hitter" flows or flows with large traffic volumes in the data plane is important for several applications e.g., flow-size aware routing, DoS detection, and traffic engineering. However, measurement in the data plane is constrained by the need for line-rate processing (at 10-100Gb/s) and limited memory in switching hardware. We propose HashPipe, a heavy hitter detection algorithm using emerging programmable data planes. HashPipe implements a pipeline of hash tables which retain counters for heavy flows while evicting lighter flows over time. We prototype HashPipe in P4 and evaluate it with packet traces from an ISP backbone link and a data center. On the ISP trace (which contains over 400,000 flows), we find that HashPipe identifies 95% of the 300 heaviest flows with less than 80KB of memory.
Monitoring tasks, such as anomaly and DDoS detection, require identifying frequent flow aggregates based on common IP prefixes. These are known as hierarchical heavy hitters (HHH), where the hierarchy is determined based on the type of prefixes of interest in a given application. The per packet complexity of existing HHH algorithms is proportional to the size of the hierarchy, imposing significant overheads.
The need for deeply understanding when algorithms work (or not) has never been greater.
Many systems have to be maintained while the underlying constraints, costs and/or profits change over time. Although the state of a system may evolve over time, a non-negligible transition cost is incurred for transitioning from one state to another. In order to model such situations, Gupta et al. (ICALP 2014) and Eisenstat et al. (ICALP 2014) introduced a multistage model where the input is a sequence of instances (one for each time step), and the goal is to find a sequence of solutions (one for each time step) that are both (i) near optimal for each time step and (ii) as stable as possible. We focus on the multistage version of the Knapsack problem where we are given a time horizon t=1,2,...,T, and a sequence of knapsack instances I_1,I_2,...,I_T, one for each time step, defined on a set of n objects. In every time step t we have to choose a feasible knapsack S_t of I_t, which gives a knapsack profit. To measure the stability/similarity of two consecutive solutions S_t and S_{t+1}, we identify the objects for which the decision, to be picked or not, remains the same in S_t and S_{t+1}, giving a transition profit. We are asked to produce a sequence of solutions S_1,S_2,...,S_T so that the total knapsack profit plus the overall transition profit is maximized. We propose a PTAS for the Multistage Knapsack problem. Then, we prove that there is no FPTAS for the problem even in the case where T=2, unless P=NP. Furthermore, we give a pseudopolynomial time algorithm for the case where the number of steps is bounded by a fixed constant, and we show that otherwise the problem remains NP-hard even in the case where all the weights, profits and capacities are 0 or 1.
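In the notation of the abstract, the objective can be written compactly as follows (a sketch; the per-object, per-step transition bonuses $b_{t,i}$ are made explicit here for illustration and are a modeling choice):

```latex
\max_{S_1,\dots,S_T}\;
\sum_{t=1}^{T} p_t(S_t)
\;+\;
\sum_{t=1}^{T-1}\sum_{i=1}^{n} b_{t,i}\,
\mathbf{1}\big[\, i \in S_t \Leftrightarrow i \in S_{t+1} \big]
\qquad
\text{s.t. } S_t \text{ is a feasible knapsack of } I_t \text{ for all } t,
```

where $p_t(S_t)$ is the knapsack profit of $S_t$ in instance $I_t$, and the indicator awards the transition profit for every object whose picked/not-picked status is unchanged between consecutive steps.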
A natural optimization model that formulates many online resource allocation and revenue management problems is the online linear program (LP) in which the constraint matrix is revealed column by column along with the corresponding objective coefficient. In such a model, a decision variable has to be set each time a column is revealed without observing the future inputs and the goal is to maximize the overall objective function. In this paper, we provide a near-optimal algorithm for this general class of online problems under the assumption of random order of arrival and some mild conditions on the size of the LP right-hand-side input. Specifically, our learning-based algorithm works by dynamically updating a threshold price vector at geometric time intervals, where the dual prices learned from the revealed columns in the previous period are used to determine the sequential decisions in the current period. Due to the feature of dynamic learning, the competitiveness of our algorithm improves over the past study of the same problem. We also present a worst-case example showing that the performance of our algorithm is near-optimal.
In this paper we study three previously unstudied variants of the online Facility Location problem, considering an intrinsic scenario in which clients and facilities are not only allowed to arrive in the system, but can also depart at any moment. We begin with the study of a natural fully-dynamic online uncapacitated model where clients can be both added and removed. When a client arrives, it has to be assigned either to an existing facility or to a new facility opened at the client's location. However, when a client that has also been one of the open facilities is removed, our model must allow reconnecting all clients that were connected to that removed facility. In this model, we present an optimal O(log n_act / log log n_act)-competitive algorithm, where n_act is the number of active clients at the end of the input sequence. Next, we turn our attention to the capacitated Facility Location problem. We first note that if no deletions are allowed, one can achieve an optimal competitive ratio of O(log n / log log n), where n is the length of the sequence. However, when deletions are allowed, the capacitated version of the problem is significantly more challenging than the uncapacitated one. We show that still, using a more sophisticated algorithmic approach, one can obtain an online O(log m + log c log n)-competitive algorithm for the capacitated Facility Location problem in the fully dynamic model, where m is the number of points in the input metric and c is the capacity of any open facility.
We consider the problem of online load balancing under ℓp-norms: sequential jobs need to be assigned to one of the machines and the goal is to minimize the ℓp-norm of the machine loads. This generalizes the classical problem of scheduling for makespan minimization (the case p = ∞) and has been thoroughly studied. However, despite the recent push for beyond worst-case analyses, no such results are known for this problem. In this paper we provide algorithms with simultaneous guarantees for the worst-case model as well as for the random-order (i.e., secretary) model, where an arbitrary set of jobs comes in random order. First, we show that the greedy algorithm (with restart), known to have an optimal O(p) worst-case guarantee, also has a (typically) improved random-order guarantee. However, the behavior of this algorithm in the random-order model degrades with p. We then propose algorithm SimultaneousLB that has simultaneously optimal guarantees (within constants) in both the worst-case and random-order models. In particular, the random-order guarantee of SimultaneousLB improves as p increases. One of the main components is a new algorithm with improved regret for Online Linear Optimization (OLO) over the non-negative vectors in the ℓq ball. Interestingly, this OLO algorithm is also used to prove a purely probabilistic inequality that controls the correlations arising in the random-order model, a common source of difficulty for the analysis. Another important component used in both SimultaneousLB and our OLO algorithm is a smoothing of the ℓp-norm that may be of independent interest. This smoothness property allows us to see algorithm SimultaneousLB as essentially a greedy one in the worst-case model and as a primal-dual one in the random-order model, which is instrumental for its simultaneous guarantees.
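The baseline greedy rule mentioned in the abstract is easy to state in code; the following Python sketch assigns each job to the machine that minimizes the resulting ℓp norm (identical machines and p ≥ 1 assumed; the restarts and smoothed norm used by SimultaneousLB are omitted).

def greedy_lp_norm(job_sizes, num_machines, p):
    # Assign each job to the machine minimizing the resulting l_p norm of the
    # load vector; minimizing the sum of p-th powers is equivalent.
    loads = [0.0] * num_machines
    for size in job_sizes:
        def norm_after(i):
            return sum((loads[j] + (size if j == i else 0.0)) ** p
                       for j in range(num_machines))
        best = min(range(num_machines), key=norm_after)
        loads[best] += size
    return loads

print(greedy_lp_norm([3, 1, 4, 1, 5], num_machines=2, p=2))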
We study online multidimensional variants of the generalized assignment problem which are used to model prominent real-world applications, such as the assignment of virtual machines with multiple resource requirements to physical infrastructure in cloud computing. These problems can be seen as an extension of the well-known secretary problem, and thus the standard online worst-case model cannot provide any performance guarantee. The prevailing model in this case is the random-order model, which provides a useful, realistic and robust alternative. Using this model, we study the $d$-dimensional generalized assignment problem, for which we introduce a novel technique that achieves an $O(d)$-competitive algorithm, and we prove a matching lower bound of $\Omega(d)$. Furthermore, our algorithm improves upon the best-known competitive ratio for the online (one-dimensional) generalized assignment problem and the online knapsack problem.
The secretary problem became one of the most prominent online selection problems due to its numerous applications in online mechanism design. The task is to select a maximum weight subset of elements subject to given constraints, where elements arrive one-by-one in random order, revealing a weight upon arrival. The decision whether to select an element has to be taken immediately after its arrival. The different applications that map to the secretary problem ask for different constraint families to be handled. The most prominent ones are matroid constraints, which both capture many relevant settings and admit strongly competitive secretary algorithms. However, dealing with more involved constraints proved to be much more difficult, and strong algorithms are known only for a few specific settings. In this paper, we present a general framework for dealing with the secretary problem over the intersection of several matroids. This framework allows us to combine and exploit the large set of matroid secretary algorithms known in the literature. As one consequence, we get constant-competitive secretary algorithms over the intersection of any constant number of matroids whose corresponding (single-)matroid secretary problems are currently known to have a constant-competitive algorithm. Moreover, we show that our results extend to submodular objectives.
We introduce a fully online model of maximum cardinality matching in which all vertices arrive online. On the arrival of a vertex, its incident edges to previously-arrived vertices are revealed. Each vertex has a deadline that is after all its neighbors' arrivals. If a vertex remains unmatched until its deadline, the algorithm must then irrevocably either match it to an unmatched neighbor, or leave it unmatched. The model generalizes the existing one-sided online model and is motivated by applications including ride-sharing platforms, real-estate agencies, etc. We show that the Ranking algorithm by Karp et al. (STOC 1990) is 0.5211-competitive in our fully online model for general graphs. Our analysis brings a novel charging mechanism into the randomized primal-dual technique by Devanur et al. (SODA 2013), allowing a vertex other than the two endpoints of a matched edge to share the gain. To our knowledge, this is the first analysis of Ranking that beats 0.5 on general graphs in an online matching problem, a first step towards solving the open problem by Karp et al. (STOC 1990) about the optimality of Ranking on general graphs. If the graph is bipartite, we show that the competitive ratio of Ranking is between 0.5541 and 0.5671. Finally, we prove that the fully online model is strictly harder than the previous model, as no online algorithm can be 0.6317-competitive (note that 0.6317 < 1 − 1/e) in our model, even for bipartite graphs.
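A minimal Python sketch of Ranking in this fully online model (the event stream and neighbor map are hypothetical interfaces introduced for illustration): each vertex draws a uniformly random rank on arrival, and at the deadline of a still-unmatched vertex it is matched to its unmatched neighbor of smallest rank, if one exists.

import random

def fully_online_ranking(events, edges):
    # events: stream of ('arrive', v) and ('deadline', v) pairs, with every
    #         neighbor of v arriving before v's deadline.
    # edges:  maps each vertex to its neighbor set (revealed on arrival in
    #         the real model; given up front here for simplicity).
    rank, match = {}, {}
    for kind, v in events:
        if kind == 'arrive':
            rank[v] = random.random()            # rank drawn at arrival
        elif v not in match:                     # deadline of an unmatched v
            candidates = [u for u in edges[v] if u in rank and u not in match]
            if candidates:
                u = min(candidates, key=rank.get)  # best-ranked free neighbor
                match[v], match[u] = u, v
    return match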
Huang et al. (STOC 2018) introduced the fully online matching problem, a generalization of the classic online bipartite matching problem in that it allows all vertices to arrive online and considers general graphs. They showed that the Ranking algorithm by Karp et al. (STOC 1990) is strictly better than 0.5-competitive, and that the problem is strictly harder than the online bipartite matching problem in that no algorithm can be (1 − 1/e)-competitive. This paper pins down two tight competitive ratios of classic algorithms for the fully online matching problem. For the fractional version of the problem, we show that a natural instantiation of the water-filling algorithm is 2 − √2 ≈ 0.585-competitive, together with a matching hardness result. Interestingly, our hardness result applies to arbitrary algorithms in the edge-arrival models of the online matching problem, improving the state-of-the-art 1/(1 + ln 2) ≈ 0.5906 upper bound. For integral algorithms, we show a tight competitive ratio of ≈ 0.567 for the Ranking algorithm on bipartite graphs, matching a hardness result by Huang et al. (STOC 2018).
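For intuition, here is a discretized water-filling sketch in Python for the simpler one-sided arrival setting (the fully online version treats both endpoints of each edge symmetrically): each arriving vertex pours up to one unit of fractional matching, always onto its currently least-loaded neighbor, never letting any level exceed 1.

def water_filling(arrivals, eps=1e-3):
    # arrivals: list of neighbor sets, one per arriving online vertex.
    # level[u]: total fractional matching currently assigned to u.
    level = {}
    for neighbors in arrivals:
        for u in neighbors:
            level.setdefault(u, 0.0)
        budget = 1.0                              # one unit of water to pour
        while budget > 0:
            avail = [u for u in neighbors if level[u] < 1 - 1e-12]
            if not avail:
                break
            u = min(avail, key=level.get)         # least-loaded neighbor
            pour = min(eps, budget, 1 - level[u])
            level[u] += pour
            budget -= pour
    return level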
The online matching problem was introduced by Karp, Vazirani and Vazirani nearly three decades ago. In that seminal work, they studied this problem in bipartite graphs with vertices arriving only on one side, and presented optimal deterministic and randomized algorithms for this setting. In comparison, more general arrival models, such as edge arrivals and general vertex arrivals, have proven more challenging, and positive results are known only for various relaxations of the problem. In particular, even the basic question of whether randomization allows one to beat the trivially-optimal deterministic competitive ratio of 1/2 for either of these models was open. In this paper, we resolve this question for both these natural arrival models, and show the following. (1) For edge arrivals, randomization does not help: no randomized algorithm is better than 1/2-competitive. (2) For general vertex arrivals, randomization helps: there exists a randomized (1/2 + Ω(1))-competitive online matching algorithm.
The Virtual Network Embedding Problem (VNEP) captures the essence of many resource allocation problems. In the VNEP, customers request resources in the form of Virtual Networks. An embedding of a virtual network on a shared physical infrastructure is the joint mapping of (virtual) nodes to physical servers together with the mapping of (virtual) edges onto paths in the physical network connecting the respective servers. This work initiates the study of approximation algorithms for the VNEP for general request graphs. Concretely, we study the offline setting with admission control: given multiple requests, the task is to embed the most profitable subset while not exceeding resource capacities. Our approximation is based on the randomized rounding of Linear Programming (LP) solutions. Interestingly, we uncover that the standard LP formulation for the VNEP exhibits an inherent structural deficit when considering general virtual network topologies: its solutions cannot be decomposed into valid embeddings. In turn, focusing on the class of cactus request graphs, we devise a novel LP formulation whose solutions can be decomposed. Proving performance guarantees of our rounding scheme, we obtain the first approximation algorithm for the VNEP in the resource augmentation model. We propose different types of rounding heuristics and evaluate their performance in an extensive computational study. Our results indicate that good solutions can be achieved even without resource augmentations. Specifically, heuristic rounding achieves 77.2% of the baseline's profit on average while respecting capacities.
The secretary problem and the game of Googol are classic models for online selection problems that have received significant attention in the last five decades. In this paper we consider a variant of the problem and explore its connections to data-driven online selection. Specifically, we are given n cards with arbitrary nonnegative numbers written on both sides. The cards are randomly placed on n consecutive positions on a table, and for each card, the visible side is also selected at random. The player sees the visible side of all cards and wants to select the card with the maximum hidden value. To this end, the player flips the first card, sees its hidden value and decides whether to pick it or drop it and continue with the next card. We study algorithms for two natural objectives. In the first one, similar to the secretary problem, the player wants to maximize the probability of selecting the maximum hidden value. We show that this can be done with probability at least 0.45292. In the second objective, similar to the prophet inequality, the player wants to maximize the expectation of the selected hidden value. Here we show a guarantee of at least 0.63518 with respect to the expected maximum hidden value. Our algorithms result from combining three basic strategies. The first is to stop as soon as we see a value larger than all of the initial n visible numbers. The second is to stop the first time the last flipped card's value is the largest among the currently visible numbers on the table. The third is similar to the second, but additionally requires the last flipped value to be larger than the value on the other side of its card. We apply our results to the prophet secretary problem with unknown distributions, but with access to a single sample from each distribution. In particular, our guarantee improves upon 1 − 1/e for this problem, which is the currently best known guarantee and only works for the i.i.d. prophet inequality with samples.
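The three stopping rules combine naturally into one Python sketch (illustration only; the paper's guarantees come from carefully mixing these strategies, not from any single rule):

def play(visible, hidden, strategy):
    # visible[i], hidden[i]: the two sides of card i in table order.
    # Flipping card i replaces visible[i] by hidden[i] on the table.
    n = len(visible)
    init_max = max(visible)                      # largest initially visible value
    on_table = list(visible)
    for i in range(n):
        on_table[i] = hidden[i]                  # flip card i
        if strategy == 'beat_initial' and hidden[i] > init_max:
            return i                             # beats all initial visible values
        if strategy == 'current_max' and hidden[i] == max(on_table):
            return i                             # largest currently visible value
        if (strategy == 'both_sides' and hidden[i] == max(on_table)
                and hidden[i] > visible[i]):
            return i                             # ...and beats its own other side
    return n - 1                                 # forced to take the last card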
The random-order or secretary model is one of the most popular beyond-worst-case models for online algorithms. While it avoids the pessimism of the traditional adversarial model, in practice we cannot expect the input to be presented in perfectly random order. This has motivated research on "best of both worlds" algorithms (with good performance on both purely stochastic and purely adversarial inputs), or, even better, on inputs that are a mix of both stochastic and adversarial parts. Unfortunately, the latter seems much harder to achieve, and very few results of this type are known. Towards advancing our understanding of designing such robust algorithms, we propose a random-order model with bursts of adversarial time steps. The assumption of burstiness of unexpected patterns is reasonable in many contexts, since changes (e.g., a spike in demand for a good) are often triggered by a common external event. We then consider the Knapsack Secretary problem in this model: there is a knapsack of size $k$ (e.g., the available quantity of a good), and in each of the $n$ time steps an item comes with its value and size in $[0,1]$, and the algorithm needs to make an irrevocable decision whether to accept or reject the item. We design an algorithm that gives an approximation of $1 - \tilde{O}(\Gamma/k)$ when the adversarial time steps can be covered by $\Gamma \ge \sqrt{k}$ intervals of size $\tilde{O}(\frac{n}{k})$. In particular, setting $\Gamma = \sqrt{k}$ gives a $(1 - O(\frac{\ln^2 k}{\sqrt{k}}))$-approximation that is resistant to up to a $\frac{\ln^2 k}{\sqrt{k}}$-fraction of the items being adversarial, which is almost optimal even in the absence of adversarial items. Also, setting $\Gamma = \tilde{\Omega}(k)$ gives a constant approximation that is resistant to up to a constant fraction of items being adversarial.
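As a point of reference, the classic sample-then-price approach to the Knapsack Secretary problem, which a burst-robust algorithm must harden against adversarial corruption, looks roughly as follows in Python (a baseline sketch, assuming item sizes are strictly positive; this is not the paper's algorithm).

import random

def knapsack_secretary_baseline(items, k, sample_frac=0.5):
    # items: list of (value, size) pairs in arrival order, sizes in (0, 1].
    # Phase 1: observe a prefix without accepting; estimate a price per unit
    # of capacity from the sample's fractional optimum. Phase 2: accept any
    # item whose value density clears the price and that still fits.
    n = len(items)
    cut = int(sample_frac * n)
    sample = sorted(items[:cut], key=lambda vs: vs[0] / vs[1], reverse=True)
    cap, opt = sample_frac * k, 0.0
    for v, s in sample:                    # greedy fractional knapsack optimum
        take = min(s, cap)
        opt += v * take / s
        cap -= take
        if cap <= 0:
            break
    price = opt / (sample_frac * k)        # estimated price per unit of size
    used = total = 0.0
    for v, s in items[cut:]:
        if v / s >= price and used + s <= k:
            used += s
            total += v
    return total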
A prophet inequality states, for some $\alpha \in [0,1]$, that the expected value achievable by a gambler who sequentially observes random variables $X_1,\dots,X_n$ and selects one of them is at least an $\alpha$ fraction of the maximum value in the sequence. We obtain three distinct improvements for a setting that was first studied by Correa et al. (EC, 2019) and is particularly relevant to modern applications in algorithmic pricing. In this setting, the random variables are i.i.d. from an unknown distribution and the gambler has access to an additional $\beta n$ samples for some $\beta \geq 0$. We first give improved lower bounds on $\alpha$ for a wide range of values of $\beta$; specifically, $\alpha \geq (1+\beta)/e$ when $\beta \leq 1/(e-1)$, which is tight, and $\alpha \geq 0.648$ when $\beta = 1$, which improves on a bound of around $0.635$ due to Correa et al. (SODA, 2020). Adding to their practical appeal, specifically in the context of algorithmic pricing, we then show that the new bounds can be obtained even in a streaming model of computation, and thus in situations where the use of relevant data is complicated by the sheer amount of data available. We finally establish that the upper bound of $1/e$ for the case without samples is robust to additional information about the distribution, and applies also to sequences of i.i.d. random variables whose distribution is itself drawn, according to a known distribution, from a finite set of known candidate distributions. This implies a tight prophet inequality for exchangeable sequences of random variables, answering a question of Hill and Kertz (Contemporary Mathematics, 1992), but leaves open the possibility of better guarantees when the number of candidate distributions is small, a setting we believe is of strong interest to applications.
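The simplest policy of this flavor, accepting the first value that beats the largest of the $\beta n$ samples, is a few lines of Python (a baseline sketch; the paper's improved and streaming bounds use more refined thresholds).

def sample_max_policy(samples, stream):
    # Accept the first online value exceeding the largest sample; if nothing
    # clears the bar, take the final value (a standard convention).
    tau = max(samples, default=float('-inf'))
    for i, x in enumerate(stream):
        if x > tau or i == len(stream) - 1:
            return x
    return None  # empty stream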
A multiple knapsack constraint over a set of items is defined by a set of bins of arbitrary capacities, and a weight for each of the items. An assignment for the constraint is an allocation of subsets of items to the bins such that the total weight of items assigned to each bin is bounded by the bin capacity. We study modular (linear) and submodular maximization problems subject to a constant number of (i.e., $d$-resource) multiple knapsack constraints, in which a solution is a subset of items, along with an assignment of the selected items for each of the $d$ multiple knapsack constraints. Our results include a polynomial time approximation scheme for modular maximization with a constant number of multiple knapsack constraints and a matroid constraint, thus generalizing the best known results for the classic multiple knapsack problem as well as $d$-dimensional knapsack, for any $d \geq 2$. We further obtain a tight $(1-e^{-1}-\epsilon)$-approximation for monotone submodular optimization subject to a constant number of multiple knapsack constraints and a matroid constraint, and a $(0.385-\epsilon)$-approximation for non-monotone submodular optimization subject to a constant number of multiple knapsack constraints. At the heart of our algorithms lies a novel representation of a multiple knapsack constraint as a polytope. We consider this key tool a main technical contribution of this paper.
We provide online algorithms for secretary matching in general weighted graphs, under the well-studied models of vertex and edge arrivals. In both models, edges are associated with arbitrary weights that are unknown from the outset and are revealed online. Under vertex arrival, vertices arrive online in a uniformly random order; upon the arrival of a vertex $v$, the weights of edges from $v$ to all previously arrived vertices are revealed, and the algorithm decides which of these edges, if any, to include in the matching. Under edge arrival, edges arrive online in a uniformly random order; upon the arrival of an edge $e$, its weight is revealed, and the algorithm decides whether to include it in the matching or not. We provide a $5/12$-competitive algorithm for vertex arrival, and show it is tight. For edge arrival, we provide a $1/4$-competitive algorithm. Both results improve upon the state-of-the-art bounds for the corresponding settings. Interestingly, for vertex arrival, secretary matching in general graphs outperforms secretary matching in bipartite graphs with one-sided arrival, where $1/e$ is the best possible guarantee.
Programmable network switches promise flexibility and high throughput, enabling applications such as load balancing and traffic engineering. Network measurement is a fundamental building block for such applications, including tasks such as the identification of heavy hitters (largest flows) or the detection of traffic changes. However, high-throughput packet processing architectures place certain limitations on the programming model, such as restricted branching, limited capability for memory access, and a limited number of processing stages. These limitations restrict the types of measurement algorithms that can run on programmable switches. In this paper, we focus on the RMT programmable high-throughput switch architecture, and carefully examine its constraints on designing measurement algorithms. We demonstrate our findings while solving the heavy hitter problem. We introduce PRECISION, an algorithm that uses Probabilistic Recirculation to find top flows on a programmable switch. By recirculating a small fraction of packets, PRECISION simplifies the access to stateful memory to conform with RMT limitations and achieves higher accuracy than previous heavy hitter detection algorithms that avoid recirculation. We also analyze the effect of each architectural constraint on the measurement accuracy and provide insights for measurement algorithm designers.
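A software sketch of PRECISION's core update in Python (an illustration of the idea only; the real algorithm runs across RMT pipeline stages with hardware hash units): a hit increments a counter, while a miss claims the smallest counter encountered with probability 1/(c+1), emulating the probabilistic recirculation step.

import random

def make_tables(stages=3, width=1024):
    # A few hash-indexed stages of (flow_key, counter) pairs.
    return [[(None, 0)] * width for _ in range(stages)]

def process(tables, key):
    min_stage = min_slot = None
    min_val = float('inf')
    for s, table in enumerate(tables):
        i = hash((s, key)) % len(table)           # per-stage hash function
        k, c = table[i]
        if k == key:
            table[i] = (key, c + 1)               # hit: simple increment
            return
        if c < min_val:                           # remember smallest counter seen
            min_stage, min_slot, min_val = s, i, c
    if random.random() < 1.0 / (min_val + 1):     # "probabilistic recirculation"
        tables[min_stage][min_slot] = (key, min_val + 1)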
Nearly thirty years ago, Bar-Noy, Motwani and Naor [IPL'92] conjectured that an online (1 + o(1))Δ-edge-coloring algorithm exists for n-node graphs of maximum degree Δ = ω(log n). This conjecture remains open in general, though it was recently proven for bipartite graphs under one-sided vertex arrivals by Cohen et al. [FOCS'19]. In a similar vein, we study edge coloring under widely-studied relaxations of the online model. Our main result is in the random-order online model. For this model, known results fall short of the Bar-Noy et al. conjecture, either in the degree bound [Aggarwal et al. FOCS'03] or in the number of colors used [Bahmani et al. SODA'10]. We achieve the best of both worlds, thus resolving the Bar-Noy et al. conjecture in the affirmative for this model. Our second result is in the adversarial online (and dynamic) model with recourse. A recent algorithm of Duan et al. [SODA'19] yields a (1 + ε)Δ-edge-coloring with poly(log n/ε) recourse. We achieve the same with poly(1/ε) recourse, thus removing all dependence on n. Underlying our results is one common offline algorithm, which we show how to implement in these two online models. Our algorithm, based on the Rödl Nibble Method, is an adaptation of the distributed algorithm of Dubhashi et al. [TCS'98]. The Nibble Method has proven successful for distributed edge coloring. We display its usefulness in the context of online algorithms.
In the secretary problem we are faced with an online sequence of elements with values. Upon seeing an element we have to make an irrevocable take-it-or-leave-it decision. The goal is to maximize the probability of picking the element of maximum value. The most classic version of the problem is that in which the elements arrive in random order and their values are arbitrary. Here, the optimal algorithm picks the maximum value with probability at least 1/e. However, by varying the available information, new interesting problems arise. For instance, in the full information variant of the secretary problem the values are i.i.d. samples from a known distribution. Naturally, the best possible success probability increases and turns out to be approximately 0.58. Also, the case in which the arrival order is adversarial instead of random leads to interesting variants that have been considered in the literature. In this paper we study both the random order and adversarial order secretary problems with an additional twist. The values are arbitrary, but before starting the online sequence we independently sample each element with a fixed probability p. The sampled elements become our information or history set and the game is played over the remaining elements. We call these problems the random order secretary problem with p-sampling (ROSp for short) and the adversarial order secretary problem with p-sampling (AOSp for short). Our main result is to obtain best possible algorithms for both problems and all values of p. As p grows to 1 the obtained guarantees converge to the optimal guarantees in the full information case. In the adversarial order setting, the best possible algorithm turns out to be a simple fixed threshold algorithm in which the optimal threshold is a function of p only. Therefore, even knowledge of the total number of elements is unnecessary. Proving that this algorithm is optimal involves a novel technique, which boils down to analyzing a related game in a conflict graph over binary sequences. In the random order setting we prove that the best possible algorithm is characterized by a fixed sequence of time thresholds, dictating at which point in time we should start accepting a value that is both a maximum of the online sequence and has a given ranking within the sampled elements. Surprisingly, this sequence of time thresholds arises from a separable and convex optimization problem whose solution is independent of p.
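A minimal Python sketch of an AOSp-style fixed-threshold rule (the threshold index k is left as a parameter here; which order statistic of the sample to beat, as a function of p alone, is exactly what the paper optimizes):

import random

def aos_p_sampling(elements, p, k=1):
    # Each element lands in the history set independently with probability p;
    # the game is played over the rest. Accept the first online element that
    # exceeds the k-th largest sampled value (k=1: beat the sample's maximum).
    sample, online = [], []
    for x in elements:
        (sample if random.random() < p else online).append(x)
    top = sorted(sample, reverse=True)
    tau = top[k - 1] if len(top) >= k else float('-inf')
    for x in online:
        if x > tau:
            return x
    return None                                   # may end with no selection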
Online bipartite matching is one of the most fundamental problems in the online algorithms literature. Karp, Vazirani, and Vazirani (STOC 1990) introduced an elegant algorithm for unweighted bipartite matching that achieves an optimal competitive ratio of 1 − 1/e. Aggarwal et al. (SODA 2011) later generalized their algorithm and analysis to the vertex-weighted case. Little is known, however, about the most general edge-weighted problem aside from the trivial 1/2-competitive greedy algorithm. In this paper, we present the first online algorithm that breaks the long-standing 1/2 barrier and achieves a competitive ratio of at least 0.5086. In light of the hardness result of Kapralov, Post, and Vondrák (SODA 2013) that restricts beating a 1/2 competitive ratio for the more general problem of monotone submodular welfare maximization, our result can be seen as strong evidence that edge-weighted bipartite matching is strictly easier than submodular welfare maximization in the online setting. The main ingredient in our online matching algorithm is a novel subroutine called online correlated selection (OCS), which takes a sequence of pairs of vertices as input and selects one vertex from each pair. Instead of using a fresh random bit to choose a vertex from each pair, the OCS negatively correlates decisions across different pairs and provides a quantitative measure on the level of correlation. We believe our OCS technique is of independent interest and will find further applications in other online optimization problems.
The study of the prophet inequality problem in the limited information regime was initiated by Azar et al. [SODA'14] in the pursuit of prior-independent posted-price mechanisms. As they show, $O(1)$-competitive policies are achievable using only a single sample from the distribution of each agent. A notable portion of their results relies on reducing the design of single-sample prophet inequalities (SSPIs) to that of order-oblivious secretary (OOS) policies. This reduction comes at the cost of not fully utilizing the available samples; however, to date, it is essentially the only method for proving SSPIs for many combinatorial sets. Very recently, Rubinstein et al. [ITCS'20] gave a surprisingly simple algorithm that achieves the optimal competitive ratio for the single-choice SSPI problem, a result which is unobtainable going through the reduction to secretary problems. Motivated by this discrepancy, we study the competitiveness of simple SSPI policies directly, without appealing to results from the OOS literature. In this direction, we first develop a framework for analyzing policies against a greedy-like prophet solution. Using this framework, we obtain the first SSPI for general (non-bipartite) matching environments, as well as improved competitive ratios for transversal and truncated partition matroids. Second, motivated by the observation that many OOS policies for matroids decompose the problem into independent rank-$1$ instances, we provide a meta-theorem which applies to any matroid satisfying this partition property. Leveraging the recent results by Rubinstein et al., we obtain improved competitive guarantees (most by a factor of $2$) for a number of matroids captured by the reduction of Azar et al. Finally, we discuss applications of our SSPIs to the design of mechanisms for multi-dimensional limited information settings with improved revenue and welfare guarantees.
Distributed cloud networking builds on network functions virtualization (NFV) and software defined networking (SDN) to enable the deployment of network services in the form of elastic virtual network functions (VNFs) instantiated over general purpose servers at distributed cloud locations. We address the design of fast approximation algorithms for the NFV service distribution problem (NSDP), whose goal is to determine the placement of VNFs, the routing of service flows, and the associated allocation of cloud and network resources that satisfy client demands with minimum cost. We show that in the case of load-proportional costs, the resulting fractional NSDP can be formulated as a multi-commodity-chain flow problem on a cloud-augmented graph, and we design a queue-length based algorithm, named QNSD, that provides an $O(\epsilon)$ approximation in time $O(1/\epsilon)$. We then address the case in which resource costs are a function of the integer number of allocated resources and design a variation of QNSD that effectively pushes for flow consolidation into a limited number of active resources to minimize overall cloud network cost.
This paper proposes a new approximation algorithm for the offline Virtual Network Embedding Problem (VNEP) with latency constraints. Our approximation algorithm Flex allows for (slight) violations of the latency constraints in order to greatly lower the runtime. It relies on a reduction to the Restricted Shortest Path Problem (RSP) and leverages a classic result by Goel et al. We complement our formal analysis with a simulation study demonstrating our algorithm's computational benefits. Our results generalize to any other additive edge metric, e.g., hop count or even packet loss probability.
We provide nearly optimal algorithms for online facility location (OFL) with predictions. In OFL, $n$ demand points arrive in order and the algorithm must irrevocably assign each demand point to an open facility upon its arrival. The objective is to minimize the total connection costs from demand points to assigned facilities plus the facility opening cost. We further assume the algorithm is additionally given for each demand point $x_i$ a natural prediction $f_{x_i}^{\mathrm{pred}}$ which is supposed to be the facility $f_{x_i}^{\mathrm{opt}}$ that serves $x_i$ in the offline optimal solution. Our main result is an $O(\min\{\log {\frac{n\eta_\infty}{\mathrm{OPT}}}, \log{n} \})$-competitive algorithm where $\eta_\infty$ is the maximum prediction error (i.e., the distance between $f_{x_i}^{\mathrm{pred}}$ and $f_{x_i}^{\mathrm{opt}}$). Our algorithm overcomes the fundamental $\Omega(\frac{\log n}{\log \log n})$ lower bound of OFL (without predictions) when $\eta_\infty$ is small, and it still maintains $O(\log n)$ ratio even when $\eta_\infty$ is unbounded. Furthermore, our theoretical analysis is supported by empirical evaluations for the tradeoffs between $\eta_\infty$ and the competitive ratio on various real datasets of different types.
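For illustration, a hypothetical one-dimensional sketch in Python of how predictions can be folded into a Meyerson-style online facility location rule (this is not the paper's algorithm; the trust rule and the open-with-probability-distance/f step are simplifications for exposition):

import random

def ofl_with_predictions(demands, predictions, f):
    # demands[i]: location of demand i on the line; predictions[i]: predicted
    # facility location for demand i; f: uniform facility opening cost.
    open_facilities = []
    cost = 0.0
    for x, pred in zip(demands, predictions):
        d_open = min((abs(x - g) for g in open_facilities), default=float('inf'))
        d_pred = abs(x - pred)
        if d_pred < d_open:                # trust the prediction: open there
            if pred not in open_facilities:
                open_facilities.append(pred)
                cost += f
            cost += d_pred
        elif random.random() < min(1.0, d_open / f):
            open_facilities.append(x)      # Meyerson-style coin flip: open at x
            cost += f
        else:
            cost += d_open                 # connect to the nearest open facility
    return cost, open_facilities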
Online algorithms with predictions is a popular and elegant framework for bypassing pessimistic lower bounds in competitive analysis. In this model, online algorithms are supplied with future predictions and the goal is for the competitive ratio to smoothly interpolate between the best offline and online bounds as a function of the prediction error. In this paper, we study online graph problems with predictions. Our contributions are the following. The first question is defining prediction error. For graph/metric problems, there can be two types of error: locations that are not predicted, and locations that are predicted but where the predicted and actual locations do not coincide exactly. We design a novel definition of prediction error called metric error with outliers to simultaneously capture both types of errors, which thereby generalizes previous definitions of error that only capture one of the two error types. We give a general framework for obtaining online algorithms with predictions that combines, in a "black box" fashion, existing online and offline algorithms, under certain technical conditions. To the best of our knowledge, this is the first general-purpose tool for obtaining online algorithms with predictions. Using our framework, we obtain tight bounds on the competitive ratio of several classical graph problems as a function of metric error with outliers: Steiner tree, Steiner forest, priority Steiner tree/forest, and uncapacitated/capacitated facility location. Both the definition of metric error with outliers and the general framework for combining offline and online algorithms are not specific to the problems that we consider in this paper. We hope that these will be useful for future work on other problems in this domain.
We study the problem of solving Packing Integer Programs (PIPs) in the online setting, where columns in [0, 1]^d of the constraint matrix are revealed sequentially, and the goal is to pick a subset of the columns that sum to at most B in each coordinate while maximizing the objective. Excellent results are known in the secretary setting, where the columns are adversarially chosen, but presented in a uniformly random order. However, these existing algorithms are susceptible to adversarial attacks: they try to "learn" characteristics of a good solution, but tend to over-fit to the model, and hence a small number of adversarial corruptions can cause the algorithm to fail. In this paper, we give the first robust algorithms for Packing Integer Programs, specifically in the recently proposed Byzantine Secretary framework [BGSZ20]. Our techniques are based on a two-level use of online learning: to robustly learn an approximation to the optimal value, and then to use this robust estimate to pick a good solution. These techniques are general, and we use them to design robust algorithms for PIPs in the prophet model as well, specifically in the Prophet-with-Augmentations framework [ISW20]. We also improve known results in the Byzantine Secretary framework: we make the non-constructive results algorithmic and improve the existing bounds for single-item and matroid constraints.
Following the research agenda initiated by Munoz & Vassilvitskii [1] and Lykouris & Vassilvitskii [2] on learning-augmented online algorithms for classical online optimization problems, in this work, we consider the Online Facility Location problem under this framework. In Online Facility Location (OFL), demands arrive one-by-one in a metric space and must be (irrevocably) assigned to an open facility upon arrival, without any knowledge about future demands. We present an online algorithm for OFL that exploits potentially imperfect predictions on the locations of the optimal facilities. We prove that the competitive ratio decreases smoothly from sublogarithmic in the number of demands to constant, as the error, i.e., the total distance of the predicted locations to the optimal facility locations, decreases towards zero. We complement our analysis with a matching lower bound establishing that the dependence of the algorithm's competitive ratio on the error is optimal, up to constant factors. Finally, we evaluate our algorithm on real world data and compare our learning augmented approach with the current best online algorithm for the problem.