
Short-term probabilistic forecasts of the trajectory of the COVID-19 pandemic in the United States have served as a visible and important communication channel between the scientific modeling community and both the general public and decision-makers. Forecasting models provide specific, quantitative, and evaluable predictions that inform short-term decisions such as healthcare staffing needs, school closures, and allocation of medical supplies. Starting in April 2020, the US COVID-19 Forecast Hub (https://covid19forecasthub.org/) collected, disseminated, and synthesized tens of millions of specific predictions from more than 90 different academic, industry, and independent research groups. A multimodel ensemble forecast that combined predictions from dozens of groups every week provided the most consistently accurate probabilistic forecasts of incident deaths due to COVID-19 at the state and national level from April 2020 through October 2021. The performance of 27 individual models that submitted complete forecasts of COVID-19 deaths consistently throughout this year showed high variability in forecast skill across time, geospatial units, and forecast horizons. Two-thirds of the models evaluated showed better accuracy than a naïve baseline model. Forecast accuracy degraded as models made predictions further into the future, with probabilistic error at a 20-wk horizon three to five times larger than when predicting at a 1-wk horizon. This project underscores the role that collaboration and active coordination between governmental public-health agencies, academic modeling teams, and industry partners can play in developing modern modeling capabilities to support local, state, and federal response to outbreaks.
Academic researchers, government agencies, industry groups, and individuals have produced forecasts at an unprecedented scale during the COVID-19 pandemic. To leverage these forecasts, the United States Centers for Disease Control and Prevention (CDC) partnered with an academic research lab at the University of Massachusetts Amherst to create the US COVID-19 Forecast Hub. Launched in April 2020, the Forecast Hub is a dataset with point and probabilistic forecasts of incident cases, incident hospitalizations, incident deaths, and cumulative deaths due to COVID-19 at the county, state, and national levels in the United States. Included forecasts represent a variety of modeling approaches, data sources, and assumptions regarding the spread of COVID-19. The goal of this dataset is to establish a standardized and comparable set of short-term forecasts from modeling teams. These data can be used to develop ensemble models, communicate forecasts to the public, create visualizations, compare models, and inform policies regarding COVID-19 mitigation. These open-source data are available via download from GitHub, through an online API, and through R packages.
Short-term probabilistic forecasts of the trajectory of the COVID-19 pandemic in the United States have served as a visible and important communication channel between the scientific modeling community and both the general public and decision-makers. Forecasting models provide specific, quantitative, and evaluable predictions that inform short-term decisions such as healthcare staffing needs, school closures, and allocation of medical supplies. Starting in April 2020, the US COVID-19 Forecast Hub (https://covid19forecasthub.org/) collected, disseminated, and synthesized tens of millions of specific predictions from more than 90 different academic, industry, and independent research groups. A multi-model ensemble forecast that combined predictions from dozens of different research groups every week provided the most consistently accurate probabilistic forecasts of incident deaths due to COVID-19 at the state and national level from April 2020 through October 2021. The performance of 27 individual models that submitted complete forecasts of COVID-19 deaths consistently throughout this year showed high variability in forecast skill across time, geospatial units, and forecast horizons. Two-thirds of the models evaluated showed better accuracy than a naïve baseline model. Forecast accuracy degraded as models made predictions further into the future, with probabilistic error at a 20-week horizon 3-5 times larger than when predicting at a 1-week horizon. This project underscores the role that collaboration and active coordination between governmental public health agencies, academic modeling teams, and industry partners can play in developing modern modeling capabilities to support local, state, and federal response to outbreaks. Significance Statement: This paper compares the probabilistic accuracy of short-term forecasts of reported deaths due to COVID-19 during the first year and a half of the pandemic in the US. Results show high variation in accuracy between and within stand-alone models, and more consistent accuracy from an ensemble model that combined forecasts from all eligible models. This demonstrates that an ensemble model provided a reliable and comparatively accurate means of forecasting deaths during the COVID-19 pandemic that exceeded the performance of all of the models that contributed to it. This work strengthens the evidence base for synthesizing multiple models to support public health action.
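The ensemble described above combines quantile forecasts from many models into a single probabilistic forecast. The Hub's exact combination rule is not specified in these abstracts; the sketch below assumes a simple equally weighted, quantile-wise median ensemble, which is one common choice for this kind of aggregation, and all model names and numbers are illustrative only.

```python
import numpy as np

# Illustrative only: each row is one model's forecast of weekly incident deaths
# for a single location and horizon, reported at a common set of quantile levels.
quantile_levels = [0.025, 0.25, 0.5, 0.75, 0.975]
model_forecasts = {
    "model_A": [310, 420, 500, 590, 720],   # hypothetical submissions
    "model_B": [280, 400, 480, 560, 700],
    "model_C": [350, 450, 540, 640, 800],
}

# Quantile-wise median ensemble: for each quantile level, take the median of the
# member models' predicted quantiles (an assumed, simplified combination rule).
members = np.array(list(model_forecasts.values()))
ensemble = np.median(members, axis=0)

for q, v in zip(quantile_levels, ensemble):
    print(f"q={q:>5}: {v:.0f} deaths")
```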
A multitude of forecasting efforts have arisen to support management of the ongoing COVID-19 epidemic. These efforts typically rely on a variant of the SIR process and have illustrated that building effective forecasts for an epidemic in its early stages is challenging. This is perhaps surprising since these models rely on a small number of parameters and typically provide an excellent retrospective fit to the evolution of a disease. So motivated, we provide an analysis of the limits to estimating an SIR process. We show that no unbiased estimator can hope to learn this process until observing enough of the epidemic so that one is approximately two-thirds of the way to reaching the peak for new infections. Our analysis provides insight into a regularization strategy that permits effective learning across simultaneously and asynchronously evolving epidemics. This strategy has been used to produce accurate, granular predictions for the COVID-19 epidemic that have found large-scale practical application in a large US state.
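For reference, the deterministic SIR dynamics underlying this class of models are reproduced below; the paper analyzes limits on estimating (a stochastic variant of) this process, and the exact observation model is not spelled out in the abstract, so only the standard compartmental equations are shown.

```latex
\frac{dS}{dt} = -\beta \frac{S I}{N}, \qquad
\frac{dI}{dt} = \beta \frac{S I}{N} - \gamma I, \qquad
\frac{dR}{dt} = \gamma I, \qquad N = S + I + R
```

Here β is the transmission rate and γ the recovery rate; the infectious prevalence I(t) peaks once the susceptible fraction S/N falls to γ/β.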
Exploration is often necessary in online learning to maximize long-term reward, but it comes at the cost of short-term 'regret'. We study how this cost of exploration is shared across multiple groups. For example, in a clinical trial setting, patients who are assigned a sub-optimal treatment effectively incur the cost of exploration. When patients are associated with natural groups on the basis of, say, race or age, it is natural to ask whether the cost of exploration borne by any single group is 'fair'. So motivated, we introduce the 'grouped' bandit model. We leverage the theory of axiomatic bargaining, and the Nash bargaining solution in particular, to formalize what might constitute a fair division of the cost of exploration across groups. On the one hand, we show that any regret-optimal policy strikingly results in the least fair outcome: such policies will perversely leverage the most 'disadvantaged' groups when they can. More constructively, we derive policies that are optimally fair and simultaneously enjoy a small 'price of fairness'. We illustrate the relative merits of our algorithmic framework with a case study on contextual bandits for warfarin dosing where we are concerned with the cost of exploration across multiple races and age groups.
This paper provides the first sample complexity lower bounds for the estimation of simple diffusion models which seek to explain the diffusion of an epidemic in a network. The Susceptible-Infected-Recovered (SIR) model is a classic example, proposed nearly a century ago [2]. The SIR model remains a cornerstone for the forecasting of epidemics. The so-called Bass model [1] remains a basic building block in forecasting consumer adoption of new products and services. The durability of these models arises from the fact that they have shown an excellent fit to data, in numerous studies spanning both the epidemiology and marketing literatures. Somewhat paradoxically, using these same models as reliable forecasting tools presents a challenge.
We consider prophet inequalities in a setting where agents correspond to both elements in a matroid and vertices in a graph. A set of agents is feasible if they form both an independent set in the matroid and an independent set in the graph. Our main result is an ex-ante 1/(2d+2)-prophet inequality, where d is a graph parameter upper-bounded by the maximum size of an independent set in the neighborhood of any vertex. We establish this result through a framework that sets both dynamic prices for elements in the matroid (using the method of balanced thresholds), and static but discriminatory prices for vertices in the graph (motivated by recent developments in approximate dynamic programming). The threshold for accepting an agent is then the sum of these two prices. We show that for graphs induced by a certain family of interval-scheduling constraints, the value of d is 1. Our framework thus provides the first constant-factor prophet inequality when there are both matroid-independence constraints and interval-scheduling constraints. It also unifies and improves several results from the literature, leading to a 1/2-prophet inequality when agents have XOS valuation functions over a set of items and use them for a finite interval duration, and more generally, a 1/(d+1)-prophet inequality when these items each require a bundle of d resources to procure.
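As a heavily simplified illustration of the pricing framework just described, the sketch below accepts an arriving agent only if its value clears the sum of a matroid-side price and a vertex-side price, and acceptance keeps the set feasible. Here the matroid is a uniform matroid of rank k, the graph constraint is pairwise interval conflicts, and the prices are placeholders rather than the paper's balanced thresholds or discriminatory vertex prices.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    value: float
    interval: tuple  # (start, end); conflicts with overlapping accepted agents

def overlaps(a, b):
    return a[0] < b[1] and b[0] < a[1]

def accept_online(agents, k, matroid_price, vertex_price):
    """Threshold rule: accept an agent iff value >= matroid_price + vertex_price
    and acceptance keeps the set feasible (<= k agents, no interval overlap).
    Prices here are illustrative placeholders, not the paper's thresholds."""
    accepted = []
    for a in agents:
        if a.value < matroid_price + vertex_price:
            continue
        if len(accepted) >= k:                                        # uniform-matroid feasibility
            continue
        if any(overlaps(a.interval, b.interval) for b in accepted):   # graph feasibility
            continue
        accepted.append(a)
    return accepted

agents = [Agent(3.0, (0, 2)), Agent(5.0, (1, 3)), Agent(4.0, (4, 6))]
print([a.value for a in accept_online(agents, k=2, matroid_price=1.5, vertex_price=2.0)])
```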
Exploration is often necessary in online learning to maximize long-term rewards, but it comes at the cost of short-term “regret.” We study how this cost of exploration is shared across multiple groups. For example, in a clinical trial setting, patients who are assigned a suboptimal treatment effectively incur the cost of exploration. When patients are associated with natural groups on the basis of, say, race or age, it is natural to ask whether the cost of exploration borne by any single group is “fair.” So motivated, we introduce the “grouped” bandit model. We leverage the theory of axiomatic bargaining, and the Nash bargaining solution in particular, to formalize what might constitute a fair division of the cost of exploration across groups. On one hand, we show that any regret-optimal policy strikingly results in the least fair outcome: such policies will perversely leverage the most “disadvantaged” groups when they can. More constructively, we derive policies that are optimally fair and simultaneously enjoy a small “price of fairness.” We illustrate the relative merits of our algorithmic framework with a case study on contextual bandits for warfarin dosing where we are concerned with the cost of exploration across multiple races and age groups. This paper was accepted by David Simchi-Levi, data science. Funding: This work was supported by the National Science Foundation, Division of Civil, Mechanical and Manufacturing Innovation [Grant 1727239]. Supplemental Material: The online appendix and data files are available at https://doi.org/10.1287/mnsc.2022.01985 .
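To make the bargaining viewpoint concrete, the toy example below applies the Nash bargaining solution to divide a fixed total exploration cost across groups: among allocations summing to the total, it maximizes the sum of log gains relative to a disagreement point. This is the generic Nash solution on made-up numbers; the paper's disagreement point and feasible set come from the grouped bandit model itself and are not reproduced here.

```python
import numpy as np

# Toy illustration of the Nash bargaining solution for splitting a shared
# exploration cost. c[g] = regret group g would incur exploring on its own
# (the disagreement point); R_total = total regret of a shared policy.
# All numbers are hypothetical.
c = np.array([10.0, 6.0, 8.0])
R_total = 15.0  # less than c.sum(): sharing exploration helps every group

# Nash solution maximizes sum_g log(c[g] - r[g]) s.t. sum_g r[g] = R_total,
# 0 <= r[g] <= c[g]. When no bound binds, it equalizes the gains c[g] - r[g].
gain = (c.sum() - R_total) / len(c)
r = c - gain
assert np.all(r >= 0), "clipping at r >= 0 is omitted in this toy example"

print("cost share per group:", r)          # [7. 3. 5.]
print("equalized gain per group:", c - r)  # [3. 3. 3.]
```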
This paper studies a dynamic model of interactions between a firm and job applicants to understand long-term hiring outcomes. In each round, the firm decides which applicants to hire, where the firm’s ability to evaluate each applicant is imperfect. Each applicant belongs to a group, and central to our model is the concept that the firm’s evaluation ability is influenced by its past hiring decisions with the applicant’s group. The dynamic nature of our model enables us to characterize the steady-state outcome of this process, which then reveals a mechanism of why disparities between groups may emerge. Specifically, we show that two groups of applicants with identical underlying skill distributions may converge to completely different long-term hiring outcomes, and this disparity is fueled by differences in the firm’s initial ability to evaluate applicants of each group. This demonstrates that a minor initial disadvantage can amplify over time through a feedback loop, leading to persistent and exacerbating disparities even when firms make rational hiring decisions. Finally, we study the effectiveness of short-term interventions that aim to improve long-term hiring outcomes.
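The feedback loop described above can be illustrated with a toy simulation: two groups with identical skill distributions, but the firm's evaluation noise for a group shrinks as it hires more applicants from that group, and the firm hires rationally by shrinking noisy signals toward the prior. The functional forms below (Gaussian skills and noise, a 1/experience noise decay, a fixed hiring bar) are assumptions for illustration, not the paper's model; in particular, this toy version lets the lagging group eventually catch up, whereas the paper characterizes steady states in which disparities persist.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_per_group = 200, 5
bar = 0.8                               # hire if the posterior mean skill clears this bar
experience = {"A": 50.0, "B": 1.0}      # group B starts with less evaluation experience
hires = {"A": 0, "B": 0}

def noise_var(exp):
    # assumed: evaluation noise shrinks as the firm gains experience with a group
    return 4.0 / exp

for _ in range(T):
    for g in ("A", "B"):
        v = noise_var(experience[g])
        skills = rng.normal(0.0, 1.0, n_per_group)               # identical skill distributions
        signals = skills + rng.normal(0.0, np.sqrt(v), n_per_group)
        post_mean = signals / (1.0 + v)                          # rational Bayesian shrinkage
        hired = int((post_mean > bar).sum())
        hires[g] += hired
        experience[g] += hired                                   # the feedback loop

print(hires)  # group B accumulates fewer hires despite identical skill distributions
```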
Thompson sampling has become a ubiquitous approach to online decision problems with bandit feedback. The key algorithmic task for Thompson sampling is drawing a sample from the posterior of the optimal action. We propose an alternative arm selection rule, dubbed TS-UCB, that requires negligible additional computational effort but provides significant performance improvements relative to Thompson sampling. At each step, TS-UCB computes a score for each arm using two ingredients: posterior sample(s) and upper confidence bounds. TS-UCB can be used in any setting where these two quantities are available, and it is flexible in the number of posterior samples it takes as input. TS-UCB achieves materially lower regret on a comprehensive suite of synthetic and real-world datasets, including a personalized article recommendation dataset from Yahoo! and a suite of benchmark datasets from the deep bandit suite proposed in Riquelme et al. (2018). Finally, from a theoretical perspective, we establish optimal regret guarantees for TS-UCB for both the K-armed and linear bandit models.
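The abstract names the two ingredients of the TS-UCB score (posterior samples and upper confidence bounds) but not its exact functional form. The sketch below uses one plausible score, the shortfall relative to a sampled optimum divided by the confidence width, on a Gaussian K-armed bandit; the score formula, priors, and constants are assumptions for illustration rather than the paper's definitions.

```python
import numpy as np

rng = np.random.default_rng(1)
K, T = 5, 2000
true_means = rng.normal(0.0, 1.0, K)

# Gaussian K-armed bandit with unit observation noise and N(0,1) priors on each arm's mean.
counts, sums = np.zeros(K), np.zeros(K)
pulled_mean_total = 0.0

for t in range(T):
    post_var = 1.0 / (1.0 + counts)                     # conjugate posterior variance
    post_mean = post_var * sums                         # conjugate posterior mean
    sample = rng.normal(post_mean, np.sqrt(post_var))   # one posterior sample per arm
    ucb = post_mean + 2.0 * np.sqrt(post_var)           # an assumed upper confidence bound
    target = sample.max()                               # posterior-sampled value of the best arm
    # Assumed TS-UCB-style score: shortfall from the sampled optimum over the
    # confidence width; the arm minimizing the score is played.
    score = (target - post_mean) / (ucb - post_mean)
    arm = int(np.argmin(score))
    reward = true_means[arm] + rng.normal()
    counts[arm] += 1.0
    sums[arm] += reward
    pulled_mean_total += true_means[arm]

print("pseudo-regret:", T * true_means.max() - pulled_mean_total)
```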
Behavioral health interventions, delivered through digital platforms, have the potential to significantly improve health outcomes, through education, motivation, reminders, and outreach. We study the problem of optimizing personalized interventions for patients to maximize a long-term outcome, where interventions are costly and capacity-constrained. We assume there exists a dataset collected from an initial pilot study that we can leverage. We present a new approach for this problem that we dub DecompPI, which approximates one step of policy iteration. Implementing DecompPI simply consists of a prediction task using the dataset, alleviating the need for online experimentation. DecompPI is a generic model-free algorithm that can be used irrespective of the underlying patient behavior model. We derive theoretical guarantees on a simple, special case of the model that is representative of our problem setting. We establish an approximation ratio for DecompPI with respect to the improvement beyond a null policy that does not allocate interventions. Specifically, when the initial policy used to collect the data is randomized, the approximation ratio of the improvement approaches 1/2 as the intervention capacity of the initial policy decreases. We show that this guarantee is robust to estimation errors. We conduct a rigorous empirical case study using real-world data from a mobile health platform for improving treatment adherence for tuberculosis. Using a validated simulation model, we demonstrate that DecompPI can provide the same efficacy as the status quo approach with approximately half the capacity of interventions. DecompPI is simple and easy to implement for organizations aiming to improve long-term behavior through targeted interventions, and this paper demonstrates its strong performance both theoretically and empirically.
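The abstract describes DecompPI as reducing one step of policy iteration to a prediction task on pilot data. The sketch below shows the general shape of such a pipeline under strong simplifying assumptions: fit an outcome model on logged (state, intervention, outcome) rows, score each patient by the predicted uplift of intervening, and spend the limited intervention capacity on the highest-uplift patients. The data columns, the uplift construction, and the greedy allocation are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Hypothetical pilot data: one row per patient-period with a binary
# intervention flag and a binary adherence outcome.
n = 5000
state = rng.normal(size=(n, 3))                  # e.g., past-adherence features
treat = rng.integers(0, 2, size=n)
logits = 0.8 * state[:, 0] + 0.9 * treat - 0.2
outcome = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# Prediction step: model the outcome as a function of state and intervention.
X = np.column_stack([state, treat])
model = LogisticRegression().fit(X, outcome)

def predicted_uplift(s):
    """Predicted gain in outcome probability from intervening on states s."""
    p1 = model.predict_proba(np.column_stack([s, np.ones(len(s))]))[:, 1]
    p0 = model.predict_proba(np.column_stack([s, np.zeros(len(s))]))[:, 1]
    return p1 - p0

# Allocation step: greedily spend the capacity on the highest-uplift patients.
capacity = 500
current_states = rng.normal(size=(2000, 3))
uplift = predicted_uplift(current_states)
treated_idx = np.argsort(-uplift)[:capacity]
print("patients selected for intervention:", len(treated_idx))
```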
We study a model of social learning from reviews where customers are computationally limited and make purchases based on reading only the first few reviews displayed by the platform. Under this bounded rationality, we establish that the review ordering policy can have a significant impact. In particular, the popular Newest First ordering induces a negative review to persist as the most recent review longer than a positive review. This phenomenon, which we term the Cost of Newest First, can make the long-term revenue unboundedly lower than a counterpart where reviews are exogenously drawn for each customer. We show that the impact of the Cost of Newest First can be mitigated under dynamic pricing, which allows the price to depend on the set of displayed reviews. Under the optimal dynamic pricing policy, the revenue loss is at most a factor of 2. On the way, we identify a structural property for this optimal dynamic pricing: the prices should ensure that the probability of a purchase is always the same, regardless of the state of reviews. We also study an extension of the model where customers put more weight on more recent reviews (and discount older reviews based on their time of posting), and we show that Newest First is still not the optimal ordering policy if customers discount slowly. Lastly, we corroborate our theoretical findings using a real-world review dataset. We find that the average rating of the first page of reviews is statistically significantly smaller than the overall average rating, which is in line with our theoretical results.
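The structural property identified above, prices chosen so that the purchase probability is the same in every review state, can be made concrete under an assumed demand model. The sketch below supposes a customer buys when an idiosyncratic logistic shock plus a rating-dependent value exceeds the price, and solves for the price that pins the purchase probability at a target level. The demand form, coefficients, and target probability are illustrative assumptions, not the paper's model primitives.

```python
import numpy as np

def logit(q):
    return np.log(q / (1.0 - q))

def price_for_constant_purchase_prob(avg_displayed_rating, target_prob,
                                     base_value=1.0, rating_weight=0.5):
    """Assumed demand: P(buy) = sigmoid(base_value + rating_weight * avg_rating - price).
    Returns the price making the purchase probability equal target_prob,
    whatever the currently displayed reviews are."""
    return base_value + rating_weight * avg_displayed_rating - logit(target_prob)

for avg in (2.0, 3.5, 5.0):             # different displayed-review states
    p = price_for_constant_purchase_prob(avg, target_prob=0.4)
    prob = 1.0 / (1.0 + np.exp(-(1.0 + 0.5 * avg - p)))
    print(f"avg rating {avg}: price {p:.2f}, purchase prob {prob:.2f}")
```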
This paper provides the first sample complexity lower bounds for the estimation of simple diffusion models, including the Bass model (used in modeling consumer adoption) and the SIR model (used in modeling epidemics). We show that one cannot hope to learn such models until quite late in the diffusion. Specifically, we show that the time required to collect a number of observations that exceeds our sample complexity lower bounds is large. For Bass models with low innovation rates, our results imply that one cannot hope to predict the eventual number of adopting customers until one is at least two-thirds of the way to the time at which the rate of new adopters is at its peak. In a similar vein, our results imply that in the case of an SIR model, one cannot hope to predict the eventual number of infections until one is approximately two-thirds of the way to the time at which the infection rate has peaked. This lower bound in estimation further translates into a lower bound in regret for decision-making in epidemic interventions. Our results formalize the challenge of accurate forecasting and highlight the importance of incorporating additional data sources. To this end, we analyze the benefit of a seroprevalence study in an epidemic, where we characterize the size of the study needed to improve SIR model estimation. Extensive empirical analyses on product adoption and epidemic data support our theoretical findings.
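For reference, the Bass model named above is governed by a single hazard-style equation in the cumulative adoption fraction; the "low innovation rate" regime in the result corresponds to a small p. Only the standard textbook form is reproduced here, since the paper's observation model is not given in the abstract.

```latex
\frac{f(t)}{1 - F(t)} = p + q\,F(t),
\qquad\text{equivalently}\qquad
\frac{dF}{dt} = \bigl(p + q\,F(t)\bigr)\bigl(1 - F(t)\bigr)
```

Here F(t) is the cumulative fraction of eventual adopters who have adopted by time t, f(t) = dF/dt, p is the innovation (external influence) rate, and q is the imitation (word-of-mouth) rate.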
We study online weighted bipartite matching of reusable resources where an adversarial sequence of requests for resources arrives over time. A resource that is matched is 'used' for a random duration, drawn independently from a resource-dependent distribution, after which it returns and is able to be matched again. We study the performance of the greedy policy, which matches requests to the resource that yields the highest reward. Previously, it was known that the greedy policy is 1/2 competitive against a clairvoyant benchmark that knows the request sequence in advance. In this work, we improve this result by introducing a parameter that quantifies the degree of reusability of the resources. Specifically, if p represents the smallest probability over the usage distributions that a matched resource returns in one time step, the greedy policy achieves a competitive ratio of $1/(2-p)$. Furthermore, when the usage distributions are geometric, we establish a stronger competitive ratio of $(1+p)/2$, which we demonstrate to be tight. Both of these results align with the known results in the two extreme scenarios: p = 0 corresponds to non-reusable resources, where 1/2 is known to be tight, while p = 1 corresponds to every resource returning immediately, where greedy is the optimal policy and hence the competitive ratio is 1. Finally, we show that both results are robust to approximations of the greedy policy. Our work demonstrates that the reusability of resources can enhance performance compared to the non-reusable setting, and that a simple greedy policy suffices when the degree of reusability is high. Our insights contribute to the understanding of how resource reusability can influence the performance of online algorithms, and highlight the potential for improved performance as the degree of reusability increases.
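As an illustration of the setting, the sketch below simulates the greedy policy with reusable resources and geometric usage durations: each arriving request is matched to the available resource with the highest reward, and a matched resource returns in each subsequent step with probability p. The reward matrix, horizon, and randomly drawn request sequence are made up (the paper's benchmark is adversarial), and the competitive-ratio analysis itself is of course not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
T, p = 1000, 0.3                       # p: per-step return probability (geometric usage)
rewards = np.array([[5.0, 3.0, 1.0],   # rewards[r][t]: reward of matching resource r
                    [4.0, 4.0, 2.0],   # to a request of type t
                    [2.0, 1.0, 6.0]])
n_resources, n_types = rewards.shape

available = np.ones(n_resources, dtype=bool)
total = 0.0
for _ in range(T):
    # previously matched resources return independently with probability p
    returning = (~available) & (rng.random(n_resources) < p)
    available |= returning

    t = rng.integers(n_types)          # adversarial in the paper; random here for illustration
    if available.any():
        # greedy: pick the available resource with the highest reward for this request
        r = int(np.argmax(np.where(available, rewards[:, t], -np.inf)))
        total += rewards[r, t]
        available[r] = False           # resource goes into use

print("greedy total reward:", total)
```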
We study the fairness of dimensionality reduction methods for recommendations. We focus on the established method of principal component analysis (PCA), which identifies latent components and produces a low-rank approximation via the leading components while discarding the trailing components. Prior works have defined notions of "fair PCA"; however, these definitions do not answer the following question: what makes PCA unfair? We identify two underlying mechanisms of PCA that induce unfairness at the item level. The first negatively impacts less popular items, due to the fact that less popular items rely on trailing latent components to recover their values. The second negatively impacts the highly popular items, since the leading PCA components specialize in individual popular items instead of capturing similarities between items. To address these issues, we develop a polynomial-time algorithm, Item-Weighted PCA, a modification of PCA that uses item-specific weights in the objective. On a stylized class of matrices, we prove that Item-Weighted PCA using a specific set of weights minimizes a popularity-normalized error metric. Our evaluations on real-world datasets show that Item-Weighted PCA not only improves overall recommendation quality by up to $0.1$ in item-level AUC-ROC but also improves on both popular and less popular items.
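The abstract says Item-Weighted PCA introduces item-specific weights into the PCA objective but does not spell out the algorithm. One simple way to realize a column-weighted objective, shown below, is to rescale each item's column by the square root of its weight, take the truncated SVD of the rescaled matrix, and undo the scaling in the reconstruction; whether this coincides with the paper's Item-Weighted PCA is an assumption, so treat it as a generic weighted-PCA sketch. The popularity proxy and weight choice are also hypothetical.

```python
import numpy as np

def item_weighted_low_rank(X, item_weights, k):
    """Rank-k approximation minimizing sum_j w_j * ||X[:, j] - Xhat[:, j]||^2,
    a column-weighted PCA objective (assumed, not necessarily the paper's
    Item-Weighted PCA). Requires strictly positive weights."""
    w = np.sqrt(np.asarray(item_weights, dtype=float))
    Y = X * w                                # rescale each item (column)
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    Y_hat = (U[:, :k] * s[:k]) @ Vt[:k]      # best rank-k fit to the rescaled matrix
    return Y_hat / w                         # undo the rescaling

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 30))
popularity = X.shape[0] - np.arange(30).astype(float)   # hypothetical item popularity
weights = 1.0 / popularity                              # e.g., up-weight less popular items
X_hat = item_weighted_low_rank(X, weights, k=5)
print("rank-5 weighted reconstruction error:", np.linalg.norm(X - X_hat))
```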
Problem definition: Behavioral health interventions, delivered through digital platforms, have the potential to significantly improve health outcomes through education, motivation, reminders, and outreach. We study the problem of optimizing personalized interventions for patients to maximize a long-term outcome, in which interventions are costly and capacity constrained. We assume we have access to a historical data set collected from an initial pilot study. Methodology/results: We present a new approach for this problem that we dub DecompPI, which decomposes the state space for a system of patients to the individual level and then approximates one step of policy iteration. Implementing DecompPI simply consists of a prediction task using the data set, alleviating the need for online experimentation. DecompPI is a generic, model-free algorithm that can be used irrespective of the underlying patient behavior model. We derive theoretical guarantees on a simple, special case of the model that is representative of our problem setting. When the initial policy used to collect the data is randomized, we establish an approximation guarantee for DecompPI with respect to the improvement beyond a null policy that does not allocate interventions. We show that this guarantee is robust to estimation errors. We then conduct a rigorous empirical case study using real-world data from a mobile health platform for improving treatment adherence for tuberculosis. Using a validated simulation model, we demonstrate that DecompPI can provide the same efficacy as the status quo approach with approximately half the capacity of interventions. Managerial implications: DecompPI is simple and easy to implement for an organization aiming to improve long-term behavior through targeted interventions, and this paper demonstrates its strong performance both theoretically and empirically, particularly in resource-limited settings. Funding: The authors are grateful for financial research support from the MIT Sloan Health Systems Initiative. Supplemental Material: The online appendix is available at https://doi.org/10.1287/msom.2023.0548 .