Decision Sciences: Management Science and Operations Research

Forecasting Techniques and Applications

Description

This cluster of papers focuses on advances in time series forecasting methods, including exponential smoothing, machine learning techniques, expert judgment, inventory management, demand forecasting, neural networks, and forecast combination.

Keywords

Forecasting; Time Series; Exponential Smoothing; Machine Learning; Expert Judgment; Inventory Management; Demand Forecasting; Neural Networks; Prediction Accuracy; Forecast Combination

Abstract. An approach to smoothing and forecasting for time series with missing observations is proposed. For an underlying state-space model, the EM algorithm is used in conjunction with the conventional Kalman smoothed estimators to derive a simple recursive procedure for estimating the parameters by maximum likelihood. An example is given which involves smoothing and forecasting an economic series using the maximum likelihood estimators for the parameters.
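The state-space treatment described above can be sketched with a minimal local-level Kalman filter in which a missing observation simply skips the measurement update; the EM parameter-estimation step is omitted, and the model, noise variances, and series below are illustrative assumptions, not the paper's example:

```python
def kalman_filter_missing(y, q, r, m0=0.0, p0=1e6):
    """Local-level model: x_t = x_{t-1} + w_t (var q), y_t = x_t + v_t (var r).
    Missing observations are passed as None and skip the measurement update."""
    m, p = m0, p0
    means, variances = [], []
    for obs in y:
        # time update (prediction)
        m_pred, p_pred = m, p + q
        if obs is None:
            # no observation: carry the prediction forward unchanged
            m, p = m_pred, p_pred
        else:
            # measurement update
            k = p_pred / (p_pred + r)          # Kalman gain
            m = m_pred + k * (obs - m_pred)
            p = (1.0 - k) * p_pred
        means.append(m)
        variances.append(p)
    return means, variances

series = [1.0, 1.2, None, 1.1, None, None, 1.3]
means, variances = kalman_filter_missing(series, q=0.01, r=0.1)
```

Over a gap the filtered mean stays flat while its variance grows by q per step, which is exactly the behaviour the smoothing-with-missing-data recursion exploits.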
(2007). Generalized Additive Models: An Introduction With R. Technometrics: Vol. 49, No. 3, pp. 360-361.
One of the main themes that has emerged from behavioral decision research during the past two decades is the view that people's preferences are often constructed, not merely revealed, in the process of elicitation. This conception is derived in part from studies demonstrating that normatively equivalent methods of elicitation often give rise to systematically different responses. These "preference reversals" violate the principle of procedure invariance fundamental to theories of rational choice and raise difficult questions about the nature of human values. If different elicitation procedures produce different orderings of options, how can preferences be defined and in what sense do they exist? Describing and explaining such failures of invariance will require choice models of far greater complexity than the traditional models.
Originally published in 1959, this classic volume has had a major impact on generations of statisticians. Newly issued in the Wiley Classics Series, the book examines the basic theory of analysis of variance by considering several different mathematical models. Part I looks at the theory of fixed-effects models with independent observations of equal variance, while Part II begins to explore the analysis of variance in the case of other models.
Abstract This article offers a practical guide to goodness-of-fit tests using statistics based on the empirical distribution function (EDF). Five of the leading statistics are examined (those often labelled D, W², V, U², A²) in three important situations: where the hypothesized distribution F(x) is completely specified and where F(x) represents the normal or exponential distribution with one or more parameters to be estimated from the data. EDF statistics are easily calculated, and the tests require only one line of significance points for each situation. They are also shown to be competitive in terms of power.
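The Kolmogorov D statistic, the first of the five, is straightforward to compute when F(x) is completely specified; a minimal sketch against a Uniform(0, 1) hypothesis with an invented sample:

```python
def ks_statistic(sample, cdf):
    """Kolmogorov D: the largest gap between the empirical distribution
    function and a fully specified hypothesized CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # EDF jumps from i/n to (i+1)/n at x; check both sides of the gap
        d = max(d, (i + 1) / n - f, f - i / n)
    return d

def uniform_cdf(x):
    # Hypothesized F: Uniform(0, 1), completely specified (no estimated parameters)
    return min(max(x, 0.0), 1.0)

d = ks_statistic([0.1, 0.3, 0.5, 0.7, 0.9], uniform_cdf)
```

For this evenly spaced sample every jump of the EDF overshoots F by exactly 0.1, so D = 0.1; the observed D would then be compared against the one line of significance points the article tabulates.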
(2003). Comparison Methods for Stochastic Models and Risks. Technometrics: Vol. 45, No. 4, pp. 370-371.
"Generalized Additive Models: An Introduction With R." Journal of the American Statistical Association, 102(478), pp. 760–761.
(1975). Statistical Inference Under Order Restrictions. Technometrics: Vol. 17, No. 1, pp. 139-140.
PART ONE: INTRODUCTION (Traditional Parametric Statistical Inference; Bootstrap Statistical Inference; Bootstrapping a Regression Model; Theoretical Justification; The Jackknife; Monte Carlo Evaluation of the Bootstrap). PART TWO: STATISTICAL INFERENCE USING THE BOOTSTRAP (Bias Estimation; Bootstrap Confidence Intervals). PART THREE: APPLICATIONS OF BOOTSTRAP CONFIDENCE INTERVALS (Confidence Intervals for Statistics With Unknown Sampling Distributions; Inference When Traditional Distributional Assumptions Are Violated). PART FOUR: CONCLUSION (Future Work; Limitations of the Bootstrap; Concluding Remarks).
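The percentile confidence interval covered in Part Two can be sketched in a few lines of standard-library Python; the data and the choice of the mean as the statistic are invented for illustration:

```python
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap CI: resample with replacement, recompute the
    statistic, and read off the empirical alpha/2 and 1-alpha/2 quantiles."""
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(stat([rng.choice(data) for _ in range(n)])
                   for _ in range(n_boot))
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def mean(xs):
    return sum(xs) / len(xs)

data = [2.1, 2.5, 1.9, 2.8, 2.2, 2.6, 2.0, 2.4]
lo, hi = bootstrap_ci(data, mean)
```

Because the interval is read directly from the resampled distribution, no parametric sampling-distribution assumption is needed, which is the book's motivating case of "statistics with unknown sampling distributions".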
Automatic forecasts of large numbers of univariate time series are often needed in business and other contexts. We describe two automatic forecasting algorithms that have been implemented in the forecast package for R. The first is based on innovations state space models that underlie exponential smoothing methods. The second is a step-wise algorithm for forecasting with ARIMA models. The algorithms are applicable to both seasonal and non-seasonal data, and are compared and illustrated using four real time series. We also briefly describe some of the other functionality available in the forecast package.
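The recursion underlying the first algorithm's simplest member (simple exponential smoothing) can be hand-rolled in a few lines; this is plain Python, not the R forecast package itself, and the series and smoothing weight are illustrative:

```python
def ses_forecast(y, alpha, h=1):
    """Simple exponential smoothing: level = alpha*y_t + (1-alpha)*level.
    With a constant-level model, every h-step-ahead forecast equals the
    final smoothed level."""
    level = y[0]                      # initialize the level at the first value
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return [level] * h

fcst = ses_forecast([10.0, 12.0, 11.0, 13.0, 12.0], alpha=0.5, h=3)
```

The automatic algorithms the abstract describes go further, selecting the state-space form and estimating alpha itself by maximum likelihood rather than fixing it as done here.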
Data snooping occurs when a given set of data is used more than once for purposes of inference or model selection. When such data reuse occurs, there is always the possibility that any satisfactory results obtained may simply be due to chance rather than to any merit inherent in the method yielding the results. This problem is practically unavoidable in the analysis of time-series data, as typically only a single history measuring a given phenomenon of interest is available for analysis. It is widely acknowledged by empirical researchers that data snooping is a dangerous practice to be avoided, but in fact it is endemic. The main problem has been a lack of sufficiently simple practical methods capable of assessing the potential dangers of data snooping in a given situation. Our purpose here is to provide such methods by specifying a straightforward procedure for testing the null hypothesis that the best model encountered in a specification search has no predictive superiority over a given benchmark model. This permits data snooping to be undertaken with some degree of confidence that one will not mistake results that could have been generated by chance for genuinely good results.
Abstract In the last few decades many methods have become available for forecasting. As always, when alternatives exist, choices need to be made so that an appropriate forecasting method can … Abstract In the last few decades many methods have become available for forecasting. As always, when alternatives exist, choices need to be made so that an appropriate forecasting method can be selected and used for the specific situation being considered. This paper reports the results of a forecasting competition that provides information to facilitate such choice. Seven experts in each of the 24 methods forecasted up to 1001 series for six up to eighteen time horizons. The results of the competition are presented in this paper whose purpose is to provide empirical evidence about differences found to exist among the various extrapolative (time series) methods used in the competition.
No scientific conclusion follows automatically from a statistically non-significant result, yet people routinely use non-significant results to guide conclusions about the status of theories (or the effectiveness of practices). To know whether a non-significant result counts against a theory, or if it just indicates data insensitivity, researchers must use one of: power, intervals (such as confidence or credibility intervals), or else an indicator of the relative evidence for one theory over another, such as a Bayes factor. I argue Bayes factors allow theory to be linked to data in a way that overcomes the weaknesses of the other approaches. Specifically, Bayes factors use the data themselves to determine their sensitivity in distinguishing theories (unlike power), and they make use of those aspects of a theory's predictions that are often easiest to specify (unlike power and intervals, which require specifying the minimal interesting value in order to address theory). Bayes factors provide a coherent approach to determining whether non-significant results support a null hypothesis over a theory, or whether the data are just insensitive. They allow accepting and rejecting the null hypothesis to be put on an equal footing. Concrete examples are provided to indicate the range of application of a simple online Bayes calculator, which reveal both the strengths and weaknesses of Bayes factors.
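As a rough illustration of the kind of calculation such a Bayes calculator performs, assuming normal likelihoods and a normal H1 prior centered at zero (all numbers invented), the same near-zero effect can either support the null or signal insensitive data depending on the standard error:

```python
import math

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def bayes_factor(obs_mean, se, prior_sd):
    """B10 for a normal mean: marginal likelihood of the observed mean under
    H1 (mu ~ N(0, prior_sd^2), so obs ~ N(0, prior_sd^2 + se^2)) divided by
    its likelihood under H0 (mu = 0, so obs ~ N(0, se^2))."""
    like_h1 = normal_pdf(obs_mean, 0.0, math.sqrt(prior_sd**2 + se**2))
    like_h0 = normal_pdf(obs_mean, 0.0, se)
    return like_h1 / like_h0

bf_precise = bayes_factor(obs_mean=0.1, se=0.2, prior_sd=2.0)  # tight SE
bf_noisy = bayes_factor(obs_mean=0.1, se=2.0, prior_sd=2.0)    # noisy data
```

With the tight standard error the Bayes factor falls below 1/3 (evidence for the null); with the noisy data it hovers near 1, indicating the non-significant result is simply insensitive, which is exactly the distinction the abstract argues power and intervals handle less directly.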
Abstract This paper discusses the applicability of statistics to a wide field of problems. Examples of simple and complex distributions are given.
Forecasting is a common data science task that helps organizations with capacity planning, goal setting, and anomaly detection. Despite its importance, there are serious challenges associated with producing reliable and high-quality forecasts—especially when there are a variety of time series and analysts with expertise in time series modeling are relatively rare. To address these challenges, we describe a practical approach to forecasting "at scale" that combines configurable models with analyst-in-the-loop performance analysis. We propose a modular regression model with interpretable parameters that can be intuitively adjusted by analysts with domain knowledge about the time series. We describe performance analyses to compare and evaluate forecasting procedures, and automatically flag forecasts for manual review and adjustment. Tools that help analysts to use their expertise most effectively enable reliable, practical forecasting of business time series.
Preface Introduction The Normal Model Foundation of the Binomial Model Historical and Software Considerations Chapter Profiles Concepts Related to the Logistic Model 2 x 2 Table Logistic Model 2 x k Table Logistic Model Modeling a Quantitative Predictor Logistic Modeling Designs Estimation Methods Derivation of the IRLS Algorithm IRLS Estimation Maximum Likelihood Estimation Derivation of the Binary Logistic Algorithm Terms of the Algorithm Logistic GLM and ML Algorithms Other Bernoulli Models Model Development Building a Logistic Model Assessing Model Fit: Link Specification Standardized Coefficients Standard Errors Odds Ratios as Approximations of Risk Ratios Scaling of Standard Errors Robust Variance Estimators Bootstrapped and Jackknifed Standard Errors Stepwise Methods Handling Missing Values Modeling an Uncertain Response Constraining Coefficients Interactions Introduction Binary X Binary Interactions Binary X Categorical Interactions Binary X Continuous Interactions Categorical X Continuous Interaction Thoughts about Interactions Analysis of Model Fit Traditional Fit Tests for Logistic Regression Hosmer-Lemeshow GOF Test Information Criteria Tests Residual Analysis Validation Models Binomial Logistic Regression Overdispersion Introduction The Nature and Scope of Overdispersion Binomial Overdispersion Binary Overdispersion Real Overdispersion Concluding Remarks Ordered Logistic Regression Introduction The Proportional Odds Model Generalized Ordinal Logistic Regression Partial Proportional Odds Multinomial Logistic Regression Unordered Logistic Regression Independence of Irrelevant Alternatives Comparison to Multinomial Probit Alternative Categorical Response Models Introduction Continuation Ratio Models Stereotype Logistic Model Heterogeneous Choice Logistic Model
Adjacent Category Logistic Model Proportional Slopes Models Panel Models Introduction Generalized Estimating Equations Unconditional Fixed Effects Logistic Model Conditional Logistic Models Random Effects and Mixed Models Logistic Regression Other Types of Logistic-Based Models Survey Logistic Models Scobit-Skewed Logistic Regression Discriminant Analysis Exact Logistic Regression Exact Methods Alternative Modeling Methods Conclusion Appendix A: Brief Guide to Using Stata Commands Appendix B: Stata and R Logistic Models Appendix C: Greek Letters and Major Functions Appendix D: Stata Binary Logistic Command Appendix E: Derivation of the Beta-Binomial Appendix F: Likelihood Function of the Adaptive Gauss-Hermite Quadrature Method of Estimation Appendix G: Data Sets Appendix H: Marginal Effects and Discrete Change References Author Index Subject Index Exercises and R Code appear at the end of most chapters.
The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. The effort of Professor Fuller is commendable . . . [the book] provides a complete treatment of an important and frequently ignored topic. Those who work with measurement error models will find it valuable. It is the fundamental book on the subject, and statisticians will benefit from adding this book to their collection or to university or departmental libraries. -Biometrics Given the large and diverse literature on measurement error/errors-in-variables problems, Fuller's book is most welcome. Anyone with an interest in the subject should certainly have this book. -Journal of the American Statistical Association The author is to be commended for providing a complete presentation of a very important topic. Statisticians working with measurement error problems will benefit from adding this book to their collection. -Technometrics . . . this book is a remarkable achievement and the product of impressive top-grade scholarly work. -Journal of Applied Econometrics Measurement Error Models offers coverage of estimation for situations where the model variables are observed subject to measurement error. Regression models are included with errors in the variables, latent variable models, and factor models. Results from several areas of application are discussed, including recent results for nonlinear models and for models with unequal variances.
The estimation of true values for the fixed model, prediction of true values under the random model, model checks, and the analysis of residuals are addressed, and in addition, procedures are illustrated with data drawn from nearly twenty real data sets.
We propose and evaluate explicit tests of the null hypothesis of no difference in the accuracy of two competing forecasts. In contrast to previously developed tests, a wide variety of accuracy measures can be used (in particular, the loss function need not be quadratic and need not even be symmetric), and forecast errors can be non-Gaussian, nonzero mean, serially correlated, and contemporaneously correlated. Asymptotic and exact finite-sample tests are proposed, evaluated, and illustrated.
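The statistic at the heart of such equal-accuracy tests can be sketched for the simplest one-step-ahead case with no autocorrelation correction; the error series are invented, and squared-error loss is just one admissible choice of the loss function:

```python
import math

def dm_statistic(e1, e2, loss=lambda e: e * e):
    """Mean loss differential divided by its estimated standard error
    (1-step-ahead case, no serial-correlation correction). Any loss
    function may be supplied; it need not be quadratic or symmetric."""
    d = [loss(a) - loss(b) for a, b in zip(e1, e2)]
    n = len(d)
    d_bar = sum(d) / n
    var_d = sum((x - d_bar) ** 2 for x in d) / (n - 1)
    return d_bar / math.sqrt(var_d / n)

# forecast errors from two competing forecasts of the same series
errors_a = [0.5, -0.8, 0.6, -0.4, 0.7, -0.5, 0.6, -0.7]
errors_b = [0.1, -0.2, 0.1, -0.1, 0.2, -0.1, 0.1, -0.2]
dm = dm_statistic(errors_a, errors_b)  # positive: B has lower squared loss
```

In the general case the denominator uses a long-run variance estimate to accommodate the serially correlated loss differentials the abstract mentions; that correction is omitted here for brevity.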
Foreword. Preface to the Second Edition. Preface to the Third Edition. Obituary. INTRODUCTION AND THE LINEAR REGRESSION MODEL. What is Econometrics? Statistical Background and Matrix Algebra. Simple Regression. *Multiple Regression. VIOLATION OF THE ASSUMPTIONS OF THE BASIC MODEL. *Heteroskedasticity. *Autocorrelation. Multicollinearity. *Dummy Variables and Truncated Variables. Simultaneous Equations Models. Nonlinear Regression, Models of Expectations, and Nonnormality. Errors in Variables. SPECIAL TOPICS. Diagnostic Checking, Model Selection, and Specification Testing. *Introduction to Time-Series Analysis. Vector Autoregressions, Unit Roots, and Cointegration. *Panel Data Analysis. *Large-Sample Theory. *Small-Sample Inference: Resampling Methods. Appendix A: *Data Sets. Appendix B: *Data Sets on the Web. Appendix C: *Computer Programs. Index.
Now in widespread use, generalized additive models (GAMs) have evolved into a standard statistical methodology of considerable flexibility. While Hastie and Tibshirani's outstanding 1990 research monograph on GAMs is largely responsible for this, there has been a long-standing need for an accessible introductory treatment of the subject that also e…
This article presents an experiment conducted with pupils aged 9 to 10. Three activities invite them to estimate how many giraffes, gazelles, and swallows could fit in the classroom, while subjectively allowing the animals a degree of comfort. The aim is to report on the successes and difficulties of pupils confronted for the first time with this type of situation.
In regression forecasting problems based on large-scale and noisy datasets, there is often a need to choose between classical machine learning algorithms and modern neural network methods. Classical methods are simpler and more interpretable, while neural networks are better at handling heterogeneous and high-dimensional data, although they require more resources and more careful fine-tuning. This paper presents a comparative analysis of the Random Forest (RF), XGBoost, and Dense Neural Network (DNN) regression models for processing large tabular datasets. In particular, the IMDb dataset from the Kaggle platform was analyzed. Special attention was focused on studying the possibility of improving prediction performance by combining the RF and XGBoost ensemble methods with DNN models. It was found that the RF model demonstrated acceptable predictive quality, namely, a coefficient of determination (R²) of 0.8640. The XGBoost-based model showed a considerably better result, with an R² of 0.9245. The basic DNN model was characterized by an R² value of 0.8990. After optimizing the hyperparameters of the DNN model, the R² increased to 0.9179. A hybrid approach has been proposed as an additional way to improve the effectiveness of the DNN model. In particular, the distributions of features according to their impact on prediction accuracy, as determined by the RF and XGBoost methods, were used as weighting coefficients for the DNN model feature vector. As a result, the most accurate forecast was obtained: the coefficients of determination R² were 0.9283 and 0.9302 for the RF-DNN and XGBoost-DNN hybrid models, respectively. The obtained results can be used to develop predictive models based on heterogeneous and high-dimensional tabular data.
In today’s volatile market environment, supply chain management (SCM) must address complex challenges such as fluctuating demand, fraud, and delivery delays. This study applies machine learning techniques—Extreme Gradient Boosting (XGBoost) and Recurrent Neural Networks (RNNs)—to optimize demand forecasting, inventory policies, and risk mitigation within a unified framework. XGBoost achieves high forecasting accuracy (MAE = 0.1571, MAPE = 0.48%), while RNNs excel at fraud detection and late delivery prediction (F1-score ≈ 98%). To evaluate models beyond accuracy, we introduce two novel metrics: Cost–Accuracy Efficiency (CAE) and CAE-ESG, which combine predictive performance with cost-efficiency and ESG alignment. These holistic measures support sustainable model selection aligned with the ISO 14001, GRI, and SASB benchmarks; they also demonstrate that, despite lower accuracy, Random Forest achieves the highest CAE-ESG score due to its low complexity and strong ESG profile. We also apply SHAP analysis to improve model interpretability and demonstrate business impact through enhanced Customer Lifetime Value (CLV) and reduced churn. This research offers a practical, interpretable, and sustainability-aware ML framework for supply chains, enabling more resilient, cost-effective, and responsible decision-making.
This study examines the application of Fourier transforms in enhancing financial forecasting models for the automotive industry. It focuses on the top five global automakers (Toyota, Tesla, Ford, Volkswagen, and Mercedes-Benz) and analyses quarterly revenue data from 2015 to 2024. These five companies together account for approximately 20.6% of the global automotive industry's total turnover; as a result, their average quarterly sales may reflect some aspects of the financial situation of the automotive industry as a whole. This study therefore not only analyses their individual quarterly revenues, but also examines how they, as part of the industry as a whole, create this kind of value. ARIMA is used to predict and analyse the quarterly revenue of these five companies, as it is recognized that traditional ARIMA (1,1,1) models have the capacity to capture trends and short-term dynamics. However, they frequently cannot identify underlying and irregular seasonality. The Fast Fourier Transform (FFT) is a method frequently used by researchers to detect hidden and irregular seasonal patterns, which can significantly improve forecasting accuracy. As a result, this study proposes applying the FFT to extract dominant cyclical patterns and integrate them into a Fourier-augmented ARIMA model.
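The FFT step can be illustrated with a naive discrete Fourier transform that recovers a hidden cycle from synthetic quarterly data; this is plain Python (an FFT library would be used in practice), and the series is invented, not the automakers' revenue:

```python
import math

def dominant_period(y):
    """Naive DFT: return the period (in samples) of the largest-amplitude
    frequency component, i.e. the hidden cycle that would seed the Fourier
    terms added to the ARIMA model."""
    n = len(y)
    mean = sum(y) / n
    centered = [v - mean for v in y]           # drop the zero-frequency term
    best_k, best_amp = 1, -1.0
    for k in range(1, n // 2 + 1):
        re = sum(centered[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(centered[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        amp = math.hypot(re, im)
        if amp > best_amp:
            best_k, best_amp = k, amp
    return n / best_k

# 40 quarters of a synthetic series with an annual (period-4) cycle
quarters = [10 + 3 * math.sin(2 * math.pi * t / 4) for t in range(40)]
period = dominant_period(quarters)
```

The dominant frequency comes out at k = 10 over 40 quarters, i.e. a period of 4 quarters, which would then enter the ARIMA model as a pair of sine/cosine regressors.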
Artificial intelligence and machine learning technologies have transformed supply chain management through the integration of predictive demand forecasting with prescriptive inventory optimization. Modern ML algorithms process diverse data streams—from historical sales and promotions to external factors like weather patterns and market trends—to generate significantly more accurate demand predictions than conventional methods. Building on these forecasts, prescriptive analytics dynamically optimize inventory parameters across multi-echelon supply chains, simulating scenarios to balance service levels against holding costs. These integrated systems enable real-time automation of procurement decisions with continuous model refinement through feedback loops. Implementations across retail, manufacturing, and logistics sectors demonstrate substantial improvements in operational metrics, with various platforms offering distinctive capabilities for specific industry contexts. The evaluation of performance outcomes identifies key integration challenges with existing ERP ecosystems while highlighting operational resilience benefits in dynamic global markets. The transition toward autonomous supply chain management represents a fundamental advancement in operational capability that addresses contemporary volatility in global supply networks.
The advanced materials industry is a key driver of technological innovation, making accurate enterprise valuation essential for investment and market analysis. Traditional valuation methods like DCF, PE, and PB struggle with high-growth companies due to volatile cash flows and market dependencies. To address these challenges, this study applies a random forest algorithm to enhance valuation accuracy by leveraging financial data, market indicators, and industry-specific factors. By using bootstrap aggregation to randomly select samples and features, the random forest model, which is based on an ensemble learning approach with decision trees, improves predictive performance and minimizes overfitting. To quantify important valuation determinants, build a predictive model, and assess its performance using common error measures, this study gathers financial and market data from about 100 publicly traded businesses in the new materials sector. Compared to conventional techniques, the empirical study shows that the random forest model increases valuation accuracy and stability. The findings show that the model delivers a more accurate estimate of enterprise value, lessens sensitivity to market swings, and successfully captures nonlinear relationships in valuation.
In recent years, the prominence of probabilistic forecasting has risen among numerous research fields (finance, meteorology, banking, etc.). Best practices on using such forecasts are, however, neither well explained nor well understood. The question of the benefits derived from these forecasts is of primary interest, especially for the industrial sector. A sound methodology already exists to evaluate the value of probabilistic forecasts of binary events. In this paper, we introduce a comprehensive methodology for assessing the value of probabilistic forecasts of continuous variables, which is valid for a specific class of problems where the cost functions are piecewise linear. The proposed methodology is based on a set of visual diagnostic tools. In particular, we propose a new diagram called EVC (“Effective economic Value of a forecast of Continuous variable”) which provides the effective value of a forecast. Using simple case studies, we show that the value of probabilistic forecasts of continuous variables is strongly dependent on a key variable that we call the risk ratio. It leads to a quantitative metric of a value called the OEV (“Overall Effective Value”). The preliminary results suggest that typical OEVs demonstrate the benefits of probabilistic forecasting over a deterministic approach.
This paper focuses on enhancing demand forecasting accuracy for Fast-Moving Consumer Goods (FMCG) using innovative methodologies, specifically SupplySeers Time Series Models and the concept of permutation complexity. With the challenges of traditional forecasting methods, which often fail to account for the complexities of consumer behaviour and market volatility, this research seeks to provide a robust framework to improve predictive performance. The study adopts a quantitative and experimental research design, which includes phases of data preparation, exploratory data analysis, model development, and evaluation. Key findings indicate that SupplySeers models significantly outperform traditional methods such as ARIMA and Holt-Winters, particularly in capturing non-linear and seasonal trends typical in FMCG sales data. Additionally, permutation complexity serves as an effective metric for evaluating time series predictability, facilitating tailored model selection based on the complexity level of the data. A proposed hybrid forecasting model integrates SupplySeers Time Series Engine with permutation complexity filtering, allowing for dynamic adaptation to varying demand patterns. This approach not only enhances forecast accuracy by up to 25% compared to standalone models but also offers a scalable solution applicable to diverse FMCG datasets. The implications of this research are far-reaching, providing FMCG companies with the tools to optimize inventory management and enhance decision-making processes. The study concludes with actionable recommendations for stakeholders to adopt complexity-aware forecasting systems, ensuring better anticipation of demand fluctuations and improved market responsiveness.
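The permutation-complexity idea can be illustrated with a normalized permutation-entropy function, a standard construction for scoring time-series predictability; the series below are synthetic, not FMCG data:

```python
import math

def permutation_entropy(series, order=3):
    """Normalized permutation entropy: Shannon entropy of length-`order`
    ordinal patterns, scaled to [0, 1]. Low values indicate a highly
    predictable series; values near 1 indicate pattern-rich, noisy data."""
    counts = {}
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # ordinal pattern: the argsort of the window (ties broken by position)
        pattern = tuple(sorted(range(order), key=lambda j: window[j]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(order))

monotone = permutation_entropy(list(range(50)))           # one pattern only
irregular = permutation_entropy([(17 * i) % 23 for i in range(50)])
```

A complexity-aware pipeline of the kind the paper proposes would route low-entropy series to simple extrapolation and reserve heavier models for high-entropy series.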
The advent of full spectral flow cytometry has enabled the development of complex panels with over 35 colors, with the latest panels reaching 50 colors (1). This capability is made possible by cytometers equipped with numerous detectors beyond those in traditional cytometers and an expanded range of fluorochromes with emission peaks across the visible spectrum. However, our observations reveal significant challenges in the current unmixing, spread prediction, and panel design methodologies. Existing tools and guidelines, largely optimized for panels with up to 20+ colors, are limited in their ability to navigate this new ultra-high-color landscape. Without improvements in unmixing algorithms, predictive tools for spread, and design strategies, researchers risk creating suboptimal panels and obtaining inaccurate results. This article aims to highlight a range of emerging challenges associated with ultra-high parameter flow cytometry, particularly for practitioners accustomed to conventional panel design and analysis. As the field advances toward increasingly complex multiparameter experiments, novel issues have surfaced, many of which were previously unrecognized. Although this work does not provide comprehensive solutions to all of these observations, it underscores the need for continued methodological development. We anticipate that ongoing research by experts in the field will yield robust frameworks to address these challenges and advance best practices in high-dimensional cytometric analysis.
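At its core, spectral unmixing solves a least-squares problem: a measured per-detector spectrum is modeled as a combination of reference fluorochrome signatures. A toy sketch with synthetic spectra, using ordinary least squares and deliberately ignoring the non-negativity constraints and spread issues the article discusses:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical spectral signature matrix: 40 detectors x 3 fluorochromes
n_detectors, n_fluors = 40, 3
S = np.abs(rng.normal(size=(n_detectors, n_fluors)))  # reference spectra as columns
true_abundances = np.array([5.0, 1.0, 3.0])

# A measured cell spectrum is a mixture of signatures plus detector noise
measured = S @ true_abundances + 0.01 * rng.normal(size=n_detectors)

# Ordinary least-squares unmixing: recover per-fluorochrome abundances
est, residuals, rank, _ = np.linalg.lstsq(S, measured, rcond=None)
print(est)  # close to [5.0, 1.0, 3.0]
```

With 50-color panels the signature matrix becomes much wider and more collinear, which is one source of the unmixing and spread-prediction difficulties the article describes.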
Forecasting outcomes of any system is essential for a better understanding and optimal management of the fluxes occurring in system operations. Machine Learning (ML) approaches can capture complex relationships in collected data that are hard to describe with conventional forecasting models. This paper provides an overview of reported prediction methodologies that use Artificial Neural Networks (ANN) and Support Vector Regression (SVR) across diverse datasets and examines the performance of each method. Forecasting performance is assessed using indicators such as the Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and the coefficient of determination (R²). The review concludes that SVR generally outperforms ANN in forecasting groundwater quality, drought indices, oil production, and illuminance, while ANN shows better performance in certain scenarios, such as predicting wheat moisture content, solar energy, and monthly streamflow.
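The four error indicators named above have standard closed forms and can be computed without any library. A minimal, dependency-free sketch (the sample values are hypothetical):

```python
import math

def metrics(actual, predicted):
    """RMSE, MAE, MAPE (in percent), and R^2 for paired observations."""
    n = len(actual)
    errors = [a - p for a, p in zip(actual, predicted)]
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mape = 100.0 * sum(abs(e / a) for e, a in zip(errors, actual)) / n
    mean_a = sum(actual) / n
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    r2 = 1.0 - ss_res / ss_tot
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape, "R2": r2}

actual = [10.0, 12.0, 15.0, 11.0]
predicted = [11.0, 12.0, 14.0, 10.0]
print(metrics(actual, predicted))
```

Note that MAPE is undefined when any actual value is zero, which is one reason surveys like this one report several indicators side by side.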
Jiale Du, Shuo Li, Fuguo Liu et al. | International Journal of Financial Engineering
Driven by the need for more effective decision-making tools amid market volatility and ambiguity, the authors aim to develop a robust and detailed forecasting model that enhances the understanding of market uncertainty and risk, providing investors with precise guidance for their decisions. To achieve this goal, we employ Bayesian functions to refine forward utility models, thereby improving market forecasting accuracy under uncertain conditions. This work contributes to the literature by introducing a more detailed method for estimating model parameters. The adaptability of this model allows for its application across various market scenarios, representing a significant advancement over traditional methods. In summary, this research deepens our understanding of market dynamics and equips investors with a more reliable tool for navigating uncertain financial landscapes.
Alexandra Platonova , В. С. Попов | Современные инновации системы и технологии - Modern Innovations Systems and Technologies
The article presents a detailed comparative analysis of time series forecasting accuracy using classical and modern machine learning methods. Four models are examined: ARIMA (autoregressive integrated moving average), Prophet (a forecasting library that accounts for trend and seasonality), LSTM (long short-term memory), and GRU (gated recurrent units). The focus is on evaluating their effectiveness as a function of data volume and forecasting horizon. Standard quality metrics are used for comparison: MAE, MSE, RMSE, MAPE, MASE, and R². The results show that ARIMA and Prophet are highly robust on small samples, whereas GRU and LSTM outperform them on large data volumes. GRU surpasses all models in accuracy for medium-term forecasts while offering high performance and lower computational complexity than LSTM. Based on the analysis, practical recommendations are given for choosing the optimal model depending on the specifics of the task, the data, and the required forecasting horizon. The study may be useful to specialists in data analysis, forecasting, and machine learning.
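Among the metrics listed above, MASE is the least familiar: it scales the forecast's MAE by the in-sample MAE of a naive last-value forecast, so values below 1 beat the naive baseline regardless of the series' units. A minimal sketch with made-up numbers:

```python
def mase(actual, predicted, train):
    """Mean Absolute Scaled Error: forecast MAE divided by the in-sample
    MAE of the one-step naive (last-value) forecast on the training data."""
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
    naive_mae = sum(abs(train[i] - train[i - 1])
                    for i in range(1, len(train))) / (len(train) - 1)
    return mae / naive_mae

train = [100, 102, 101, 105, 107]          # hypothetical training history
actual, predicted = [108, 110], [107, 111]  # hypothetical test period
print(mase(actual, predicted, train))       # ≈ 0.444, i.e. beats the naive forecast
```

Because it is scale-free, MASE allows the four models in the comparison to be ranked consistently across series with very different magnitudes.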
Şule Eryürük | Osmaniye Korkut Ata Üniversitesi Fen Bilimleri Enstitüsü Dergisi
The most significant risks in agricultural machinery production today can be regarded as shifts in demand timing and quantities caused by climate change, together with competitive pressures. Statistical demand forecasting has therefore become essential in agricultural machinery production. The aim of this study is to use monthly production data for 2011-2021, obtained from a manufacturer in the agricultural machinery sector, to forecast the production quantities of the manufacturer's two most important products for the following 12-month period and to develop recommendations on future production volumes. To reflect seasonality when forecasting agricultural machinery production volumes, the SARIMA (Box-Jenkins) model was used. SARIMA, one of the methods developed for analyzing time series, is a powerful technique for univariate time series analysis. The study reports the best statistical results among the candidate SARIMA models for the two selected products: SARIMA(1,1,2)(0,1,1) was found to be the most suitable model for product 1, and SARIMA(0,1,1)(1,1,0) for product 2. Using the selected SARIMA models, the 12-month production forecast and expectation for 2022 were computed for the manufacturer.
S. K. Agarwal, H. N. Suresh | International Journal For Multidisciplinary Research
In an increasingly volatile and demand-driven market landscape, accurate demand forecasting is critical for optimizing inventory levels, minimizing operational waste, enhancing customer satisfaction, and maintaining end-to-end supply chain agility. This project proposes a robust and extensible demand forecasting pipeline that combines traditional statistical models with advanced machine learning approaches—specifically ARCH (Autoregressive Conditional Heteroskedasticity), GARCH (Generalized ARCH), Markov models, and Facebook Prophet—to capture diverse temporal patterns including volatility clustering, state transitions, trend, and seasonality. A key innovation lies in its data preprocessing strategy, where missing values are handled through multiple imputation methods such as forward-fill, backward-fill, median substitution, rolling averages, and statistical trimming. Each model is evaluated with these imputation techniques, and the most effective model-imputation pairing is selected based on a comprehensive set of performance metrics: Weighted Absolute Percentage Error (WAPE), Mean Absolute Percentage Error (MAPE), Root Mean Squared Error (RMSE), and the Coefficient of Determination (R²). The dataset, consisting of SKU-level order quantity time series, is split using a train-test framework where the last six months are reserved for testing to mimic real-world forecasting scenarios. This empirical, metric-driven approach enables the selection of the best-performing forecasting strategy, ensuring both accuracy and generalizability.
The pipeline is designed to be modular, allowing for easy integration of additional models or imputation strategies, and is applicable across various domains including retail, manufacturing, and logistics. Future directions involve extending the pipeline to support real-time data ingestion, automated feature selection, and deep learning models such as LSTM and Transformer-based architectures for enhanced long-term forecasting accuracy.
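The imputation step of such a pipeline can be sketched with pandas: each strategy fills the gaps in a SKU-level series, and a metric such as WAPE then scores the forecasts produced downstream. The series and helper below are illustrative, not the project's actual code:

```python
import numpy as np
import pandas as pd

def wape(actual, forecast):
    """Weighted Absolute Percentage Error: sum of |errors| over sum of |actuals|."""
    return np.abs(actual - forecast).sum() / np.abs(actual).sum()

# Hypothetical SKU-level order quantities with missing values
s = pd.Series([10, np.nan, 12, 15, np.nan, 14, 13, np.nan, 16, 15], dtype=float)

# Four of the imputation strategies named in the abstract
imputations = {
    "forward_fill": s.ffill(),
    "backward_fill": s.bfill(),
    "median": s.fillna(s.median()),
    "rolling_mean": s.fillna(s.rolling(3, min_periods=1).mean()),
}
for name, filled in imputations.items():
    print(name, filled.tolist())
```

Each filled series would then be fed to every candidate model, and the model-imputation pair with the best WAPE/MAPE/RMSE/R² profile on the held-out six months would be selected.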
This study investigates the application of Bayesian methods for estimating the parameter of the Esscher transformed Laplace distribution, renowned for its ability to capture the asymmetry and heavy tails commonly observed in financial data. This paper also focuses on the Bayesian estimation of the stress-strength parameter using both squared error and LINEX loss functions. A simulation study is conducted to compare the performance of the proposed Bayesian estimators for the unknown parameter and the stress-strength parameter with the maximum likelihood estimator, based on mean squared error. For the simulation study, we employ Gibbs sampling within the Metropolis-Hastings framework using Markov Chain Monte Carlo techniques to obtain Bayesian estimates. Utilizing R software and the Stan programming language, we analyze real-world stock price data and demonstrate how Bayesian estimators leverage advanced Markov Chain Monte Carlo techniques, particularly the No-U-Turn Sampler. The fit of the model is assessed using a Bayesian approach, utilizing information criteria such as the Deviance Information Criterion, the Watanabe-Akaike Information Criterion, and leave-one-out cross-validation.
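Under LINEX loss L(Δ) = exp(aΔ) − aΔ − 1 with Δ = θ̂ − θ, the Bayes estimate computable from posterior draws is θ̂ = −(1/a)·log E[exp(−aθ)], which shades the posterior mean downward for a > 0 (over-estimation penalized more) and upward for a < 0. A minimal sketch using synthetic stand-in draws rather than real MCMC output:

```python
import math
import random

random.seed(4)

def linex_estimate(samples, a):
    """Bayes estimate under LINEX loss from posterior draws:
    theta_hat = -(1/a) * log( mean(exp(-a * theta)) )."""
    m = sum(math.exp(-a * t) for t in samples) / len(samples)
    return -math.log(m) / a

# Hypothetical posterior draws (stand-in for Gibbs/NUTS output)
draws = [random.gauss(2.0, 0.5) for _ in range(50_000)]

post_mean = sum(draws) / len(draws)       # Bayes estimate under squared error loss
linex_pos = linex_estimate(draws, a=1.0)  # penalizes over-estimation more
linex_neg = linex_estimate(draws, a=-1.0) # penalizes under-estimation more
print(post_mean, linex_pos, linex_neg)
```

For a normal posterior N(μ, σ²) the LINEX estimate is exactly μ − aσ²/2, which is why the two asymmetric estimates bracket the posterior mean here.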
Blood supply management in hospitals is a critical and complex challenge due to the perishable nature of blood products, fluctuating demand, and the need to balance supply with patient needs. Traditional forecasting methods often fail to capture dynamic factors such as seasonal variations, emergencies, and demographic changes, leading to inefficiencies such as shortages or wastage. This study proposes a machine learning-based approach to predict blood demand and optimize inventory levels in hospitals. By leveraging historical blood usage data along with external variables such as accident records and maternal health data, predictive models were developed to accurately forecast blood restocking requirements. An optimization framework was then integrated to guide inventory management decisions, aiming to minimize wastage and avoid stockouts. The combined prediction and optimization system offers a decision support tool that enhances operational efficiency, reduces costs, and improves patient care. Results demonstrate the potential of advanced data-driven techniques in transforming blood supply chain management, providing a foundation for more responsive and adaptive healthcare logistics.
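A classical baseline that such a prediction-plus-optimization system would extend is the reorder-point formula, which adds safety stock to cover demand variability over the resupply lead time. The usage figures below are hypothetical:

```python
import math

def reorder_point(mean_daily, sd_daily, lead_time_days, z):
    """Classical reorder point: expected lead-time demand plus safety stock.
    z is the standard-normal quantile for the target service level."""
    lead_demand = mean_daily * lead_time_days
    safety_stock = z * sd_daily * math.sqrt(lead_time_days)
    return lead_demand + safety_stock

# Hypothetical O-negative usage: 12 units/day on average, sd 4, 2-day resupply lead time
rop = reorder_point(mean_daily=12, sd_daily=4, lead_time_days=2, z=1.645)  # ~95% service
print(round(rop, 1))  # ≈ 33.3 units on hand triggers a restock order
```

The ML approach in the study effectively replaces the fixed mean and standard deviation with model forecasts that respond to accident records, seasonality, and other external signals.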
Tarun Baliyan | INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
Stock market prediction is the intricate endeavor of determining the future value of a company's stock or other financial instruments traded on an exchange, with successful forecasts yielding significant profit. This process holds paramount importance for a diverse array of stakeholders. Government officials, for instance, utilize economic forecasts to formulate fiscal and monetary policies, while business managers rely on them for strategic operational planning. For individual and institutional investors, accurate market prediction is crucial for informed strategic decision-making. The field of forecasting, however, is often described as a "flawed science" due to its inherent complexities, the multitude of influencing factors, and its susceptibility to subjective biases. Broadly, stock market forecasting employs three primary analytical approaches: fundamental analysis, technical analysis, and, increasingly, machine learning. This report will specifically delve into the effectiveness of technical analysis.
Yashwanth Boddu | World Journal of Advanced Engineering Technology and Sciences
Traditional forecasting methods face significant challenges when confronted with volatile market conditions and rapidly changing external factors. This article presents a comprehensive contextual AI system that integrates multimodal data streams with temporal patterns to enhance prediction accuracy in dynamic environments. The system architecture employs a modular design comprising temporal modeling, context integration, dynamic calibration, and forecast synthesis components. By combining gradient-boosted trees, neural networks, and statistical methods with real-time contextual signals from social media, weather data, and operational metrics, the framework achieves substantial improvements in forecast accuracy. The implementation demonstrates effectiveness across retail demand prediction, energy consumption forecasting, and supply chain optimization domains. Through attention mechanisms and meta-learning strategies, the system dynamically adjusts the weighting of contextual factors based on market conditions, enabling rapid adaptation to regime changes while maintaining stability during normal operations. The framework addresses critical gaps between academic benchmarks and real-world applications by treating context as a dynamic component rather than static features. This advancement enables organizations to navigate uncertainty with greater confidence, reducing stockout incidents, improving inventory management, and enhancing operational decision-making across diverse industries.
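One simple form of the adaptive weighting described above is inverse-error forecast combination, where each component model's weight shrinks with its recent error. This is a generic combination sketch with hypothetical forecasts, not the article's attention-based mechanism:

```python
def combine(forecasts, recent_errors):
    """Inverse-MAE weighted forecast combination: models with smaller
    recent error receive proportionally larger weight."""
    inv = [1.0 / e for e in recent_errors]
    total = sum(inv)
    weights = [w / total for w in inv]
    return sum(w * f for w, f in zip(weights, forecasts)), weights

# Hypothetical point forecasts from three component models
forecasts = [100.0, 110.0, 95.0]
recent_errors = [2.0, 4.0, 8.0]  # e.g., trailing MAE of each model
combined, weights = combine(forecasts, recent_errors)
print(combined, weights)
```

Re-estimating the errors on a rolling window gives the combination the same regime-adaptation flavor the article achieves with meta-learning: a model that degrades after a regime change is quickly down-weighted.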
The integration of Artificial Intelligence (AI) in supply chain functions has revolutionized traditional business operations. This paper explores the transformative impact of AI technologies on forecasting accuracy and inventory management efficiency, focusing on how AI-driven tools optimize stock levels, reduce operational costs, and improve demand prediction. Using a mixed-methods approach—comprising literature review, case study insights, and statistical analysis—the study finds that AI significantly enhances forecasting precision, supports real-time inventory visibility, and facilitates proactive replenishment strategies. As global markets face growing volatility and uncertainty, AI emerges as a critical enabler of agile and resilient inventory systems. This paper also identifies implementation barriers and provides recommendations for effective AI adoption in inventory operations.
For an investor, it is very important to conduct a stock valuation before making a transaction because stock valuation is a form of rationalization for an investor in making investment decisions. The purpose of this study is to evaluate the intrinsic value of PT Indofood Sukses Makmur Tbk (INDF) shares using two approaches, namely fundamental analysis and technical analysis. In the fundamental analysis, economic, industry, and company analyses are carried out. The fundamental analysis methods used are the dividend discount model (DDM) and the price-earnings ratio (PER), while the technical analysis is carried out using moving average indicators and the relative strength index. The results of this study indicate a relatively strong trend in the Indonesian economy and fairly strong resilience in the face of trade wars. In terms of industry, INDF continues to experience growth both in sales and in net profit. The calculations using the DDM and PER methods show that the INDF share price has an intrinsic value much higher than its market value, so INDF shares can be classified as undervalued. In terms of technical analysis, the intrinsic value of INDF shares is in the price range of IDR 6,500 to IDR 7,250. This analysis indicates that the current market value of the stock is low and the shares still have purchase potential because of their higher intrinsic value.
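The two fundamental methods used in the study have simple closed forms: the Gordon-growth DDM values a share as next year's dividend over the spread between required return and growth, while the PER approach applies a benchmark P/E multiple to expected earnings per share. The inputs below are illustrative placeholders, not INDF's actual figures:

```python
def ddm_value(next_dividend, required_return, growth):
    """Gordon growth DDM: intrinsic value = D1 / (r - g), requires r > g."""
    if required_return <= growth:
        raise ValueError("required return must exceed growth rate")
    return next_dividend / (required_return - growth)

def per_value(eps, benchmark_per):
    """Relative valuation: intrinsic value = expected EPS x benchmark P/E multiple."""
    return eps * benchmark_per

# Hypothetical inputs in IDR (not INDF's actual figures)
print(ddm_value(next_dividend=280.0, required_return=0.11, growth=0.07))  # ≈ 7000
print(per_value(eps=550.0, benchmark_per=13.0))                           # ≈ 7150
```

Comparing such intrinsic-value estimates with the prevailing market price is exactly the undervalued/overvalued test the study applies to INDF.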
We present what appears to be the first Bayesian procedure to analyze the seasonal variation of rates from data that generally have small amplitude and small sample size, with a possibly variable population at risk. Such data are pertinent both to the medical and the social sciences, for example. Our procedure is easy to apply and versatile, because it can be used to assess various usual patterns of seasonal variation. To illustrate the application of our procedure, we provide three examples with different seasonal patterns. Based on real data, the examples are enhanced by tables and graphics that elucidate the procedure’s application.