
All published works (5)

 This paper presents a case study of the operational management of the Robinvale high-pressure piped irrigation water delivery system (RVHPS) in Australia. Based on datasets available, improved pump setpoint selection using a calibrated hydraulic model is investigated. The first step was to implement pre-processing of measured flow and pressure data to identify errors in the data and possible faulty sensors. An EPANET hydraulic simulation model was updated with calibrated pipe roughness height values by using the processed pressure and flow data. Then, new pump setpoints were selected using the calibrated model given the actual measured demands such that the pressures in the network were minimized subject to required customer service standards. Based on a two-day simulation, it was estimated that 4.7% savings in pumping energy cost as well as 4.7% reduction in greenhouse gas emissions can be achieved by applying the new pump setpoints.
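The calibration step described above can be illustrated with a single-pipe sketch. The paper calibrates pipe roughness heights inside an EPANET model; here a Hazen–Williams coefficient stands in for that, fitted by least squares to (flow, head-loss) measurements. The pipe geometry, the coefficient values, and the noise level are all illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.optimize import least_squares

def hw_headloss(q, c, length=1000.0, diam=0.3):
    """Hazen-Williams head loss (m) for flow q (m^3/s) in a single pipe.
    Illustrative stand-in for the paper's Darcy-Weisbach roughness heights."""
    return 10.67 * length * q**1.852 / (c**1.852 * diam**4.87)

# Synthetic "measured" data generated with a true coefficient C = 120
rng = np.random.default_rng(0)
q_obs = np.linspace(0.01, 0.05, 20)
h_obs = hw_headloss(q_obs, 120.0) + rng.normal(0, 0.01, q_obs.size)

# Calibrate C by minimizing the head-loss residuals
res = least_squares(lambda c: hw_headloss(q_obs, c[0]) - h_obs, x0=[100.0])
c_cal = res.x[0]
print(f"calibrated C = {c_cal:.1f}")  # close to 120
```

In the real study this residual would come from comparing EPANET-simulated pressures against the processed sensor data across the whole network, rather than from one pipe.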
 Optimizing pump operations is a challenging task for real-time management of water distribution systems (WDS). With suitable pump scheduling, pumping costs can be significantly reduced. In this research, a novel economic model predictive control (EMPC) framework for real-time management of WDS is proposed. Optimal pump operations are selected based on predicted system behavior over a receding time horizon with the aim to minimize the total pumping energy cost. Time-varying electricity tariffs are considered while all the required water demands are satisfied. The novelty of this framework is to choose the number of pumps to operate in each pump station as decision variables in order to optimize the total pumping energy costs. By using integer programming, the proposed EMPC is applied to a benchmark case study, the Richmond Pruned network. The simulation with an EPANET hydraulic simulator is implemented. Moreover, a comparison of the results obtained using the proposed EMPC with those obtained using trigger-level control demonstrates significant economic benefits of the proposed EMPC.
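The core EMPC idea, integer pump counts chosen over a receding horizon to minimize tariff-weighted energy cost, can be sketched with a single tank and brute-force enumeration standing in for the paper's integer programming and EPANET simulation. The pump rate, tank bounds, energy use, and tariffs below are made-up illustrative numbers.

```python
import itertools

PUMP_RATE = 50.0               # m^3/h delivered per running pump (assumed)
TANK_MIN, TANK_MAX = 20.0, 200.0
N_PUMPS = 2                    # pumps available at the station

def empc_step(level, demands, tariffs, energy_per_pump=30.0):
    """One receding-horizon step: enumerate pump counts over the horizon,
    keep only plans that respect the tank bounds while meeting demand, and
    pick the cheapest. Only the first decision is applied (economic MPC)."""
    best_cost, best_plan = float("inf"), None
    for plan in itertools.product(range(N_PUMPS + 1), repeat=len(demands)):
        lvl, cost, ok = level, 0.0, True
        for n, d, p in zip(plan, demands, tariffs):
            lvl += n * PUMP_RATE - d          # simple tank mass balance
            cost += n * energy_per_pump * p   # tariff-weighted energy cost
            if not TANK_MIN <= lvl <= TANK_MAX:
                ok = False
                break
        if ok and cost < best_cost:
            best_cost, best_plan = cost, plan
    return best_plan[0], best_cost

# Cheap tariff early, expensive later: EMPC shifts pumping ahead of the peak
n_now, cost = empc_step(level=60.0, demands=[40, 40, 40], tariffs=[0.1, 0.1, 0.4])
print(n_now, cost)
```

With realistic horizons and networks the enumeration explodes, which is why the paper formulates the problem as an integer program instead.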

Commonly Cited References

 Ensembles of forecasts are obtained from multiple runs of numerical weather forecasting models with different initial conditions and typically employed to account for forecast uncertainties. However, biases and dispersion errors often occur in forecast ensembles: they are usually underdispersive and uncalibrated and require statistical post‐processing. We present an Ensemble Model Output Statistics (EMOS) method for calibration of wind‐speed forecasts based on the log‐normal (LN) distribution and we also show a regime‐switching extension of the model, which combines the previously studied truncated normal (TN) distribution with the LN. Both models are applied to wind‐speed forecasts of the eight‐member University of Washington mesoscale ensemble, the 50 member European Centre for Medium‐Range Weather Forecasts (ECMWF) ensemble and the 11 member Aire LimitĂ©e Adaptation dynamique DĂ©veloppement International‐Hungary Ensemble Prediction System (ALADIN‐HUNEPS) ensemble of the Hungarian Meteorological Service; their predictive performance is compared with that of the TN and general extreme value (GEV) distribution based EMOS methods and the TN–GEV mixture model. The results indicate improved calibration of probabilistic forecasts and accuracy of point forecasts in comparison with the raw ensemble and climatological forecasts. Further, the TN–LN mixture model outperforms the traditional TN method and its predictive performance is able to keep up with models utilizing the GEV distribution without assigning mass to negative values.
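A minimal sketch of the log-normal EMOS link: the predictive mean and variance are taken as affine functions of the ensemble mean and variance, and converted to log-normal parameters through the moment equations. The coefficients a, b, c, d below are placeholders for values that would be estimated from training data (e.g. by CRPS minimization); nothing here reproduces the paper's trained models.

```python
import numpy as np
from scipy.stats import lognorm

def ln_emos_params(ens_mean, ens_var, a=0.1, b=1.0, c=0.5, d=1.0):
    """EMOS-style link: predictive mean m and variance v are affine in the
    ensemble statistics. The matched log-normal parameters follow from
    m = exp(mu + sigma^2/2) and v = (exp(sigma^2) - 1) * m^2."""
    m = a + b * ens_mean          # predictive mean (a, b: assumed coefficients)
    v = c + d * ens_var           # predictive variance (c, d: assumed)
    sigma2 = np.log(1.0 + v / m**2)
    mu = np.log(m) - 0.5 * sigma2
    return mu, np.sqrt(sigma2)

mu, sigma = ln_emos_params(ens_mean=8.0, ens_var=4.0)
dist = lognorm(s=sigma, scale=np.exp(mu))
print(dist.mean(), dist.var())  # recovers the linked mean and variance
```

The log-normal support on the positive half-line is what makes this family attractive for wind speed, compared with a truncated normal that must be cut at zero.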
 Typically, point forecasting methods are compared and assessed by means of an error measure or scoring function, with the absolute error and the squared error being key examples. The individual scores are averaged over forecast cases, to result in a summary measure of the predictive performance, such as the mean absolute error or the mean squared error. I demonstrate that this common practice can lead to grossly misguided inferences, unless the scoring function and the forecasting task are carefully matched. Effective point forecasting requires that the scoring function be specified ex ante, or that the forecaster receives a directive in the form of a statistical functional, such as the mean or a quantile of the predictive distribution. If the scoring function is specified ex ante, the forecaster can issue the optimal point forecast, namely, the Bayes rule. If the forecaster receives a directive in the form of a functional, it is critical that the scoring function be consistent for it, in the sense that the expected score is minimized when following the directive. A functional is elicitable if there exists a scoring function that is strictly consistent for it. Expectations, ratios of expectations and quantiles are elicitable. For example, a scoring function is consistent for the mean functional if and only if it is a Bregman function. It is consistent for a quantile if and only if it is generalized piecewise linear. Similar characterizations apply to ratios of expectations and to expectiles. Weighted scoring functions are consistent for functionals that adapt to the weighting in peculiar ways. Not all functionals are elicitable; for instance, conditional value-at-risk is not, despite its popularity in quantitative finance.
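The quantile case can be checked numerically: the pinball score is generalized piecewise linear and hence consistent for the tau-quantile, so minimizing its average over a sample recovers the empirical quantile. A sketch with simulated exponential outcomes:

```python
import numpy as np

def pinball(y, x, tau):
    """Quantile (pinball) score: consistent for the tau-quantile."""
    return np.mean(np.where(y >= x, tau * (y - x), (1 - tau) * (x - y)))

rng = np.random.default_rng(1)
y = rng.exponential(scale=1.0, size=100_000)
tau = 0.9

# The point forecast minimizing the average pinball loss is the tau-quantile
grid = np.linspace(0.1, 5.0, 500)
scores = [pinball(y, x, tau) for x in grid]
best = grid[int(np.argmin(scores))]
print(best, np.quantile(y, tau))  # both near -ln(0.1) ~= 2.303
```

Repeating the experiment with the squared error instead would drive the minimizer to the sample mean, which is the paper's point: the scoring function silently selects the functional being forecast.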
Abstract. Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors plays an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR), which yield the correct variability and the largest correlation between ensemble error and spread, should be preferred.
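The contrast between OLS and Tikhonov regularization under multicollinearity can be sketched as follows. The predictors, noise levels, and regularization weight are illustrative, not the paper's setup.

```python
import numpy as np

def fit_correction(X, y, alpha=0.0):
    """Linear post-processing: regress observations on forecast predictors.
    alpha = 0 gives OLS; alpha > 0 adds a Tikhonov (ridge) penalty that
    stabilizes the solution when predictors are nearly collinear."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

rng = np.random.default_rng(2)
n = 200
f1 = rng.normal(size=n)
f2 = f1 + 1e-3 * rng.normal(size=n)    # nearly collinear second predictor
X = np.column_stack([np.ones(n), f1, f2])
y = 0.5 + 0.8 * f1 + 0.1 * rng.normal(size=n)

beta_ols = fit_correction(X, y)
beta_ridge = fit_correction(X, y, alpha=1.0)
# Both recover the combined coefficient ~0.8, but OLS may split it wildly
# between f1 and f2 while ridge shares it evenly.
print(round(beta_ols[1] + beta_ols[2], 2), round(beta_ridge[1] + beta_ridge[2], 2))
```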
 Abstract We propose a method for post‐processing an ensemble of multivariate forecasts in order to obtain a joint predictive distribution of weather. Our method utilizes existing univariate post‐processing techniques, in this case ensemble Bayesian model averaging (BMA), to obtain estimated marginal distributions. However, implementing these methods individually offers no information regarding the joint distribution. To correct this, we propose the use of a Gaussian copula, which offers a simple procedure for recovering the dependence that is lost in the estimation of the ensemble BMA marginals. Our method is applied to 48 h forecasts of a set of five weather quantities using the eight‐member University of Washington mesoscale ensemble. We show that our method recovers many well‐understood dependencies between weather quantities and subsequently improves calibration and sharpness over both the raw ensemble and a method which does not incorporate joint distributional information. Copyright © 2012 Royal Meteorological Society
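A minimal Gaussian-copula sketch, with ordinary parametric marginals standing in for the post-processed (ensemble BMA) marginals of the paper: correlated Gaussian draws are pushed through the standard normal CDF to uniforms and then through each marginal's quantile function, which restores the dependence that univariate post-processing discards. The correlation matrix and marginal choices below are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm, gamma

def gaussian_copula_sample(corr, marginals, n, rng):
    """Draw joint samples: correlated Gaussians -> uniforms via Phi ->
    each marginal's quantile function (inverse CDF)."""
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n, len(marginals))) @ L.T  # cov(z) = corr
    u = norm.cdf(z)
    return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])

rng = np.random.default_rng(3)
corr = np.array([[1.0, 0.7], [0.7, 1.0]])
marg = [norm(10, 2), gamma(a=2, scale=1.5)]  # e.g. temperature and wind speed
s = gaussian_copula_sample(corr, marg, 50_000, rng)
print(np.corrcoef(s[:, 0], s[:, 1])[0, 1])  # dependence recovered, near 0.7
```

Sampling each marginal independently instead would give the correct univariate calibration but a correlation near zero, which is exactly the deficiency the copula step repairs.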
 In public discussions of the quality of forecasts, attention typically focuses on the predictive performance in cases of extreme events. However, the restriction of conventional forecast evaluation methods to subsets of extreme observations has unexpected and undesired effects, and is bound to discredit skillful forecasts when the signal-to-noise ratio in the data generating process is low. Conditioning on outcomes is incompatible with the theoretical assumptions of established forecast evaluation methods, thereby confronting forecasters with what we refer to as the forecaster's dilemma. For probabilistic forecasts, proper weighted scoring rules have been proposed as decision-theoretically justifiable alternatives for forecast evaluation with an emphasis on extreme events. Using theoretical arguments, simulation experiments and a real data study on probabilistic forecasts of U.S. inflation and gross domestic product (GDP) growth, we illustrate and discuss the forecaster's dilemma along with potential remedies.
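The dilemma is easy to reproduce in simulation: when outcomes from a low signal-to-noise process are evaluated only on their extreme realizations, a forecaster that always cries wolf beats the forecaster issuing the true conditional mean. All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
signal = rng.normal(size=n)
y = signal + rng.normal(size=n)        # outcome = predictable signal + noise

skillful = signal                      # the true conditional mean of y
alarmist = np.full(n, 3.0)             # always predicts an extreme value

mae = lambda f: np.mean(np.abs(f - y))
extreme = y > 2.0                      # condition on extreme *outcomes*
mae_sub = lambda f: np.mean(np.abs(f[extreme] - y[extreme]))

print(mae(skillful) < mae(alarmist))        # True: skillful wins overall
print(mae_sub(alarmist) < mae_sub(skillful))  # True: the dilemma on the subset
```

Conditioning on the outcome selects exactly the cases where the unpredictable noise happened to be large, which is why the paper argues for proper weighted scoring rules rather than subset evaluation.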
 Statistical post-processing of dynamical forecast ensembles is an essential component of weather forecasting. In this article, we present a post-processing method that generates full predictive probability distributions for precipitation accumulations based on ensemble model output statistics (EMOS). We model precipitation amounts by a generalized extreme value distribution that is left-censored at zero. This distribution permits modelling precipitation on the original scale without prior transformation of the data. A closed form expression for its continuous rank probability score can be derived and permits computationally efficient model fitting. We discuss an extension of our approach that incorporates further statistics characterizing the spatial variability of precipitation amounts in the vicinity of the location of interest. The proposed EMOS method is applied to daily 18-h forecasts of 6-h accumulated precipitation over Germany in 2011 using the COSMO-DE ensemble prediction system operated by the German Meteorological Service. It yields calibrated and sharp predictive distributions and compares favourably with extended logistic regression and Bayesian model averaging which are state of the art approaches for precipitation post-processing. The incorporation of neighbourhood information further improves predictive performance and turns out to be a useful strategy to account for displacement errors of the dynamical forecasts in a probabilistic forecasting framework.
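The left-censoring construction can be sketched directly: all mass the GEV places below zero is collapsed into a point mass at exactly zero (a dry forecast), giving a probability of no precipitation and quantiles via scipy's genextreme, whose shape parameter c is the negative of the usual GEV shape xi. The parameter values are illustrative, not fitted EMOS output.

```python
from scipy.stats import genextreme

def censored_gev(loc, scale, shape):
    """Predictive distribution for precipitation: a GEV left-censored at
    zero. Note scipy's genextreme uses c = -xi relative to the usual
    GEV parameterization."""
    dist = genextreme(c=-shape, loc=loc, scale=scale)
    p_dry = dist.cdf(0.0)              # probability of no precipitation

    def quantile(tau):
        # quantiles at or below the censoring mass sit exactly at zero
        return 0.0 if tau <= p_dry else dist.ppf(tau)

    return p_dry, quantile

p_dry, q = censored_gev(loc=1.0, scale=2.0, shape=0.2)
print(round(p_dry, 3), round(q(0.9), 2))  # -> 0.184 6.68
```

Censoring, rather than truncating, is what lets the same parametric family describe both dry spells and heavy accumulations on the original scale, which in turn yields the closed-form CRPS the abstract mentions.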