Algorithm 659

Type: Article
Publication Date: 1988-03-01
Citations: 820
DOI: https://doi.org/10.1145/42288.214372

Abstract

We empirically compare the accuracy and speed of the low-discrepancy sequence generators of Sobol' and Faure. These generators are useful for multidimensional integration and global optimization. We discuss our implementation of the Sobol' generator.
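As a rough illustration of what such generators compute (a minimal Python sketch of the one-dimensional base-2 radical inverse, the simplest relative of the Sobol' construction, not the Fortran implementation the paper describes):

```python
def radical_inverse(n, base=2):
    """Van der Corput radical inverse: reflect the base-b digits of n
    about the radix point. In base 2 this is the first Sobol' coordinate."""
    q, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        q += digit / denom
    return q

def qmc_integrate(f, n):
    # Equal-weight quasi-Monte Carlo rule on [0, 1].
    return sum(f(radical_inverse(i)) for i in range(1, n + 1)) / n

estimate = qmc_integrate(lambda x: x * x, 1024)  # exact integral is 1/3
```

Because the points fill [0, 1] far more evenly than pseudorandom draws, the error of the equal-weight rule shrinks roughly like 1/n rather than 1/sqrt(n).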

Locations

  • ACM Transactions on Mathematical Software


Related Papers
Sobol’ sequences are widely used for quasi-Monte Carlo methods that arise in financial applications. Sobol’ sequences have parameter values called direction numbers, which are freely chosen by the user, so there are several implementations of Sobol’ sequence generators. The aim of this paper is to provide a comparative study of (non-commercial) high-dimensional Sobol’ sequences by calculating financial models. Additionally, we implement the Niederreiter sequence (in base 2) with a slight modification, that is, we reorder the rows of the generating matrices, and analyze and compare it with the Sobol’ sequences.
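To make the role of direction numbers concrete, here is a minimal Python sketch of a single Sobol' coordinate built from the primitive polynomial x + 1 with initial value m_1 = 1, emitting points in Gray-code order. This is a textbook toy choice; practical generators select initial direction numbers from carefully searched tables such as Joe and Kuo's.

```python
def sobol_coordinate(n_points, bits=30):
    """One Sobol' coordinate from the primitive polynomial x + 1 over GF(2),
    with initial value m_1 = 1 (a toy choice; production tables differ)."""
    m = [1]
    for _ in range(1, bits):
        m.append((2 * m[-1]) ^ m[-1])          # recurrence induced by x + 1
    # Direction integers: m_k scaled to occupy the top bits of a fraction.
    v = [m[k] << (bits - 1 - k) for k in range(bits)]
    x, out = 0, []
    for i in range(n_points):
        out.append(x / (1 << bits))
        c = (~i & (i + 1)).bit_length() - 1    # index of lowest zero bit of i
        x ^= v[c]                              # Gray-code update
    return out

pts = sobol_coordinate(8)
```

The Gray-code update changes a single direction integer per step, yet every block of 2^m consecutive points still lands on all multiples of 2^-m exactly once.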
We describe an implementation of the de-biased estimator using mixed sequences; these are sequences obtained from pseudorandom and low-discrepancy sequences. We use this implementation to numerically solve some stochastic differential equations from computational finance. The mixed sequences, when combined with Brownian bridge or principal component analysis constructions, offer convergence rates significantly better than the Monte Carlo implementation.
Determinantal point processes (DPPs) are elegant probabilistic models of repulsion that arise in quantum physics and random matrix theory. In contrast to traditional structured models like Markov random fields, which become intractable and hard to approximate in the presence of negative correlations, DPPs offer efficient and exact algorithms for sampling, marginalization, conditioning, and other inference tasks. While they have been studied extensively by mathematicians, giving rise to a deep and beautiful theory, DPPs are relatively new in machine learning. Determinantal Point Processes for Machine Learning provides a comprehensible introduction to DPPs, focusing on the intuitions, algorithms, and extensions that are most relevant to the machine learning community, and shows how DPPs can be applied to real-world applications like finding diverse sets of high-quality search results, building informative summaries by selecting diverse sentences from documents, modeling non-overlapping human poses in images or video, and automatically building timelines of important news stories. It presents the general mathematical background to DPPs along with a range of modeling extensions, efficient algorithms, and theoretical results that aim to enable practical modeling and learning.
The overall performance of the quasi–Monte Carlo (QMC) sequences proposed by Halton and Faure, as well as their scrambled versions, is numerically compared against each other and against the Latin hypercube sampling sequence in the context of the simulated likelihood estimation of a mixed multinomial logit model of choice. In addition, the efficiency of the QMC sequences generated with and without scrambling is compared across observations, and the performance of the Box–Muller and inverse normal transform procedures is tested. Numerical experiments were performed in five dimensions with 25, 125, and 625 draws and in 10 dimensions with 100 draws. Results indicate that the Faure sequence consistently outperforms the Halton sequence and that the scrambled versions of the Faure sequence perform best overall.
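For reference, the Halton sequence compared in this study is simply the radical inverse run in a different prime base per dimension; a minimal sketch (bases chosen here for illustration):

```python
def radical_inverse(n, base):
    """Reflect the base-b digits of n about the radix point."""
    q, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        q += digit / denom
    return q

def halton(n_points, bases=(2, 3, 5, 7, 11)):
    """First n_points of the Halton sequence, one coprime prime base per dimension."""
    return [[radical_inverse(i, b) for b in bases]
            for i in range(1, n_points + 1)]

points = halton(125)  # e.g. 125 draws in 5 dimensions, as in the smaller runs here
```

The bases must be pairwise coprime (primes are the usual choice); reusing a base in two dimensions would make those coordinates identical.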
This study performs parameter inference in a partial differential equation system of pulmonary circulation. We use a fluid dynamics network model that takes selected parameter values and mimics the behaviour of the pulmonary haemodynamics under normal physiological and pathological conditions. This is of medical interest as it enables tracking the progression of pulmonary hypertension. We show how we make the fluids model tractable by reducing the parameter dimension from a 55D to a 5D problem. The Delayed Rejection Adaptive Metropolis algorithm, coupled with constrained non-linear optimization, is successfully used to learn the parameter values and quantify the uncertainty in the parameter estimates. To accommodate different magnitudes of the parameter values, we introduce an improved parameter scaling technique in the Delayed Rejection Adaptive Metropolis algorithm. Formal convergence diagnostics are employed to check for convergence of the Markov chains. Additionally, we perform model selection using different information criteria, including the Watanabe-Akaike information criterion.
The computation of Gaussian orthant probabilities has been extensively studied for low-dimensional vectors. Here, we focus on the high-dimensional case and we present a two-step procedure relying on both deterministic and stochastic techniques. The proposed estimator relies on splitting the probability into a low-dimensional term and a remainder. While the low-dimensional probability can be estimated by fast and accurate quadrature, the remainder requires Monte Carlo sampling. We further refine the estimation by using a novel asymmetric nested Monte Carlo (anMC) algorithm for the remainder and we highlight cases where this approximation brings substantial efficiency gains. The proposed methods are compared against state-of-the-art techniques in a numerical study, which also calls attention to the advantages and drawbacks of the procedure. Finally, the proposed method is applied to derive conservative estimates of excursion sets of expensive-to-evaluate deterministic functions under a Gaussian random field prior, without requiring a Markov assumption. Supplementary material for this article is available online.
The Monte-Carlo (MC) technique is a well-known solution for statistical analysis. In contrast to probabilistic (non-Monte Carlo) statistical static timing analysis (SSTA) techniques, which are typically derived from simple statistical or timing models, the MC-based SSTA technique encompasses complicated timing and process variation models. However, a precise analysis that involves a traditional MC-based technique requires many timing simulation runs (1000s). In this paper, the behavior of the critical delay of digital circuits is investigated by using a Legendre polynomial-based ANOVA decomposition. The analysis verifies that the variance of the critical delay is mainly due to the pairwise interactions among the principal components (PCs) of the process parameters. Based on this fact, recent progress on the MC-based SSTA, through Latin hypercube sampling (LHS), is also studied. It is shown that this technique is prone to inefficient estimation of the critical delay variance and quantiles. Inspired by the decomposition observations, an efficient algorithm is proposed which produces optimally low L2-discrepancy quasi-MC (QMC) samples, significantly improving the precision of critical delay statistical estimations compared with the MC, LHS, and traditional QMC techniques.
Two problems arise when the expectation of some function with respect to a nonuniform multivariate distribution has to be computed by (quasi-) Monte Carlo integration: the integrand can have singularities when the domain of the distribution is unbounded, and it can be very expensive or even impossible to sample points from a general multivariate distribution. We show that importance sampling is a simple method to overcome both problems.
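The reweighting idea can be sketched in a few lines. Assuming an illustrative target density p(x) = 2x on [0, 1] (a made-up example, not one from the paper), draws from an easy uniform proposal q are weighted by p/q:

```python
import random

random.seed(42)
n = 100_000
# Estimate E_p[X] with p(x) = 2x on [0, 1]; the exact value is 2/3.
# Sample from the easy proposal q = Uniform(0, 1), reweight by p(x)/q(x) = 2x.
total = 0.0
for _ in range(n):
    x = random.random()
    total += x * (2 * x)   # f(x) times the importance weight
estimate = total / n
```

The same trick works with quasirandom points in place of `random.random()`, which is how importance sampling combines with the QMC integration discussed here.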
Random scrambling of deterministic (t, m, s)-nets and (t, s)-sequences eliminates their inherent bias while retaining their low-discrepancy properties. This article describes an implementation of two types of random scrambling, one proposed by Owen and another proposed by Faure and Tezuka. The four different constructions of digital sequences implemented are those proposed by Sobol', Faure, Niederreiter, and Niederreiter and Xing. Because the random scrambling involves manipulating all digits of each point, the code must be written carefully to minimize the execution time. Computed root mean square discrepancies of the scrambled sequences are compared to known theoretical results. Furthermore, the performances of these sequences on various test problems are discussed.
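Owen scrambling permutes the digits of every point hierarchically; as a much simpler stand-in that still illustrates the digit manipulation involved, a base-2 random digital shift XORs each point with a single random bit string:

```python
import random

def digital_shift(points, bits=30, seed=1):
    """Random digital shift in base 2: XOR every point's first `bits`
    binary digits with one shared random shift. A weaker randomization
    than full Owen scrambling, shown only to illustrate the idea."""
    shift = random.Random(seed).getrandbits(bits)
    scale = 1 << bits
    return [((int(p * scale) ^ shift) / scale) for p in points]

net = [k / 8 for k in range(8)]   # a tiny (0, 3, 1)-net in base 2
shifted = digital_shift(net)
```

Since XOR is an involution, applying the same shift twice recovers the original points, and the shifted set remains inside [0, 1).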
This paper aims at establishing the cumulative distribution function (CDF) of the output variable of the probabilistic optimal power flow. In the context of the probability weighted moment (PWM), the uncertainties in the power system are modelled by a ninth-order polynomial normal transformation (NPNT) technique, whereby the dependencies among the inputs are conveniently handled. The quasi-Monte Carlo simulation (MCS) is employed to get the statistical information of the outputs. Based on the PWMs of the output variable, the CDF is reconstructed by the NPNT technique. Testing on a modified 118-bus system, results from the proposed method are compared against those from MCS. The proposed method demonstrates a high level of accuracy for the mean, standard deviation and CDF, while significantly reducing the computational burden.
Quasi-Monte Carlo sequences have been shown to provide accurate option price approximations for a variety of options. In this paper, we apply quasi-Monte Carlo sequences in a duality approach to value American options. We compare the results using different low discrepancy sequences and estimate error bounds and computational effort. The results demonstrate the value of sequences using expansions of irrationals.
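As a sketch of the general QMC pricing recipe (a one-dimensional European call under Black-Scholes with made-up parameters, far simpler than the American options valued in this paper), quasirandom points are mapped through the inverse normal CDF and the discounted payoffs averaged:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def radical_inverse(n):
    # Base-2 Van der Corput sequence as the quasirandom driver.
    q, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, 2)
        denom *= 2
        q += digit / denom
    return q

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0   # illustrative parameters
norm = NormalDist()
n = 4096

disc = exp(-r * T)
drift = (r - 0.5 * sigma ** 2) * T
total = 0.0
for i in range(1, n + 1):
    z = norm.inv_cdf(radical_inverse(i))   # quasirandom point -> normal draw
    ST = S0 * exp(drift + sigma * sqrt(T) * z)
    total += disc * max(ST - K, 0.0)       # discounted call payoff
qmc_price = total / n

# Black-Scholes closed form for comparison.
d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
bs_price = S0 * norm.cdf(d1) - K * disc * norm.cdf(d2)
```

With a few thousand quasirandom points the estimate already sits within a few cents of the closed form; a pseudorandom average of the same size is typically noisier.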
The Voronoi tessellation is the partition of space for a given seed pattern, and the result of the partition depends completely on the type of pattern: "random", giving Poisson-Voronoi tessellations (PVT), or "non-random", giving non-Poisson-Voronoi tessellations. In this note we shall consider properties of Voronoi tessellations with centers generated by Sobol quasirandom sequences, which produce a more ordered disposition of the centers with respect to the PVT case. A probability density function for volumes of these Sobol Voronoi tessellations (SVT) will be proposed and compared with results of numerical simulations. An application will be presented concerning the local structure of gas ($CO_2$) in the liquid-gas coexistence phase. Furthermore, a probability distribution will be computed for the length of chords resulting from the intersections of random lines with a three-dimensional SVT. The agreement of the analytical formula with the results from a computer simulation will also be investigated. Finally, a new type of Voronoi tessellation based on adjustable positions of seeds is introduced which generalizes both the PVT and SVT cases.
In this study, we present the development of an advanced air pollution modeling approach, which incorporates cutting-edge stochastic techniques for large-scale simulations of long-range air pollutant transportation. The Unified Danish Eulerian Model (UNI-DEM) serves as a crucial mathematical framework with numerous applications in studies concerning the detrimental effects of heightened air pollution levels. We employ the UNI-DEM model in our research to obtain trustworthy insights into critical questions pertaining to environmental preservation. Our proposed methodology is a highly convergent quasi-Monte Carlo technique that relies on a unique symmetrization lattice rule. By fusing the concepts of special functions and optimal generating vectors, we create a novel algorithm grounded in the component-by-component construction method, which has been recently introduced. This amalgamation yields particularly impressive outcomes for lower-dimensional cases, substantially enhancing the performance of the most advanced existing methods for calculating the Sobol sensitivity indices of the UNI-DEM model. This improvement is vital, as these indices form an essential component of the digital ecosystem for environmental analysis.
The probabilistic characteristics of daily wind speed are not well captured by simple density functions such as Normal or Weibull distributions, as suggested by the existing literature. The unmodeled uncertainties can cause unknown influences on the power system operation. In this paper, we develop a new stochastic scheme for the probabilistic optimal power flow (POPF) problem, which can cope with arbitrarily complex wind speed distributions and also take into account the correlation of different wind farms. A multivariate Gaussian mixture model (GMM) is employed to approximate actual wind speed distributions from multiple wind farms. Furthermore, we propose to adopt the Markov Chain Monte Carlo (MCMC) sampling technique to deliver wind speed samples as the input of POPF. We also integrate a Sobol-based quasi-Monte Carlo (QMC) technique into the MCMC sampling process to obtain a faster convergence rate. The IEEE 14- and 118-bus benchmark systems with additional wind farms are used to examine the effectiveness of the proposed POPF scheme.
Risk and policy analysis involves consideration of uncertainties. A quantitative probabilistic method for uncertainty analysis includes: (1) quantifying and assigning probabilistic distributions to the input uncertainties, (2) sampling the distributions of these uncertain parameters in an iterative fashion using Monte Carlo methods, (3) propagating the effects of uncertainties through the model, and (4) predicting the outcomes in terms of probabilistic measures like mean, variance, and fractiles. However, the results of the probabilistic analysis depend on the number of samples chosen. The sample size required for a particular analysis depends on various factors such as type of model, the random number generator used, type of distributions, and the output probabilistic measure and cannot be universally defined. The general tendency is to reduce the samples as much as possible without realizing the effect on decisions. For example, the mean of the output requires a number of samples that is an order of magnitude less compared to the variance. Therefore, it is desirable to use a sampling technique that can predict the output probabilistic measure accurately with the minimum number of samples. In this work, we present new sampling techniques based on Quasi-Monte Carlo sequences and Latin hypercube sampling. © 2004 American Institute of Chemical Engineers Environ Prog, 23: 141–157, 2004
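Latin hypercube sampling, one of the techniques considered, can be sketched as follows (plain Python, illustrative function name): each one-dimensional margin is split into n equal strata, and every sample occupies a distinct stratum in every dimension:

```python
import random

def latin_hypercube(n, d, seed=0):
    """n samples in [0, 1)^d whose every 1-D margin is stratified into n cells."""
    rng = random.Random(seed)
    columns = []
    for _ in range(d):
        strata = list(range(n))
        rng.shuffle(strata)                       # one stratum per sample
        columns.append([(s + rng.random()) / n for s in strata])
    return [list(point) for point in zip(*columns)]

sample = latin_hypercube(100, 3)
```

Shuffling the strata independently per dimension is what distinguishes this from a plain grid: the margins stay perfectly stratified while the joint points remain random.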
Problems in computational finance share many of the characteristics that challenge us in statistical circuit analysis: high dimensionality, profound nonlinearity, stringent accuracy requirements, and expensive sample simulation. We offer a detailed experimental study of how one celebrated technique from this domain, quasi-Monte Carlo (QMC) analysis, can be used for fast statistical circuit analysis. In contrast with traditional pseudo-random Monte Carlo sampling, QMC substitutes a (shorter) sequence of deterministically chosen sample points. Across a set of digital and analog circuits, in 90nm and 45nm technologies, varying in size from 30 to 400 devices, we obtain speedups in parametric yield estimation from 2x to 50x.
Sensitivity analysis of model output is relevant to a number of practices, including verification of models and computer code quality assurance. It deals with the identification of influential model parameters, especially in complex models implemented in computer programs with many uncertain input variables. In a recent article, a new method for sensitivity analysis named HIM*, based on a rank transformation of the uncertainty importance measure suggested by Hora and Iman, was proved very powerful for performing automated sensitivity analysis of model output, even in the presence of model non-monotonicity. The same was not true of other widely used non-parametric techniques such as standardized rank regression coefficients. A drawback of the HIM* method was the large dimension of the stochastic sample needed for its estimation, which made HIM* impracticable for systems with a large number of uncertain parameters. In the present note a more effective sampling algorithm, based on Sobol's quasirandom generator, is coupled with HIM*, thereby greatly reducing the sample size needed for an effective identification of influential variables. The performance of the new technique is investigated on two different benchmarks.
Environmental security is among the top priorities worldwide, and there are many difficulties in this area. The reason for this is a painful subject for society and healthcare systems. Multidimensional sensitivity analysis is fundamental in the process of validating the accuracy and reliability of large-scale computational models of air pollution. In this paper, we present an improved version of the well-known Sobol sequence, which shows a significant improvement over the best available existing sequences in the measurement of the sensitivity indices of the digital ecosystem under consideration. We performed a complicated comparison with the best available low-discrepancy sequences for multidimensional sensitivity analysis to study the model’s output with respect to variations in the input emissions of anthropogenic pollutants and to evaluate the rates of several chemical reactions. Our results, which are presented in this paper through a sensitivity analysis, will play an extremely important multi-sided role.
Thorough examination of various aspects related to the distribution of air pollutants in a specific region and the factors contributing to high concentrations is essential, as these elevated levels can be detrimental. To accomplish this, the development and improvement of a digital twin that encompasses all relevant physical processes in the atmosphere is necessary. This tool, known as DIGITAL AIR, has been created, and it is now necessary to extend it with precise sensitivity analysis. DIGITAL AIR is gaining popularity due to its effectiveness in addressing complex problems that arise in intricate environments; this motivates our further investigations. In this paper, we focus on the preparation and further investigation of DIGITAL AIR through sensitivity analysis with improved stochastic approaches for investigating high-level air pollutants. We discuss and test the utilization of this digital tool in tackling the issue. The unified Danish Eulerian model (UNI-DEM) plays a crucial role within DIGITAL AIR. This mathematical model, UNI-DEM, is highly versatile and can be applied to various studies concerning the adverse effects caused by elevated air pollution levels.
A randomized quasi-Monte Carlo (RQMC) method is presented to compute the price of a barrier option. It is assumed that stock prices are modeled with a fractional Brownian motion (FBM). The FBM is a Gaussian process with stationary increments, which are dependent except in the case H = ½. The FBM can model stock prices with short or long memory. We propose a trajectory generation technique based on fast Fourier transforms to simulate stock prices modeled by FBM. A stock price trajectory is utilized to price barrier options, which are options whose payoff depends on the stock prices during the option’s lifetime. Using the simulated stock price trajectories, the RQMC method can determine the price of a barrier option under FBM. We conclude that RQMC is a more efficient technique for calculating the price of barrier options than standard Monte Carlo (MC).
This paper is a contemporary review of QMC (‘quasi-Monte Carlo’) methods, that is, equal-weight rules for the approximate evaluation of high-dimensional integrals over the unit cube [0,1]^s, where s may be large, or even infinite. After a general introduction, the paper surveys recent developments in lattice methods, digital nets, and related themes. Among those recent developments are methods of construction of both lattices and digital nets, to yield QMC rules that have a prescribed rate of convergence for sufficiently smooth functions, and ideally also guaranteed slow growth (or no growth) of the worst-case error as s increases. A crucial role is played by parameters called ‘weights’, since a careful use of the weight parameters is needed to ensure that the worst-case errors in an appropriately weighted function space are bounded, or grow only slowly, as the dimension s increases. Important tools for the analysis are weighted function spaces, reproducing kernel Hilbert spaces, and discrepancy, all of which are discussed with an appropriate level of detail.
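A rank-1 lattice rule, the simplest of the lattice methods surveyed, is one line of arithmetic; the generating vector below is chosen ad hoc for illustration rather than by the component-by-component constructions the paper discusses:

```python
from math import cos, pi

def rank1_lattice(n, z):
    """Rank-1 lattice rule: point i is (frac(i * z_1 / n), ..., frac(i * z_d / n))."""
    return [[(i * zj % n) / n for zj in z] for i in range(n)]

# Equal-weight rule applied to a smooth 2-D test integrand with integral 1.
n, z = 1009, (1, 433)                     # n prime; z picked ad hoc
pts = rank1_lattice(n, z)
f = lambda x, y: (1 + cos(2 * pi * x)) * (1 + cos(2 * pi * y))
approx = sum(f(x, y) for x, y in pts) / n
```

Because none of the integrand's frequency vectors lies in the dual lattice here, the rule integrates this trigonometric polynomial exactly (up to rounding), which is precisely the mechanism behind the convergence rates the survey analyzes.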
Wind farms commonly cluster in regions rich in wind resources. Thus, correlation of wind speeds from different wind farms should not be ignored when modeling a power system with large wind energy penetration. This paper proposes a probabilistic optimal power flow (POPF) technique based on the quasi-Monte Carlo simulation (QMCS) considering the correlation of wind speeds using copula functions. In this paper, a copula function is used to model the dependent structure of random wind speeds and their forecast errors. QMCS is employed in the sampling procedure to reduce computation burden. The proposed method is applied in probabilistic power flow (PPF). Furthermore, the PPF is used in the POPF problem that aims at minimizing the expectation and downside risk of fuel cost simultaneously. Simulation studies are conducted on a modified IEEE 118-bus power system with wind farms integrated in two areas, and the results show that the accuracy and efficiency are improved by the proposed method.
Algorithm 647: Implementation and Relative Efficiency of Quasirandom Sequence Generators. Bennett L. Fox, Computer Science Department, University of Montreal, P.O. Box 6128, Station A, Montreal, Quebec, Canada H3C 3J7. ACM Transactions on Mathematical Software, Volume 12, Issue 4, pp. 362–376. Published 1 December 1986. https://doi.org/10.1145/22721.356187