On the distribution of points in a cube and the approximate evaluation of integrals

Authors

  • I. M. Sobol'

Type: Article
Publication Date: 1967-01-01
Citations: 2553
DOI: https://doi.org/10.1016/0041-5553(67)90144-9

Locations

  • USSR Computational Mathematics and Mathematical Physics
Related works

W. W. L. Chen (Imperial College, London), "On irregularities of distribution and approximate evaluation of certain functions", The Quarterly Journal of Mathematics, Volume 36, Issue 2, June 1985, pp. 173–182, https://doi.org/10.1093/qmath/36.2.173. Received 13 February 1984; published 1 June 1985.
  • G. G. Il'yuta, Monodromy groups of boundary singularities
  • V. G. Kurbatov, On the invertibility of almost periodic operators
  • Vladimir I. Arnol'd, Singularities of systems of rays
  • A. V. Brailov and A. T. Fomenko, The topology of integral submanifolds of completely integrable Hamiltonian systems
  • O. A. Oleinik, Joint meetings of the Petrovskii Seminar on differential equations and mathematical problems of physics, and the Moscow Mathematical Society (twelfth meeting, 18–21 January 1989)
  • A. Chobanyan, Structure of the set of sums of a conditionally convergent series in a normed space
  • F. A. Chechkin, Asymptotics of solutions of a class of non-linear elliptic equations in a cylindrical domain
  • On the number of integral points in a domain
The result of V.A. Bykovsky and M.D. Monina on the distribution of integer points on the three-dimensional sphere $a_1^2 + a_2^2 + a_3^2 + a_4^2 = d$ with odd d is extended to the case of even d.
Let U = [0,1]. Suppose that g is a Lebesgue-integrable function, not necessarily bounded, in U^2, and that h is any function in U^2. Let P = P(N) be a distribution of N points in U^2 such that h(y) is finite for every y ∈ P. For x = (x_1, x_2) in U^2, let B(x) denote the rectangle consisting of all y = (y_1, y_2) in U^2 satisfying 0 < y_1 < x_1 and 0 < y_2 < x_2, and write
$$Z[P; h; B(x)] = \sum_{y \in P \cap B(x)} h(y). \tag{1}$$
Let μ denote the Lebesgue measure in U^2, and write
$$D[P; h; g; B(x)] = Z[P; h; B(x)] - N \int_{B(x)} g(y)\, d\mu. \tag{2}$$
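A minimal numerical sketch of definitions (1) and (2), assuming h ≡ 1 and g ≡ 1 (my illustrative choices, so Z counts the points of P inside B(x) and D reduces to the classical local discrepancy):

```python
import numpy as np

def Z(P, h, x):
    """Sum of h(y) over points y of P inside the box B(x) = (0, x1) x (0, x2)."""
    inside = np.all((P > 0) & (P < x), axis=1)
    return sum(h(y) for y in P[inside])

def D(P, h, g_integral_over_B, x):
    """D[P; h; g; B(x)] = Z[P; h; B(x)] - N * integral of g over B(x)."""
    N = len(P)
    return Z(P, h, x) - N * g_integral_over_B(x)

rng = np.random.default_rng(0)
P = rng.random((128, 2))                   # 128 random points in U^2
h = lambda y: 1.0                          # h ≡ 1: Z just counts points
vol = lambda x: x[0] * x[1]                # ∫_{B(x)} g dμ with g ≡ 1
print(D(P, h, vol, np.array([0.5, 0.7])))  # signed local discrepancy at x
```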
ABSTRACT Stochastic approximation algorithms for least-squares approximation to density and distribution functions are considered. The main results are necessary and sufficient parameter conditions for the convergence of the approximation processes, and a generalization to some time-dependent density and distribution functions.
For a population following a uniform distribution with one parameter known, Thomas Yageen (1996) derived an efficient estimator for the other. Here, for the case when both parameters are unknown, we obtain uniformly minimum variance unbiased (UMVU) estimators for them. Moreover, for a sample drawn without replacement from an integer-valued population uniformly distributed on consecutive integers with both endpoints unknown, we obtain a UMVU estimator and apply it to a serial-number analysis problem. Finally, under the criterion of mean squared error, an improved estimator is provided.
In this paper we describe a framework for the comparison of a finite set of points with a probability measure if the latter belongs to a simple class. The measure of closeness chosen quantifies the degree of agreement obtained when a prescribed collection of test functions is simultaneously integrated with respect to the given probability measure and the set of points (identified with a set of point masses). No specific assumptions are made about the provenance of the point set, although our results have a clear application to certain Markov chain Monte Carlo integration problems.
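The abstract fixes no particular test functions; a sketch under the assumption that the measure is the uniform one on [0,1] and the test collection is a few low-order moments (both stand-ins of mine) might look like this:

```python
import numpy as np

# Test functions paired with their exact integrals under the uniform
# measure on [0, 1] (hypothetical choices for illustration).
tests = [
    (lambda x: x,                 1 / 2),  # E[X]
    (lambda x: x**2,              1 / 3),  # E[X^2]
    (lambda x: np.cos(np.pi * x), 0.0),    # E[cos(pi X)]
]

def closeness(points, tests):
    """Worst-case gap between empirical and exact test-function integrals."""
    return max(abs(np.mean(f(points)) - mu) for f, mu in tests)

pts = np.random.default_rng(1).random(1000)   # the point set being judged
print(closeness(pts, tests))
```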
Let ${\mit\Theta}^{(n)}$ be a random vector uniformly distributed on the unit sphere $\mathbb S^{n-1}$ in $\mathbb R^n$. Consider the projection of the uniform distribution on the cube $[-1,1]^n$ to the line spanned by ${\mit\Theta}^{(n)}$. …
The origin of the notion of approximate derivative and, in turn, approximate continuity is closely related with integration theory. The purpose of this chapter is to recollect the lines of development of the theory of generalized integration in connection with approximate continuity. Among the results surveyed, some recent results of the authors are mentioned.
The arithmetic function $r_k(n)$ counts the number of ways to write a natural number n as a sum of two kth powers (k ≥ 2 fixed). The investigation of the asymptotic behaviour of the Dirichlet summatory function of $r_k(n)$ leads in a natural way to a cert…
What actuaries call cash flow testing is a large-scale simulation pitting a company's current policy obligations against future earnings based on interest rates. While life contingency issues associated with contract payoff are a mainstay of the actuarial sciences, modeling the random fluctuations of US Treasury rates is less studied. Furthermore, applying standard simulation techniques, such as the Monte Carlo method, to actual multi-billion dollar companies produces a simulation that can be computationally prohibitive. In practice, only hundreds of sample paths can be considered, not the usual hundreds of thousands one might expect for a simulation of this complexity. Hence, insurance companies have a desire to accelerate the convergence of the estimation procedure. The paper reports the results of cash flow testing simulations performed for Conseco L.L.C. using so-called quasi-Monte Carlo techniques, in which pseudo-random number generation is replaced with deterministic low discrepancy sequences. It was found that, by judicious choice of subsequences, the quasi-Monte Carlo method provided a consistently tighter estimate than the traditional methods for a fixed, small number of sample paths. The techniques used to select these subsequences are discussed.
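A hedged sketch of the substitution the paper describes: the same path functional estimated once with pseudo-random uniforms and once with a scrambled Sobol' low-discrepancy sequence. The rate model below is a toy stand-in, not Conseco's:

```python
import numpy as np
from scipy.stats import qmc, norm

# Toy "path" functional driven by d independent normal shocks per path
# (a crude random-walk rate model, purely for illustration).
d, n = 16, 512

def payoff(u):                                    # u: (n, d) uniforms in (0, 1)
    z = norm.ppf(u)                               # map to standard normal shocks
    rates = 0.05 + 0.01 * np.cumsum(z, axis=1)    # random-walk interest rates
    return np.maximum(rates[:, -1], 0.0)          # terminal-rate functional

mc_u = np.random.default_rng(2).random((n, d))          # pseudo-random paths
qmc_u = qmc.Sobol(d, scramble=True, seed=2).random(n)   # low-discrepancy paths
print("MC estimate :", payoff(mc_u).mean())
print("QMC estimate:", payoff(qmc_u).mean())
```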
Abstract Sobol’ sequences are widely used for quasi-Monte Carlo methods that arise in financial applications. Sobol’ sequences have parameter values called direction numbers, which are freely chosen by the user, so there are several implementations of Sobol’ sequence generators. The aim of this paper is to provide a comparative study of (non-commercial) high-dimensional Sobol’ sequences by calculating financial models. Additionally, we implement the Niederreiter sequence (in base 2) with a slight modification, that is, we reorder the rows of the generating matrices, and analyze and compare it with the Sobol’ sequences.
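To make "direction numbers" concrete, here is a minimal one-dimensional Sobol'-style generator in the Antonov–Saleev (Gray code) form. The primitive polynomial x² + x + 1 and initial values m₁ = 1, m₂ = 3 are one common choice; production implementations tabulate such values per dimension:

```python
BITS = 30

def direction_numbers(poly_coeffs=(1,), m=(1, 3), bits=BITS):
    """Extend the initial m-values by the Sobol' recurrence; return the
    direction numbers v_j = m_j / 2^j as fixed-point integers."""
    s = len(m)                                    # degree of the polynomial
    m = list(m)
    for j in range(s, bits):
        new = m[j - s] ^ (m[j - s] << s)          # m_{j-s} and 2^s m_{j-s} terms
        for k, a in enumerate(poly_coeffs, 1):    # middle terms 2^k a_k m_{j-k}
            if a:
                new ^= m[j - k] << k
        m.append(new)
    return [mj << (bits - j) for j, mj in enumerate(m, 1)]

def sobol_1d(n):
    v = direction_numbers()
    x, out = 0, []
    for i in range(n):
        c = (~i & (i + 1)).bit_length() - 1   # index of the lowest zero bit of i
        x ^= v[c]                             # Gray-code update of the point
        out.append(x / 2**BITS)
    return out

print(sobol_1d(8))   # [0.5, 0.25, 0.75, 0.625, 0.125, 0.875, 0.375, 0.9375]
```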
Matsumoto, Saito, and Matoba recently proposed the Walsh figure of merit (WAFOM), which is a computable criterion for quasi-Monte Carlo point sets using digital nets. Several algorithms have been proposed for finding low-WAFOM point sets. In the existing algorithms, the number of points is fixed in advance, but extensible point sets are preferred in some applications. In this paper, we propose a random search algorithm for extensible low-WAFOM point sets. For this, we introduce a method that uses lookup tables to compute WAFOM faster. Numerical results show that our extensible low-WAFOM point sets are comparable with Niederreiter–Xing sequences for some low-dimensional and smooth test functions.
Determinantal point processes (DPPs) are elegant probabilistic models of repulsion that arise in quantum physics and random matrix theory. In contrast to traditional structured models like Markov random fields, which become intractable and hard to approximate in the presence of negative correlations, DPPs offer efficient and exact algorithms for sampling, marginalization, conditioning, and other inference tasks. While they have been studied extensively by mathematicians, giving rise to a deep and beautiful theory, DPPs are relatively new in machine learning. Determinantal Point Processes for Machine Learning provides a comprehensible introduction to DPPs, focusing on the intuitions, algorithms, and extensions that are most relevant to the machine learning community, and shows how DPPs can be applied to real-world applications like finding diverse sets of high-quality search results, building informative summaries by selecting diverse sentences from documents, modeling non-overlapping human poses in images or video, and automatically building timelines of important news stories. It presents the general mathematical background to DPPs along with a range of modeling extensions, efficient algorithms, and theoretical results that aim to enable practical modeling and learning.
Financial derivatives have developed rapidly over the past few decades due to their risk-averse nature, with options being the preferred financial derivatives due to their flexible contractual mechanisms, particularly Asian options. The Black–Scholes stock option pricing model is often used in conjunction with Monte Carlo simulations for option pricing. However, the Black–Scholes model assumes that the volatility of asset returns is constant, which does not square with practical financial markets. Additionally, Monte Carlo simulation suffers from slow error convergence. To address these issues, we first correct the asset volatility in the Black–Scholes model using a GARCH model. Then, the low error convergence rate of the Monte Carlo method is improved using variance reduction techniques. Meanwhile, the quasi-Monte Carlo approach based on low discrepancy sequences is used to refine the error convergence rate. We also provide a simulation experiment and result analysis to validate the effectiveness of our proposed method.
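A compact sketch of the quasi-Monte Carlo leg of such a pipeline: pricing an arithmetic Asian call under Black–Scholes dynamics with Sobol' points. The volatility is fixed here; in the paper's setup a GARCH model would supply sigma instead, and all parameter values below are illustrative:

```python
import numpy as np
from scipy.stats import qmc, norm

S0, K, r, sigma, T, steps, n = 100.0, 100.0, 0.03, 0.2, 1.0, 32, 2**12
dt = T / steps

u = qmc.Sobol(steps, scramble=True, seed=7).random(n)  # low-discrepancy uniforms
z = norm.ppf(u)                                        # Gaussian increments
log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
S = S0 * np.exp(log_paths)                             # GBM price paths
payoff = np.maximum(S.mean(axis=1) - K, 0.0)           # arithmetic Asian call
print("QMC price:", np.exp(-r * T) * payoff.mean())
```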
Abstract This article introduces the principles underlying the simulation of the probability distributions that typically characterize the life/failure processes of engineered components and systems. The basic procedures for sampling random numbers are reviewed and examples are provided with reference to the uniform, exponential, and Weibull distributions that are among the most commonly used probability distributions in reliability, availability, and safety analysis.
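The basic sampling procedure for these distributions is the inverse-transform method; a sketch for the exponential and Weibull cases, with arbitrarily chosen parameters:

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(0)
u = rng.random(100_000)                           # U(0, 1) samples

lam = 2.0                                         # exponential rate
t_exp = -np.log(1.0 - u) / lam                    # inverse CDF of Exp(lam)

k, scale = 1.5, 3.0                               # Weibull shape and scale
t_wbl = scale * (-np.log(1.0 - u)) ** (1.0 / k)   # inverse CDF of Weibull(k, scale)

print(t_exp.mean(), 1 / lam)                      # both ≈ 0.5
print(t_wbl.mean(), scale * gamma(1 + 1 / k))     # sample mean ≈ analytic mean
```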
Numerical simulation models ever more complex phenomena. Such problems, often high-dimensional, demand sophisticated codes that are costly in computing time. Systematically resorting to the simulator then becomes impractical. The preferred approach consists in defining a small number of simulations organized according to a numerical experimental design, from which a response surface is fitted to approximate the simulator. We consider here designs generated by deterministic simulators in the exploratory phase, i.e. when no prior knowledge is available. The designs therefore require certain properties, such as filling the space and a good distribution of the points in projection. Two indicators quantifying the intrinsic quality of designs have been developed. The key point of this work concerns a design-construction procedure based on simulating samples from a probability distribution by a Markov chain Monte Carlo method.
In the perspective of estimating main effects of model inputs, two approaches are studied to iteratively construct replicated designs based on Sobol' sequences. Space-filling properties of the resulting designs are studied based on two criteria.
Abstract The substantial computational cost of high-fidelity models in numerical hemodynamics has, so far, relegated their use mainly to offline treatment planning. New breakthroughs in data-driven architectures and optimization techniques for fast surrogate modeling provide an exciting opportunity to overcome these limitations, enabling the use of such technology for time-critical decisions. We discuss an application to the repair of multiple stenosis in peripheral pulmonary artery disease through either transcatheter pulmonary artery rehabilitation or surgery, where it is of interest to achieve desired pressures and flows at specific locations in the pulmonary artery tree, while minimizing the risk for the patient. Since different degrees of success can be achieved in practice during treatment, we formulate the problem in probability, and solve it through a sample-based approach. We propose a new offline–online pipeline for probabilistic real-time treatment planning which combines offline assimilation of boundary conditions, model reduction, and training dataset generation with online estimation of marginal probabilities, possibly conditioned on the degree of augmentation observed in already repaired lesions. Moreover, we propose a new approach for the parametrization of arbitrarily shaped vascular repairs through iterative corrections of a zero-dimensional approximant. We demonstrate this pipeline for a diseased model of the pulmonary artery tree available through the Vascular Model Repository.
Spatial agent-based models are frequently used to investigate the evolution of solid tumours subject to localized cell-cell interactions and microenvironmental heterogeneity. As spatial genomic, transcriptomic and proteomic technologies gain traction, spatial computational models are predicted to become ever more necessary for making sense of complex clinical and experimental data sets, for predicting clinical outcomes, and for optimizing treatment strategies. Here we present a non-technical step by step guide to developing such a model from first principles. Stressing the importance of tailoring the model structure to that of the biological system, we describe methods of increasing complexity, from the basic Eden growth model up to off-lattice simulations with diffusible factors. We examine choices that unavoidably arise in model design, such as implementation, parameterization, visualization and reproducibility. Each topic is illustrated with examples drawn from recent research studies and state of the art modelling platforms. We emphasize the benefits of simpler models that aim to match the complexity of the phenomena of interest, rather than that of the entire biological system. Our guide is aimed at both aspiring modellers and other biologists and oncologists who wish to understand the assumptions and limitations of the models on which major cancer studies now so often depend.
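As a flavor of the simplest model class mentioned, here is a minimal Eden growth model on a 2-D lattice (lattice size and step count are arbitrary illustrative choices):

```python
import random
import numpy as np

# Eden growth: at each step, fill one uniformly chosen empty lattice
# neighbour of the current tumour mass.
random.seed(0)
N = 61
grid = np.zeros((N, N), dtype=bool)
grid[N // 2, N // 2] = True                       # single seed cell

for _ in range(400):
    border = {(i + di, j + dj)
              for i, j in zip(*np.nonzero(grid))
              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
              if 0 <= i + di < N and 0 <= j + dj < N and not grid[i + di, j + dj]}
    grid[random.choice(sorted(border))] = True    # grow by one cell

print(grid.sum(), "cells in the cluster")
```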
Summary We study the properties of points in $[0,1]^d$ generated by applying Hilbert's space filling curve to uniformly distributed points in [0, 1]. For deterministic sampling we obtain a discrepancy of $O(n^{-1/d})$ for $d \geq 2$. For random stratified sampling, and scrambled van der Corput points, we derive a mean-squared error of $O(n^{-1-2/d})$ for integration of Lipschitz continuous integrands, when $d \geq 3$. These rates are the same as those obtained by sampling on d-dimensional grids and they show a deterioration with increasing d. The rate for Lipschitz functions is, however, the best possible at that level of smoothness and is better than plain independent and identically distributed sampling. Unlike grids, space filling curve sampling provides points at any desired sample size, and the van der Corput version is extensible in n. We also introduce a class of piecewise Lipschitz functions whose discontinuities are in rectifiable sets described via Minkowski content. Although these functions may have infinite variation in the sense of Hardy and Krause, they can be integrated with a mean-squared error of $O(n^{-1-1/d})$. It was previously known only that the rate was $o(n^{-1})$. Other space filling curves, such as those due to Sierpinski and Peano, also attain these rates, whereas upper bounds for the Lebesgue curve are somewhat worse, as if the dimension were $\log_2(3)$ times as high.
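A sketch of the sampling scheme for d = 2, using the standard bit-twiddling index-to-point construction of the Hilbert curve (curve order and sample size below are illustrative):

```python
def hilbert_point(h, order):
    """Map index h in [0, 4**order) to (x, y) cell coordinates along the
    Hilbert curve of the given order (standard quadrant-rotation recursion)."""
    x = y = 0
    s = 1
    while s < 2**order:
        rx = 1 & (h // 2)
        ry = 1 & (h ^ rx)
        if ry == 0:                       # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        h //= 4
        s *= 2
    return x, y

# Push stratified points t in [0, 1) through the curve into [0, 1]^2.
order, n = 10, 8
side = 2**order
for i in range(n):
    t = (i + 0.5) / n
    x, y = hilbert_point(int(t * 4**order), order)
    print((x + 0.5) / side, (y + 0.5) / side)
```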
The valuation of large variable annuity (VA) portfolios is an important problem of interest, not only because of its practical relevance but also because of its theoretical significance. This is prompted by the phenomenon that many existing sophisticated algorithms are typically efficient at valuing a single VA policy but they are not scalable to valuing large VA portfolios consisting of hundreds of thousands of policies. As a result, this sparks a new research direction exploiting machine learning methods (such as data clustering, nearest neighbor kriging, neural network) on providing more efficient algorithms to estimate the market values and sensitivities of large VA portfolios. The idea underlying these approximation methods is to first determine a set of VA policies that is "representative" of the entire large VA portfolio. Then the values from these representative VA policies are used to estimate the respective values of the entire large VA portfolio. A substantial reduction in computation time is possible because we only need to value the representative set of VA policies, which typically is a much smaller subset of the entire large VA portfolio. Ideally the large VA portfolio valuation method should adequately address issues such as (1) the complexity of the proposed algorithm; (2) the cost of finding representative VA policies; (3) the cost of the initial training set, if any; (4) the cost of estimating the entire large VA portfolio from the representative VA policies; (5) the computer memory constraint; and (6) the portability to other large VA portfolio valuations. Most of the existing large VA portfolio valuation methods do not necessarily reflect all of these issues, particularly the property of portability, which ensures that we only need to incur the start-up time once and the same representative VA policies can be recycled to valuing other large portfolios of VA policies. Motivated by their limitations and by exploiting the greater uniformity of the randomized low discrepancy sequence and the Taylor expansion, we show that our proposed method, a green mesh method, addresses all of the above issues. The numerical experiment further highlights its simplicity, efficiency, portability, and, more important, its real-time valuation application.
Probabilistic numerics aims to study numerical algorithms from a stochastic perspective. This field has recently evolved into a surging interdisciplinary research area (between numerical approximation and probability theory) attracting much attention from the data science community at large. Motivated by this development, we incorporate a stochastic viewpoint into our study of multivariate quasi-interpolation for irregularly spaced data, a subject traditionally investigated in the realm of deterministic function approximation. We first construct quasi-interpolants directly from irregularly spaced data and show their optimality in terms of a certain quadratic functional on a weighted Hilbert space. We then derive the approximation order of our quasi-interpolants via a two-step procedure. In the first step, we approximate a target function by scaled integral operators (with an error term referred to as bias). In the second step, we discretize the underlying convolution integral at the irregularly spaced data sites (with an error term called variance). The final approximation order is obtained as an optimal trade-off between bias and variance. We also show that the scale parameter of the integral operators governs the regularization effect of the quasi-interpolation scheme, and find an optimal parameter value range to fine-tune the subtle balance between bias and variance under some additional assumptions on the distribution of the data sites. It is worth noting that evaluation of integrals is not needed in the implementation of our quasi-interpolation scheme, and that our quasi-interpolants are easy to construct. Numerical simulation results, including approximating the classical bore hole test function in eight dimensional space, provide evidence that our quasi-interpolation scheme is robust and capable of providing accurate generalizations.
Performance of product-form multiclass queuing networks can be determined from normalization constants. For large models, the evaluation of these performance metrics is not possible because of the required amount of computer resources (either by using normalization constants or by using MVA approaches). Such large models can be evaluated with Monte Carlo summation and integration methods. This article proposes two cluster sampling Monte Carlo techniques to deal with such models. First, for a particular type of network, we propose a variance reduction technique based on antithetic variates. It leads to an improvement of Ross, Tsang and Wang's algorithm which is designed to analyze the same family of models. Second, for a more general class of models, we use a mixture of Monte Carlo and quasi-Monte Carlo methods to improve the estimate with respect to Monte Carlo alone.
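The antithetic-variates idea in its simplest, generic form (not the paper's network estimator): pair each uniform draw u with 1 − u, so that for a monotone integrand the pair is negatively correlated and the averaged estimator has lower variance:

```python
import numpy as np

f = lambda u: np.exp(u)                 # toy monotone integrand, exact mean e - 1
rng = np.random.default_rng(3)
u = rng.random(50_000)

plain = f(np.concatenate([u, rng.random(50_000)]))   # 100k independent evaluations
anti = 0.5 * (f(u) + f(1.0 - u))                     # 50k antithetic pairs (100k evals)

print(plain.mean(), plain.var() / plain.size)        # estimate, variance of the mean
print(anti.mean(), anti.var() / anti.size)           # same cost, smaller variance
```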
We propose a deep supervised learning algorithm based on low-discrepancy sequences as the training set. By a combination of theoretical arguments and extensive numerical experiments we demonstrate that the proposed algorithm significantly outperforms standard deep learning algorithms that are based on randomly chosen training data, for problems in moderately high dimensions. The proposed algorithm provides an efficient method for building inexpensive surrogates for many underlying maps in the context of scientific computing.
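A sketch of the training-set construction the algorithm rests on: replace i.i.d. random inputs with a low-discrepancy design (the target map f below is a hypothetical stand-in for an expensive scientific-computing map):

```python
import numpy as np
from scipy.stats import qmc

d, n_train = 4, 2**10
f = lambda X: np.sin(2 * np.pi * X[:, 0]) * np.exp(-X[:, 1:].sum(axis=1))

X_train = qmc.Sobol(d, scramble=True, seed=0).random(n_train)  # Sobol' design
y_train = f(X_train)
# ...train any regression model on (X_train, y_train); the claim under test is
# that this beats the same model trained on i.i.d. random inputs of equal size.
```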
Generative moment matching networks (GMMNs) are introduced for generating approximate quasi-random samples from multivariate models with any underlying copula to compute estimates with variance reduction. So far, quasi-random sampling for multivariate distributions required a careful design, exploiting specific properties (such as conditional distributions) of the implied parametric copula or the underlying quasi-Monte Carlo (QMC) point set, and was only tractable for a small number of models. Using GMMNs allows one to construct approximate quasi-random samples for a much larger variety of multivariate distributions without such restrictions, including empirical ones from real data with dependence structures not well captured by parametric copulas. Once trained on pseudo-random samples from a parametric model or on real data, these neural networks only require a multivariate standard uniform randomized QMC point set as input and are thus fast in estimating expectations of interest under dependence with variance reduction. Numerical examples are considered to demonstrate the approach, including applications inspired by risk management practice.
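For contrast, a sketch of the "careful design" the paper generalizes away from: pushing a randomized QMC point set through a parametric (here Gaussian) copula and fixed marginals. The correlation matrix and marginals are illustrative choices, not from the paper:

```python
import numpy as np
from scipy.stats import qmc, norm

R = np.array([[1.0, 0.7],
              [0.7, 1.0]])                             # target copula correlation
L = np.linalg.cholesky(R)

u = qmc.Sobol(2, scramble=True, seed=4).random(2**10)  # RQMC uniforms
z = norm.ppf(u) @ L.T                                  # correlated standard normals
v = norm.cdf(z)                                        # Gaussian-copula uniforms
x = np.column_stack([-np.log(1 - v[:, 0]),             # Exp(1) marginal
                     norm.ppf(v[:, 1], loc=2, scale=0.5)])  # N(2, 0.5) marginal
```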
Data for complex plasma–wall interactions require long-running and expensive computer simulations of codes like EIRENE or SOLPS. Furthermore, the number of input parameters is large, which results in a low coverage of the (physical) parameter space. Unpredictable occasions of outliers create a need to conduct the exploration of this multi-dimensional space using robust analysis tools. We restate the Gaussian-process (GP) method as a Bayesian adaptive exploration method for establishing surrogate surfaces in the variables of interest. On this basis, we complete the analysis by the Student-t process (TP) method in order to improve the robustness of the result with respect to outliers. The most obvious difference between both methods shows up in the marginal likelihood for the hyperparameters of the covariance function where the TP method features a broader marginal probability distribution in the presence of outliers.
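A bare-bones Gaussian-process surrogate with a squared-exponential kernel (hyperparameters are hypothetical); the Student-t process variant would change the likelihood for robustness to outliers, which this sketch does not attempt:

```python
import numpy as np

def k(a, b, ell=0.3, s2=1.0):
    """Squared-exponential kernel on 1-D inputs."""
    return s2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

x = np.linspace(0, 1, 8)                 # training inputs
y = np.sin(2 * np.pi * x)
y[3] += 2.0                              # inject one outlier
xs = np.linspace(0, 1, 100)              # prediction grid

K = k(x, x) + 1e-2 * np.eye(len(x))      # kernel matrix with noise term
mean = k(xs, x) @ np.linalg.solve(K, y)  # GP posterior mean (pulled by the outlier)
```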
To improve the efficiency of Monte Carlo estimation, practitioners are turning to biased Markov chain Monte Carlo procedures that trade off asymptotic exactness for computational speed. The reasoning is sound: a reduction in variance due to more rapid sampling can outweigh the bias introduced. However, the inexactness creates new challenges for sampler and parameter selection, since standard measures of sample quality like effective sample size do not account for asymptotic bias. To address these challenges, we introduce a new computable quality measure based on Stein's method that quantifies the maximum discrepancy between sample and target expectations over a large class of test functions. We use our tool to compare exact, biased, and deterministic sample sequences and illustrate applications to hyperparameter selection, convergence rate assessment, and quantifying bias-variance tradeoffs in posterior inference.
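One concrete instance of such a computable Stein-based measure is the kernel Stein discrepancy; a sketch for a one-dimensional standard normal target (score s(x) = −x) with an inverse multiquadric kernel, both my illustrative choices:

```python
import numpy as np

def ksd(x):
    """V-statistic kernel Stein discrepancy against N(0, 1),
    with k(x, y) = (1 + (x - y)^2)^(-1/2)."""
    s = -x                                        # target score at each sample
    u = x[:, None] - x[None, :]
    q = 1.0 + u**2
    kk = q**-0.5
    dkdx = -u * q**-1.5                           # ∂k/∂x
    dkdy = u * q**-1.5                            # ∂k/∂y
    dkdxdy = q**-1.5 - 3.0 * u**2 * q**-2.5       # ∂²k/∂x∂y
    h = (dkdxdy + s[:, None] * dkdy + s[None, :] * dkdx
         + s[:, None] * s[None, :] * kk)          # Stein kernel matrix
    return np.sqrt(h.mean())

rng = np.random.default_rng(5)
print(ksd(rng.normal(size=500)))          # small: sample matches the target
print(ksd(rng.normal(1.0, 1.0, 500)))     # larger: biased sample is flagged
```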
In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo, to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen–Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.
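A sketch of the input model: sampling a log-Gaussian conductivity field via a truncated Karhunen–Loève expansion, here approximated by a discrete eigendecomposition of an exponential covariance on a 1-D grid (covariance parameters and truncation level are illustrative):

```python
import numpy as np

n, m = 200, 20                                        # grid size, KL modes kept
x = np.linspace(0, 1, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)    # exponential covariance

lam, phi = np.linalg.eigh(C)                          # ascending eigenpairs
lam, phi = lam[::-1][:m], phi[:, ::-1][:, :m]         # keep the m largest

xi = np.random.default_rng(6).normal(size=m)          # random KL coefficients
log_k = phi @ (np.sqrt(lam) * xi)                     # Gaussian field sample
k_field = np.exp(log_k)                               # hydraulic conductivity
```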
Monte Carlo simulation has become an essential tool for pricing and risk estimation in financial applications. It allows finance professionals to incorporate uncertainty in financial models, and to consider additional layers of complexity that are difficult to incorporate in analytical models. The main idea of Monte Carlo simulation is to represent the uncertainty in market variables through scenarios, and to evaluate parameters of interest that depend on these market variables in complex ways. The advantage of such an approach is that it can easily capture the dynamics of underlying processes and the otherwise complex effects of interactions among market variables. A substantial amount of research in recent years has been dedicated to making scenario generation more accurate and efficient, and a number of sophisticated computational techniques are now available to the financial modeler.
In this paper, we provide a framework to study the dependence structure of sampling schemes such as those produced by randomized quasi-Monte Carlo methods. The main goal of this new framework is to determine conditions under which the negative dependence structure of a sampling scheme enables the construction of estimators with reduced variance compared to Monte Carlo estimators. To do this, we establish a generalization of the well-known Hoeffding’s lemma—expressing the covariance of two random variables as an integral of the difference between their joint distribution function and the product of their marginal distribution functions—that is particularly well suited to study such sampling schemes. We also provide explicit formulas for the joint distribution of pairs of points randomly chosen from a scrambled (0, m, s)-net. In addition, we provide variance bounds establishing the superiority of dependent sampling schemes over Monte Carlo in a few different setups. In particular, we show that a scrambled (0, m, 2)-net yields an estimator with variance no larger than a Monte Carlo estimator for functions monotone in each variable.
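An empirical sketch of the final claim, comparing estimator variance under scrambled two-dimensional Sobol' points (whose first 2^m points form a (0, m, 2)-net in base 2) against plain Monte Carlo, for a function monotone in each variable:

```python
import numpy as np
from scipy.stats import qmc

f = lambda p: p[:, 0] * p[:, 1]          # monotone in each coordinate
m, reps = 7, 200
est_net, est_mc = [], []
rng = np.random.default_rng(8)

for i in range(reps):
    net = qmc.Sobol(2, scramble=True, seed=i).random(2**m)   # scrambled net
    est_net.append(f(net).mean())
    est_mc.append(f(rng.random((2**m, 2))).mean())           # Monte Carlo

print("scrambled net variance:", np.var(est_net))
print("Monte Carlo variance  :", np.var(est_mc))
```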
Halton sequences ([14], 1960) have always been quite popular with practitioners, in part because of their intuitive definition and ease of implementation. However, in their original form, these sequences have also been known for their inadequacy to integrate functions in moderate to large dimensions, in which case (t, s)-sequences like the Sobol' sequence [32] are usually preferred. To overcome this problem, one possible approach is to include permutations in the definition of Halton sequences, thereby obtaining generalized Halton sequences, an idea that goes back almost thirty years [3, 6] and that has been studied by many researchers in the last few years. In parallel to these efforts, an important improvement in the upper bounds for the discrepancy of Halton sequences has been made by Atanassov in [1]. Together, these two lines of research have revived the interest in Halton sequences, and the aim of this paper is to provide an up-to-date study of different generalized Halton sequences and to compare them through an extensive set of numerical experiments. In addition, we propose a new generalized Halton sequence that appears to be competitive with the best ones proposed so far.
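A sketch of the construction: the radical-inverse (van der Corput) function with an optional digit permutation per base; the permutations shown are illustrative, not the optimized ones proposed in the paper:

```python
def radical_inverse(i, base, perm=None):
    """Reflect the base-b digits of i about the radix point,
    optionally permuting each digit first."""
    x, f = 0.0, 1.0 / base
    while i > 0:
        d = i % base
        x += f * (perm[d] if perm else d)
        i //= base
        f /= base
    return x

def halton(n, bases=(2, 3), perms=None):
    """First n points of a (generalized) Halton sequence, one base per axis."""
    perms = perms or [None] * len(bases)
    return [[radical_inverse(i, b, p) for b, p in zip(bases, perms)]
            for i in range(1, n + 1)]

print(halton(4))                               # plain Halton, bases 2 and 3
print(halton(4, perms=[(0, 1), (0, 2, 1)]))    # generalized: digits permuted
```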
Neal Zierler, "Linear Recurring Sequences", Journal of the Society for Industrial and Applied Mathematics, Vol. 7, No. 1 (1959), pp. 31–48, https://doi.org/10.1137/0107003. Submitted 19 December 1957.