This paper is a contemporary review of QMC ('quasi-Monte Carlo') methods, that is, equal-weight rules for the approximate evaluation of high-dimensional integrals over the unit cube $[0,1]^s$, where s may be large, or even infinite. After a general introduction, the paper surveys recent developments in lattice methods, digital nets, and related themes. Among those recent developments are methods of construction of both lattices and digital nets, to yield QMC rules that have a prescribed rate of convergence for sufficiently smooth functions, and ideally also guaranteed slow growth (or no growth) of the worst-case error as s increases. A crucial role is played by parameters called 'weights', since a careful use of the weight parameters is needed to ensure that the worst-case errors in an appropriately weighted function space are bounded, or grow only slowly, as the dimension s increases. Important tools for the analysis are weighted function spaces, reproducing kernel Hilbert spaces, and discrepancy, all of which are discussed with an appropriate level of detail.
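To make the central object concrete, here is a minimal sketch (not from the paper) of an equal-weight QMC rule: a rank-1 lattice rule with $N$ points in dimension $s = 5$, applied to a smooth test integrand. The generating vector z below is an arbitrary illustrative choice, not an optimized one.

```python
import numpy as np

def lattice_points(N, z):
    """Rank-1 lattice points x_k = frac(k * z / N) for k = 0..N-1."""
    k = np.arange(N).reshape(-1, 1)
    return (k * np.asarray(z) / N) % 1.0

def qmc_rule(f, points):
    """Equal-weight rule: the plain average of f over the point set."""
    return np.mean([f(x) for x in points])

# Smooth test integrand on [0,1]^5 with exact integral 1 (each factor integrates to 1).
f = lambda x: np.prod(1.0 + (x - 0.5) / (1.0 + np.arange(len(x))) ** 2)
z = [1, 182667, 469891, 498753, 110745]   # illustrative generating vector, s = 5
print(qmc_rule(f, lattice_points(2**13, z)))
```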
An algorithm to generate Sobol' sequences to approximate integrals in up to 40 dimensions has been previously given by Bratley and Fox in Algorithm 659. Here, we provide more primitive polynomials and "direction numbers" so as to allow the generation of Sobol' sequences to approximate integrals in up to 1111 dimensions. The direction numbers given generate Sobol' sequences that satisfy Sobol's so-called Property A.
Direction numbers for generating Sobol' sequences that satisfy the so-called Property A in up to 1111 dimensions have previously been given in Joe and Kuo [ACM Trans. Math. Software, 29 (2003), pp. 49–57]. However, these Sobol' sequences may have poor two-dimensional projections. Here we provide a new set of direction numbers alleviating this problem. These are obtained by treating Sobol' sequences in d dimensions as $(t,d)$-sequences and then optimizing the t-values of the two-dimensional projections. Our target dimension is 21201.
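As a minimal usage sketch: SciPy's Sobol' generator ships with the Joe–Kuo direction numbers, supporting up to 21201 dimensions (the target dimension mentioned above). The snippet below estimates an expected value for an arbitrary 100-dimensional test function; the function itself is only illustrative.

```python
import numpy as np
from scipy.stats import qmc

d = 100
sobol = qmc.Sobol(d=d, scramble=True, seed=42)  # scrambling gives an unbiased estimator
points = sobol.random_base2(m=13)               # 2**13 points; powers of 2 keep the balance properties
f = lambda x: np.cos(np.linalg.norm(x, axis=1))
print(f(points).mean())
```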
In this paper quasi-Monte Carlo (QMC) methods are applied to a class of elliptic partial differential equations (PDEs) with random coefficients, where the random coefficient is parametrized by a countably infinite number of terms in a Karhunen--Loève expansion. Models of this kind appear frequently in numerical models of physical systems, and in uncertainty quantification. The approach uses a QMC method to estimate expected values of linear functionals of the exact or approximate solution of the PDE, with the expected value considered as an infinite dimensional integral in the parameter space corresponding to the randomness induced by the random coefficient. The error analysis, arguably the first rigorous application of the modern theory of QMC in weighted spaces, exploits the regularity with respect to both the physical variables (the variables in the physical domain) and the parametric variables (the parameters corresponding to randomness). In the weighted-space theory of QMC methods, "weights," describing the varying difficulty of different subsets of the variables, are introduced in order to make sure that the high-dimensional integration problem is tractable. It turns out that the weights arising from the present analysis are of a nonstandard kind, being of neither product nor order dependent form, but instead a hybrid of the two---we refer to these as "product and order dependent weights," or "POD weights" for short. These POD weights are of a simple enough form to permit a component-by-component construction of a randomly shifted lattice rule that has optimal convergence properties for the given weighted space setting. If the terms in the expansion for the random coefficient have an appropriate decay property, and if we choose POD weights that minimize a certain upper bound on the error, then the solution of the PDE belongs to the joint function space needed for the analysis, and the QMC error (in the sense of a root-mean-square error averaged over shifts) is of order $\mathcal{O}(N^{-1+\delta})$ for arbitrary $\delta>0$, where $N$ denotes the number of sampling points in the parameter space. Moreover, this convergence rate and slower convergence rates under more relaxed conditions are achieved under conditions similar to those found recently by Cohen, De Vore, and Schwab [Found. Comput. Math., 10 (2010), pp. 615--646] for the same model by best $N$-term approximation. We analyze the impact of a finite element (FE) discretization on the overall efficiency of the scheme, in terms of accuracy versus overall cost, with results that are comparable to those of the best $N$-term approximation.
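For reference, in the standard notation of the weighted-space literature (a general statement, not quoted from the paper), POD weights attach to each subset $\mathfrak{u} \subseteq \{1,\ldots,s\}$ of variables a weight of the form
$$\gamma_{\mathfrak{u}} \;=\; \Gamma_{|\mathfrak{u}|} \prod_{j \in \mathfrak{u}} \gamma_j,$$
where the factor $\Gamma_k$ depends only on the cardinality $k = |\mathfrak{u}|$ (the "order dependent" part) and the $\gamma_j$ form the "product" part; purely product weights correspond to $\Gamma_k \equiv 1$.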
We present formulas that allow us to decompose a function $f$ of $d$ variables into a sum of $2^d$ terms $f_{\mathbf {u}}$ indexed by subsets $\mathbf {u}$ of $\{1,\ldots ,d\}$, where each term $f_{\mathbf {u}}$ depends only on the variables with indices in $\mathbf {u}$. The decomposition depends on the choice of $d$ commuting projections $\{P_j\}_{j=1}^d$, where $P_j(f)$ does not depend on the variable $x_j$. We present an explicit formula for $f_{\mathbf {u}}$, which is new even for the …
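For orientation, a standard statement of such a decomposition (paraphrased, not quoted from the paper) is: for commuting projections $P_1,\ldots,P_d$,
$$f \;=\; \sum_{\mathbf{u} \subseteq \{1,\ldots,d\}} f_{\mathbf{u}}, \qquad f_{\mathbf{u}} \;=\; \Big(\prod_{j \in \mathbf{u}} (I - P_j)\Big)\Big(\prod_{j \notin \mathbf{u}} P_j\Big) f,$$
which sums to $f$ because $\prod_{j=1}^{d}\big((I-P_j)+P_j\big) = I$; each $f_{\mathbf{u}}$ is independent of $x_j$ for $j \notin \mathbf{u}$ and is annihilated by $P_j$ for $j \in \mathbf{u}$.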
Shifted rank-1 lattice rules, a special class of quasi-Monte Carlo methods, have recently been proposed by the present authors for the integration of functions belonging to certain "weighted" Sobolev spaces. The shifts in these rules were generated in a deterministic manner. In contrast, in this paper we generate these shifts randomly. This allows probabilistic estimates for the error in a given integral. It also reduces the number of operations required to find the generating vectors for the underlying lattice rules component-by-component. The rules thus constructed achieve a worst-case strong tractability error bound in an average or probabilistic sense.
Lattice rules are a family of equal-weight cubature formulae for approximating high-dimensional integrals. By now it is well established that good generating vectors for lattice rules having n points can be constructed component-by-component for integrands belonging to certain weighted function spaces, and that they can achieve the optimal rate of convergence. Although the lattice rules constructed this way are extensible in dimension, they are not extensible in n; thus when n is changed the generating vector needs to be constructed anew. In this paper we introduce a new algorithm for constructing good generating vectors for embedded lattice rules which can be used for a range of n while still being extensible in dimension. By using an adaptation of the fast component-by-component construction algorithm (which makes use of fast Fourier transforms), we are able to obtain good generating vectors for thousands of dimensions and millions of points, under both product weight and order-dependent weight settings, at the cost of $\mathcal{O}(d n (\log n)^2)$ operations. With a sufficiently large number of points and good overall quality, these embedded lattice rules can be used for practical purposes in the same way as a low-discrepancy sequence. We show for a range of weight settings in an unanchored Sobolev space that our embedded lattice rules achieve the same (optimal) rate of convergence $\mathcal{O}(n^{-1+\delta})$, $\delta>0$, as those constructed for a fixed number of points, and that the implied constant gains only a factor of 1.30 to 1.55.
We develop and justify an algorithm for the construction of quasi–Monte Carlo (QMC) rules for integration in weighted Sobolev spaces; the rules so constructed are shifted rank-1 lattice rules. The parameters characterising the shifted lattice rule are found "component-by-component": the ($d+1$)-th component of the generator vector and the shift are obtained by successive $1$-dimensional searches, with the previous $d$ components kept unchanged. The rules constructed in this way are shown to achieve a strong tractability error bound in weighted Sobolev spaces. A search for $n$-point rules with $n$ prime and all dimensions 1 to $d$ requires a total cost of $O(n^3d^2)$ operations. This may be reduced to $O(n^3d)$ operations at the expense of $O(n^2)$ storage. Numerical values of parameters and worst-case errors are given for dimensions up to 40 and $n$ up to a few thousand. The worst-case errors for these rules are found to be much smaller than the theoretical bounds.
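The following is a minimal sketch of the component-by-component idea. As search criterion it uses the worst-case error in a weighted Korobov space with smoothness $\alpha = 1$ and product weights (a standard textbook criterion, rather than the shifted-Sobolev-space criterion of this paper), and it is the naive quadratic-cost version, not the fast FFT variant.

```python
import math
import numpy as np

def cbc(n, d, gamma):
    """Naive CBC construction of a lattice generating vector for prime n.

    Criterion: squared worst-case error in the weighted Korobov space with
    alpha = 1, where sum_{h != 0} e^{2 pi i h x} / h^2 = 2 pi^2 B_2({x})
    with the Bernoulli polynomial B_2(x) = x^2 - x + 1/6.
    """
    B2 = lambda x: x * x - x + 1.0 / 6.0
    k = np.arange(n)
    prod = np.ones(n)      # running product over chosen components, one entry per node k
    z = []
    for j in range(d):
        best_zj, best_err = None, math.inf
        for zj in range(1, n):                 # all candidates are coprime to prime n
            x = (k * zj % n) / n               # j-th coordinates of all n lattice nodes
            err = np.mean(prod * (1.0 + gamma[j] * 2.0 * math.pi**2 * B2(x))) - 1.0
            if err < best_err:
                best_zj, best_err = zj, err
        z.append(best_zj)
        x = (k * best_zj % n) / n
        prod *= 1.0 + gamma[j] * 2.0 * math.pi**2 * B2(x)
    return z

print(cbc(127, 5, gamma=[1.0 / (j + 1) ** 2 for j in range(5)]))
```

Each one-dimensional search keeps the previously chosen components fixed, exactly as described above; the fast variants reduce the cost of the inner loop by exploiting the circulant structure of the error criterion.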
This paper is a contemporary review of quasi-Monte Carlo (QMC) methods, that is, equal-weight rules for the approximate evaluation of high-dimensional integrals over the unit cube $[0,1]^s$. It first introduces the by-now standard setting of weighted Hilbert spaces of functions with square-integrable mixed first derivatives, and then indicates alternative settings, such as non-Hilbert spaces, that can sometimes be more suitable. Original contributions include the extension of the fast component-by-component (CBC) construction of lattice rules that achieve the optimal convergence order (a rate of almost $1/N$, where $N$ is the number of points, independently of dimension) to so-called "product and order dependent" (POD) weights, as seen in some recent applications. Although the paper has a strong focus on lattice rules, the function space settings are applicable to all QMC methods. Furthermore, the error analysis and construction of lattice rules can be adapted to polynomial lattice rules from the family of digital nets.
We construct quasi--Monte Carlo methods to approximate the expected values of linear functionals of Petrov--Galerkin discretizations of parametric operator equations which depend on a possibly infinite sequence of parameters. Such problems arise in the numerical solution of differential and integral equations with random field inputs. We analyze the regularity of the solutions with respect to the parameters in terms of the rate of decay of the fluctuations of the input field. If $p\in (0,1]$ denotes the “summability exponent” corresponding to the fluctuations in affine-parametric families of operators, then we prove that deterministic “interlaced polynomial lattice rules” of order $\alpha = \lfloor 1/p \rfloor+1$ in $s$ dimensions with $N$ points can be constructed using a fast component-by-component algorithm, in $\mathcal{O}(\alpha\,s\, N\log N + \alpha^2\,s^2 N)$ operations, to achieve a convergence rate of $\mathcal{O}(N^{-1/p})$, with the implied constant independent of $s$. This dimension-independent convergence rate is superior to the rate $\mathcal{O}(N^{-1/p+1/2})$ for $2/3\leq p\leq 1$, which was recently established for randomly shifted lattice rules under comparable assumptions. In our analysis we use a nonstandard Banach space setting and introduce “smoothness-driven product and order dependent” weights for which we develop a new fast component-by-component construction.
We introduce a new construction algorithm for digital nets for integration in certain weighted tensor product Hilbert spaces. The first weighted Hilbert space we consider is based on Walsh functions. Dick and Pillichshammer calculated the worst-case error for integration using digital nets for this space. Here we extend this result to a special construction method for digital nets based on polynomials over finite fields. This result allows us to find polynomials which yield a small worst-case error by computer search. We prove an upper bound on the worst-case error for digital nets obtained by such a search algorithm which shows that the convergence rate is best possible and that strong tractability holds under some condition on the weights. We extend the results for the weighted Hilbert space based on Walsh functions to weighted Sobolev spaces. In this case we use randomly digitally shifted digital nets. The construction principle is the same as before, only the worst-case error is slightly different. Again digital nets obtained from our search algorithm yield a worst-case error achieving the optimal rate of convergence and as before strong tractability holds under some condition on the weights. These results show that such a construction of digital nets yields the until now best known results of this kind and that our construction methods are comparable to the construction methods known for lattice rules. We conclude the article with numerical results comparing the expected worst-case error for randomly digitally shifted digital nets with those for randomly shifted lattice rules.
In this paper we present a rigorous cost and error analysis of a multilevel estimator based on randomly shifted Quasi-Monte Carlo (QMC) lattice rules for lognormal diffusion problems. These problems are motivated by uncertainty quantification problems in subsurface flow. We extend the convergence analysis in [Graham et al., Numer. Math. 2014] to multilevel Quasi-Monte Carlo finite element discretisations and give a constructive proof of the dimension-independent convergence of the QMC rules. More precisely, we provide suitable parameters for the construction of such rules that yield the required variance reduction for the multilevel scheme to achieve an $\varepsilon$-error with a cost of $\mathcal{O}(\varepsilon^{-\theta})$ with $\theta < 2$, and in practice even $\theta \approx 1$, for sufficiently fast decaying covariance kernels of the underlying Gaussian random field inputs. This confirms that the computational gains due to the application of multilevel sampling methods and the gains due to the application of QMC methods, both demonstrated in earlier works for the same model problem, are complementary. A series of numerical experiments confirms these gains. The results show that in practice the multilevel QMC method consistently outperforms both the multilevel MC method and the single-level variants even for nonsmooth problems.
A standard problem in uncertainty quantification and in computational statistics is the sampling of stationary Gaussian random fields with given covariance in a $d$-dimensional (physical) domain. In many applications it is sufficient to perform the sampling on a regular grid on a cube enclosing the physical domain, in which case the corresponding covariance matrix is nested block Toeplitz. After extension to a nested block circulant matrix, this can be diagonalized by FFT---the "circulant embedding method." Provided the circulant matrix is positive definite, this provides a finite expansion of the field in terms of a deterministic basis, with coefficients given by i.i.d. standard normals. In this paper we prove, under mild conditions, that the positive definiteness of the circulant matrix is always guaranteed, provided the enclosing cube is sufficiently large. We examine in detail the case of the Matérn covariance, and prove (for fixed correlation length) that, as $h_0\rightarrow 0$, positive definiteness is guaranteed when the random field is sampled on a cube of size order $(1 + \nu^{1/2} \log h_0^{-1})$ times larger than the size of the physical domain. (Here $h_0$ is the mesh spacing of the regular grid and $\nu$ the Matérn smoothness parameter.) We show that the sampling cube can become smaller as the correlation length decreases when $h_0$ and $\nu$ are fixed. Our results are confirmed by numerical experiments. We prove several results about the decay of the eigenvalues of the circulant matrix. These lead to the conjecture, verified by numerical experiment, that they decay with the same rate as the Karhunen--Loève eigenvalues of the covariance operator. The method analyzed here complements the numerical experiments for uncertainty quantification in porous media problems in an earlier paper by the same authors in J. Comput. Phys., 230 (2011), pp. 3668--3694.
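A minimal one-dimensional sketch of the circulant embedding idea follows (the paper treats nested block circulant matrices in $d$ dimensions; this is the textbook 1-D version with an exponential, i.e. Matérn $\nu = 1/2$, covariance, assuming the minimal embedding is already positive definite at this size, which the assert checks).

```python
import numpy as np

rng = np.random.default_rng(0)
m, ell = 200, 0.3                          # grid points on [0,1], correlation length
h = 1.0 / (m - 1)
rho = lambda t: np.exp(-np.abs(t) / ell)   # exponential (Matern nu = 1/2) covariance

# First column of the circulant extension of the m x m Toeplitz covariance matrix:
# distances 0, h, ..., (m-1)h, then mirrored, giving a circulant of size 2(m-1).
c = rho(h * np.concatenate([np.arange(m), np.arange(m - 2, 0, -1)]))
lam = np.fft.fft(c).real                   # eigenvalues of the circulant matrix
assert lam.min() > 0, "embedding not positive definite; enlarge the sampling domain"

# One draw: FFT of sqrt(lam/n) * (complex standard normal) gives two independent
# real N(0, C) fields in the real and imaginary parts; keep the real part.
n = len(c)
xi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
field = np.fft.fft(np.sqrt(lam / n) * xi)
sample = field[:m].real                    # restrict to the physical grid
print(sample[:5])
```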
We develop a convergence analysis of a multilevel algorithm combining higher order quasi--Monte Carlo (QMC) quadratures with general Petrov--Galerkin discretizations of countably affine parametric operator equations of elliptic and parabolic types, extending both the multilevel first order analysis in [F. Y. Kuo, Ch. Schwab, and I. H. Sloan, Found. Comput. Math., 15 (2015), pp. 411--449] and the single level higher order analysis in [J. Dick et al., SIAM J. Numer. Anal., 52 (2014), pp. 2676--2702]. We cover, in particular, both definite as well as indefinite strongly elliptic systems of partial differential equations (PDEs) in nonsmooth domains, and we discuss in detail the impact of higher order derivatives of Karhunen--Loève eigenfunctions in the parametrization of random PDE inputs on the convergence results. Based on our a priori error bounds, concrete choices of algorithm parameters are proposed in order to achieve a prescribed accuracy under minimal computational work. Problem classes and sufficient conditions on data are identified where multilevel higher order QMC Petrov--Galerkin algorithms outperform the corresponding single level versions of these algorithms. Numerical experiments confirm the theoretical results.
This paper provides the theoretical foundation for the component-by-component (CBC) construction of randomly shifted lattice rules that are tailored to integrals over $\mathbb{R}^s$ arising from practical applications. For an integral of the form $\int_{\mathbb{R}^s} f(\boldsymbol{y}) \prod_{j=1}^{s} \phi(y_j)\,\mathrm{d}\boldsymbol{y}$ with a univariate probability density $\phi$, our general strategy is to first map the integral into the unit cube $[0,1]^s$ using the inverse of the cumulative distribution function of $\phi$, and then apply quasi-Monte Carlo (QMC) methods. However, the transformed integrand in the unit cube rarely falls within the standard QMC setting of Sobolev spaces of functions with mixed first derivatives. Therefore, a non-standard function space setting for integrands over $\mathbb{R}^s$, previously considered by Kuo, Sloan, Wasilkowski and Waterhouse (2010), is required for the analysis. Motivated by the needs of three applications, the present paper extends the theory of the aforementioned paper in several non-trivial directions, including a new error analysis for the CBC construction of lattice rules with general non-product weights, the introduction of an unanchored variant for the setting, the use of coordinate-dependent weight functions in the norm, and the strategy for fast CBC construction with POD ("product and order dependent") weights.
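A minimal sketch of the general strategy described here: map randomly shifted lattice points to $\mathbb{R}^s$ through the inverse CDF of $\phi$ (the standard normal below) and average, with a handful of i.i.d. random shifts providing the probabilistic error estimate. The generating vector is an arbitrary illustrative choice, not one produced by the CBC construction.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, s = 2**13, 4
z = np.array([1, 433461, 315689, 119983])   # illustrative generator, not optimized
f = lambda y: np.exp(0.1 * y.sum(axis=1))   # E[f] = exp(0.5 * 0.01 * s) for y ~ N(0, I_s)

estimates = []
for _ in range(16):                          # 16 random shifts -> error bars from the spread
    shift = rng.random(s)
    x = (np.outer(np.arange(n), z) / n + shift) % 1.0
    y = norm.ppf(x)                          # map [0,1]^s -> R^s via the inverse CDF
    estimates.append(f(y).mean())
print(np.mean(estimates), np.std(estimates) / np.sqrt(16))
```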
Monte Carlo methods are used extensively in computational finance to estimate the price of financial derivative options. We review the use of quasi-Monte Carlo methods to obtain the same accuracy at a much lower computational cost, and focus on three key ingredients: the generation of Sobol' and lattice points, reduction of effective dimension using the principal component analysis approach at full potential, and randomization by shifting or digital shifting to give an unbiased estimator with a confidence interval. Our aim is to provide a starting point for finance practitioners new to quasi-Monte Carlo methods.
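A minimal sketch of the principal component analysis (PCA) ingredient mentioned above: the covariance of a Brownian path observed at $s$ equal time steps is $\mathrm{Cov}(W_{t_i}, W_{t_j}) = \min(t_i, t_j)$, and the PCA construction maps each QMC point through the eigendecomposition of that matrix, so that the leading (best-distributed) coordinates carry most of the variance. The parameters below are illustrative.

```python
import numpy as np
from scipy.stats import norm, qmc

s, T = 32, 1.0                                   # number of time steps, maturity
t = T * np.arange(1, s + 1) / s
C = np.minimum.outer(t, t)                       # covariance matrix min(t_i, t_j)
w, V = np.linalg.eigh(C)
A = V[:, ::-1] * np.sqrt(w[::-1])                # C = A A^T, columns by decreasing eigenvalue

u = qmc.Sobol(d=s, scramble=True, seed=7).random_base2(m=10)
paths = norm.ppf(u) @ A.T                        # each row is a Brownian path at t_1..t_s
print(paths.shape, paths[:, -1].var())           # Var(W_T) should be close to T = 1
```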
This is a note on Math. Comp. 82 (2013), 383–400. We first report a mistake, in that the main result Theorem 3.1, though correct, does not as claimed apply to the Asian option pricing problem. This is because assumption (3.3) in the theorem is not satisfied by the Asian option pricing problem. In this note we present a strengthened theorem, which removes that assumption. The new theorem is immediately applicable to the Asian option pricing problem with the standard and Brownian bridge constructions. Thus the option pricing conclusions of our original paper stand.
In a previous paper (Graham et al. in J Comput Phys 230:3668-3694, 2011), the authors proposed a new practical method for computing expected values of functionals of solutions for certain classes of elliptic partial differential equations with random coefficients. This method was based on combining quasi-Monte Carlo (QMC) methods for computing the expected values with circulant embedding methods for sampling the random field on a regular grid. It was found capable of handling fluid flow problems in random heterogeneous media with high stochastic dimension, but no convergence theory was provided. This paper provides a convergence analysis for the method in the case when the QMC method is a specially designed randomly shifted lattice rule. The convergence result depends on the eigenvalues of the underlying nested block circulant matrix and can be independent of the number of stochastic variables under certain assumptions. In fact the QMC analysis applies to general factorisations of the covariance matrix to sample the random field. The error analysis for the underlying fully discrete finite element method allows for locally refined meshes (via interpolation from a regular sampling grid of the random field). Numerical results on a non-regular domain with corner singularities in two spatial dimensions and on a regular domain in three spatial dimensions are included.
Keywords: optimal control, uncertainty quantification, quasi-Monte Carlo method, PDE-constrained optimization with uncertain coefficients, optimization under uncertainty. AMS Subject Headings: 49J20, 65D30, 65D32. Article history: submitted 22 October 2019; accepted 3 February 2021; published online 8 April 2021. Publisher: Society for Industrial and Applied Mathematics.
The construction of randomly shifted rank-$1$ lattice rules, where the number of points $n$ is a prime number, has recently been developed by Sloan, Kuo and Joe for integration of functions in weighted Sobolev spaces, and was extended by Kuo and Joe and by Dick to composite numbers. To construct $d$-dimensional rules, the shifts were generated randomly and the generating vectors were constructed component-by-component at a cost of $O(n^2d^2)$ operations. Here we consider the situation where $n$ is the product of two distinct prime numbers $p$ and $q$. We still generate the shifts randomly but we modify the algorithm so that the cost of constructing the, now two, generating vectors component-by-component is only $O(n(p+q)d^2)$ operations. This reduction in cost allows, in practice, construction of rules with millions of points. The rules constructed again achieve a worst-case strong tractability error bound, with a rate of convergence $O(p^{-1+\delta}q^{-1/2})$ for $\delta>0$.
Quasi-Monte Carlo (QMC) methods are applied to multi-level Finite Element (FE) discretizations of elliptic partial differential equations (PDEs) with a random coefficient, to estimate expected values of linear functionals of the solution. The expected value is considered as an infinite-dimensional integral in the parameter space corresponding to the randomness induced by the random coefficient. We use a multi-level algorithm, with the number of QMC points depending on the discretization level, and with a level-dependent dimension truncation strategy. In some scenarios, we show that the overall error is $\mathcal{O}(h^2)$, where $h$ is the finest FE mesh width, or $\mathcal{O}(N^{-1+\delta})$ for arbitrary $\delta>0$, where $N$ is the maximal number of QMC sampling points. For these scenarios, the total work is essentially of the order of one single PDE solve at the finest FE discretization level. The analysis exploits regularity of the parametric solution with respect to both the physical variables (the variables in the physical domain) and the parametric variables (the parameters corresponding to randomness). Families of QMC rules with POD weights (product and order dependent weights) which quantify the relative importance of subsets of the variables are found to be natural for proving convergence rates of QMC errors that are independent of the number of parametric variables.
Quasi--Monte Carlo (QMC) rules $1/N \sum_{n=0}^{N-1} f(\boldsymbol{y}_n A)$ can be used to approximate integrals of the form $\int_{[0,1]^s} f(\boldsymbol{y} A) \,\mathrm{d} \boldsymbol{y}$, where $A$ is a matrix and $\boldsymbol{y}$ is a row vector. This type of integral arises, for example, from the simulation of a normal distribution with a general covariance matrix, from the approximation of the expectation value of solutions of PDEs with random coefficients, or from applications from statistics. In this paper we design QMC quadrature points $\boldsymbol{y}_0, \ldots, \boldsymbol{y}_{N-1} \in [0,1]^s$ such that for the matrix $Y = (\boldsymbol{y}_{0}^\top, \ldots, \boldsymbol{y}_{N-1}^\top)^\top$ whose rows are the quadrature points, one can use the fast Fourier transform to compute the matrix-vector product $Y \boldsymbol{a}^\top$, $\boldsymbol{a} \in \mathbb{R}^s$, in $\mathcal{O}(N \log N)$ operations and at most $2(s-1)$ extra additions. The proposed method can be applied to lattice rules, polynomial lattice rules, and a certain type of Korobov $p$-set and even works if the point set $\boldsymbol{y}_0, \ldots, \boldsymbol{y}_{N-1}$ is transformed to another domain $U \subseteq \mathbb{R}^s$ by a coordinatewise mapping $\phi$ which is the same in each coordinate. The approach is illustrated computationally by three numerical experiments. The first test considers the generation of points with normal distribution and a general covariance matrix, the second test applies QMC to high-dimensional, affine-parametric, elliptic PDEs with uniformly distributed random coefficients, and the third test addresses finite element discretizations of elliptic PDEs with high-dimensional, log-normal random input data. All numerical tests show a significant speedup of the computation times of the fast QMC matrix method compared to a conventional implementation as the dimension becomes large.
We consider rank-$1$ lattices for integration and reconstruction of functions with series expansion supported on a finite index set. We explore the connection between the periodic Fourier space and the non-periodic cosine space and Chebyshev space, via tent transform and then cosine transform, to transfer known results from the periodic setting into new insights for the non-periodic settings. Fast discrete cosine transform can be applied for the reconstruction phase. To reduce the size of the auxiliary index set in the associated component-by-component (CBC) construction for the lattice generating vectors, we work with a bi-orthonormal set of basis functions, leading to three methods for function reconstruction in the non-periodic settings. We provide new theory and efficient algorithmic strategies for the CBC construction. We also interpret our results in the context of general function approximation and discrete least-squares approximation.
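For reference, the tent transform used to pass from the periodic to the non-periodic setting is the map
$$\varphi(x) \;=\; 1 - |2x - 1|, \qquad x \in [0,1],$$
applied componentwise to the lattice points; composing with the cosine transform then connects the Fourier, cosine, and Chebyshev settings described above.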
Due to the booming development of the internet, people can easily purchase a life insurance product on their mobile devices. However, it is easy for people to overlook whether their own situation is suited to the long-term payment obligations. A midway surrender will inevitably damage the policyholder's economic and credit situation. For insurance companies, predicting the credit of each policyholder is an important step in risk management to avoid encountering the worst-case scenario of policyholders withdrawing their insurance midway. This study applied a genetic algorithm-based BP neural network to build a model for predicting policyholder behavior, and integrates the optimal implementation results into a policyholder behavior prediction system. This system could be utilized by insurance company employees for internal purposes and to assist users in making decisions on whether to underwrite while ensuring data security. The contribution of this research is to assist insurance companies in better risk management, to promote data security, and to serve as a case study for the academic community on the application of genetic algorithms and neural networks in the insurance industry.
We analyse and implement a quasi-Monte Carlo (QMC) finite element method (FEM) for the forward problem of uncertainty quantification (UQ) for the Helmholtz equation with random coefficients, both in the second-order and zero-order terms of the equation, thus modelling wave scattering in random media. The problem is formulated on the infinite propagation domain, after scattering by the heterogeneity, and also (possibly) a bounded impenetrable scatterer. The spatial discretization scheme includes truncation to a bounded domain via a perfectly matched layer (PML) technique and then FEM approximation. A special case is the problem of an incident plane wave being scattered by a bounded sound-soft impenetrable obstacle surrounded by a random heterogeneous medium, or more simply, just scattering by the random medium. The random coefficients are assumed to be affine separable expansions with infinitely many independent uniformly distributed and bounded random parameters. As quantities of interest for the UQ, we consider the expectation of general linear functionals of the solution, with a special case being the far-field pattern of the scattered field. The numerical method consists of (a) dimension truncation in parameter space, (b) application of an adapted QMC method to compute expected values, and (c) computation of samples of the PDE solution via PML truncation and FEM approximation. Our error estimates are explicit in $s$ (the dimension truncation parameter), $N$ (the number of QMC points), $h$ (the FEM grid size) and (most importantly), $k$ (the Helmholtz wavenumber). The method is also exponentially accurate with respect to the PML truncation radius. Illustrative numerical experiments are given.
In this paper we consider Deep Neural Networks (DNNs) with a smooth activation function as surrogates for high-dimensional functions that are somewhat smooth but costly to evaluate. We consider the standard (non-periodic) DNNs as well as propose a new model of periodic DNNs which are especially suited for a class of periodic target functions when Quasi-Monte Carlo lattice points are used as training points. We study the regularity of DNNs by obtaining explicit bounds on all mixed derivatives with respect to the input parameters. The bounds depend on the neural network parameters as well as the choice of activation function. By imposing restrictions on the network parameters to match the regularity features of the target functions, we prove that DNNs with $N$ tailor-constructed lattice training points can achieve the generalization error (or $L_2$ approximation error) bound ${\tt tol} + \mathcal{O}(N^{-r/2})$, where ${\tt tol}\in (0,1)$ is the tolerance achieved by the training error in practice, and $r = 1/p^*$, with $p^*$ being the "summability exponent" of a sequence that characterises the decay of the input variables in the target functions, and with the implied constant independent of the dimensionality of the input data. We apply our analysis to popular models of parametric elliptic PDEs in uncertainty quantification. In our numerical experiments, we restrict the network parameters during training by adding tailored regularization terms, and we show that for an algebraic equation mimicking the parametric PDE problems the DNNs trained with tailored regularization perform significantly better.
Keywords: quasi-Monte Carlo method, finite element method, wave propagation. MSC codes: 35J05, 35R60, 65D30, 65D32, 65N30.
We study the application of a tailored quasi-Monte Carlo (QMC) method to a class of optimal control problems subject to parabolic partial differential equation (PDE) constraints under uncertainty: the state in our setting is the solution of a parabolic PDE with a random thermal diffusion coefficient, steered by a control function. To account for the presence of uncertainty in the optimal control problem, the objective function is composed with a risk measure. We focus on two risk measures, both involving high-dimensional integrals over the stochastic variables: the expected value and the (nonlinear) entropic risk measure. The high-dimensional integrals are computed numerically using specially designed QMC methods and, under moderate assumptions on the input random field, the error rate is shown to be essentially linear, independently of the stochastic dimension of the problem, and thereby superior to ordinary Monte Carlo methods. Numerical results demonstrate the effectiveness of our method.
In this paper, we apply quasi-Monte Carlo (QMC) methods with an initial preintegration step to estimate cumulative distribution functions and probability density functions in uncertainty quantification (UQ). The distribution and density functions correspond to a quantity of interest involving the solution to an elliptic partial differential equation (PDE) with a lognormally distributed coefficient and a normally distributed source term. There is extensive previous work on using QMC to compute expected values in UQ, which have proven very successful in tackling a range of different PDE problems. However, the use of QMC for density estimation applied to UQ problems will be explored here for the first time. Density estimation presents a more difficult challenge compared to computing the expected value due to discontinuities present in the integral formulations of both the distribution and density. Our strategy is to use preintegration to eliminate the discontinuity by integrating out a carefully selected random parameter, so that QMC can be used to approximate the remaining integral. First, we establish regularity results for the PDE quantity of interest that are required for smoothing by preintegration to be effective. We then show that an $N$-point lattice rule can be constructed for the integrands corresponding to the distribution and density, such that after preintegration the QMC error is of order $\mathcal{O}(N^{-1+\epsilon})$ for arbitrarily small $\epsilon>0$. This is the same rate achieved for computing the expected value of the quantity of interest. Numerical results are presented to reaffirm our theory.
We consider uncertainty quantification for the Poisson problem subject to domain uncertainty. For the stochastic parameterization of the random domain, we use the model recently introduced by Kaarnioja, Kuo, and Sloan (SIAM J. Numer. Anal., 2020) in which a countably infinite number of independent random variables enter the random field as periodic functions. We develop lattice quasi-Monte Carlo (QMC) cubature rules for computing the expected value of the solution to the Poisson problem subject to domain uncertainty. These QMC rules can be shown to exhibit higher order cubature convergence rates permitted by the periodic setting independently of the stochastic dimension of the problem. In addition, we present a complete error analysis for the problem by taking into account the approximation errors incurred by truncating the input random field to a finite number of terms and discretizing the spatial domain using finite elements. The paper concludes with numerical experiments demonstrating the theoretical error estimates.
The Brownian bridge or Lévy–Ciesielski construction of Brownian paths almost surely converges uniformly to the true Brownian path. We focus on the uniform error. In particular, we show constructively that at level $N$, at which there are $d=2^N$ points evaluated on the Brownian path, the uniform error and its square, and the uniform error of geometric Brownian motion, have upper bounds of order $\mathcal{O}(\sqrt{\ln d/d})$, matching the known orders. We apply the results to an option pricing example.
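A minimal sketch of the Lévy–Ciesielski (Brownian bridge) construction at level $N$, evaluated at the dyadic points of $[0,1]$: start from the endpoint value and repeatedly fill in conditional midpoints, where the midpoint of a bridge over an interval of length $\Delta$ has conditional variance $\Delta/4$.

```python
import numpy as np

def levy_ciesielski(N, rng):
    """Brownian path at the dyadic points k/2^N, k = 0..2^N, by midpoint refinement."""
    W = np.zeros(2)
    W[1] = rng.standard_normal()                 # W(1) ~ N(0, 1)
    for level in range(1, N + 1):
        h = 2.0 ** (-level)                      # existing spacing is 2h, so variance is h/2
        mid = 0.5 * (W[:-1] + W[1:]) \
            + np.sqrt(h / 2) * rng.standard_normal(len(W) - 1)
        out = np.empty(2 * len(W) - 1)
        out[0::2], out[1::2] = W, mid            # interleave old points and new midpoints
        W = out
    return W

path = levy_ciesielski(10, np.random.default_rng(3))
print(len(path), path[-1])                       # 2**10 + 1 values on [0, 1]
```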
In this paper, we analyze a method for approximating the distribution function and density of a random variable that depends in a nontrivial way on a possibly high number of independent random variables, each with support on the whole real line. Starting with the integral formulations of the distribution and density, the method involves smoothing the original integrand by preintegration with respect to one suitably chosen variable, and then applying a suitable quasi-Monte Carlo method to compute the integral of the resulting smoother function. Interpolation is then used to reconstruct the distribution or density on an interval. The preintegration technique is a special case of conditional sampling, a method that has previously been applied to a wide range of problems in statistics and computational finance. In particular, the pointwise approximation studied in this work is a specific case of the conditional density estimator previously considered by L'Ecuyer, Puchhammer, and Ben Abdellah [INFORMS J. Comput., 34 (2022), pp. 1729–1748]. Our theory provides a rigorous regularity analysis of the preintegrated function, which is then used to show that the errors of the pointwise and interpolated estimators can both achieve nearly first-order convergence. Numerical results support the theory. Keywords: quasi-Monte Carlo methods, density estimation, preintegration, conditional sampling. MSC codes: 65D30, 65D32, 62G07.
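A minimal toy sketch of smoothing by preintegration for a distribution function: with $X = \sum_{j=1}^{s} y_j$ and $y \sim N(0, I_s)$, integrating out $y_1$ exactly turns the discontinuous indicator $\mathbf{1}\{X \le t\}$ into the smooth function $\Phi(t - \sum_{j \ge 2} y_j)$, which is then averaged by QMC. The lattice generator below is a random illustrative choice, not a constructed one.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n, s, t = 2**12, 8, 1.0
z = rng.integers(1, n, size=s - 1)             # illustrative generator for the s-1 remaining dims
shift = rng.random(s - 1)
x = (np.outer(np.arange(n), z) / n + shift) % 1.0
y_rest = norm.ppf(x)                           # samples of y_2, ..., y_s

# Preintegration over y_1: P(y_1 <= t - sum_{j>=2} y_j | y_rest) = Phi(t - sum(y_rest))
smooth = norm.cdf(t - y_rest.sum(axis=1))
print(smooth.mean(), norm.cdf(t / np.sqrt(s)))  # QMC estimate vs exact P(X <= t)
```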
We describe a fast method for solving elliptic partial differential equations (PDEs) with uncertain coefficients using kernel interpolation at a lattice point set. By representing the input random field of the system using the model proposed by Kaarnioja, Kuo, and Sloan (SIAM J. Numer. Anal. 2020), in which a countable number of independent random variables enter the random field as periodic functions, it was shown by Kaarnioja, Kazashi, Kuo, Nobile, and Sloan (Numer. Math. 2022) that the lattice-based kernel interpolant can be constructed for the PDE solution as a function of the stochastic variables in a highly efficient manner using fast Fourier transform (FFT). In this work, we discuss the connection between our model and the popular "affine and uniform model" studied widely in the literature of uncertainty quantification for PDEs with uncertain coefficients. We also propose a new class of weights entering the construction of the kernel interpolant -- serendipitous weights -- which dramatically improve the computational performance of the kernel interpolant for PDE problems with uncertain coefficients, and allow us to tackle function approximation problems up to very high dimensionalities. Numerical experiments are presented to showcase the performance of the serendipitous weights.
The kernel interpolant in a reproducing kernel Hilbert space is optimal in the worst-case sense among all approximations of a function using the same set of function values. In this paper, we compare two search criteria to construct lattice point sets for use in lattice-based kernel approximation. The first candidate, $\mathcal{P}_n^*$, is based on the power function that appears in machine learning literature. The second, $\mathcal{S}_n^*$, is a search criterion used for generating lattices for approximation using truncated Fourier series. We find that the empirical difference in error between the lattices constructed using $\mathcal{P}_n^*$ and $\mathcal{S}_n^*$ is marginal. The criterion $\mathcal{S}_n^*$ is preferred as it is computationally more efficient and has a proven error bound.
We show that a very simple randomised algorithm for numerical integration can produce a near optimal rate of convergence for integrals of functions in the $d$-dimensional weighted Korobov space. This algorithm uses a lattice rule with a fixed generating vector and the only random element is the choice of the number of function evaluations. For a given computational budget $n$ of a maximum allowed number of function evaluations, we uniformly pick a prime $p$ in the range $n/2 < p \le n$. We show error bounds for the randomised error, which is defined as the worst case expected error, of the form $O(n^{-\alpha - 1/2 + \delta})$, with $\delta > 0$, for a Korobov space with smoothness $\alpha > 1/2$ and general weights. The implied constant in the bound is dimension-independent given the usual conditions on the weights. We present an algorithm that can construct suitable generating vectors \emph{offline} ahead of time at cost $O(d n^4 / \ln n)$ when the weight parameters defining the Korobov spaces are so-called product weights. For this case, numerical experiments confirm our theory that the new randomised algorithm achieves the near optimal rate of the randomised error.
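A minimal sketch of the randomised algorithm described in this abstract: draw a prime $p$ uniformly from $(n/2, n]$ and apply the $p$-point rank-1 lattice rule with a fixed generating vector. The vector below is an arbitrary placeholder; the paper constructs suitable vectors offline.

```python
import numpy as np

def primes_in(lo, hi):
    """Primes p with lo < p <= hi, via a simple sieve of Eratosthenes."""
    sieve = np.ones(hi + 1, dtype=bool)
    sieve[:2] = False
    for q in range(2, int(hi**0.5) + 1):
        if sieve[q]:
            sieve[q * q :: q] = False
    return [p for p in range(lo + 1, hi + 1) if sieve[p]]

def randomised_lattice(f, n, z, rng):
    """Pick a prime p uniformly from (n/2, n], then apply the p-point
    rank-1 lattice rule with the fixed generating vector z."""
    primes = primes_in(n // 2, n)
    p = primes[rng.integers(len(primes))]
    x = np.outer(np.arange(p), np.asarray(z)) % p / p
    return np.mean([f(row) for row in x])

rng = np.random.default_rng(9)
# Periodic test integrand on [0,1]^4 with exact integral 1.
f = lambda x: np.prod(1 + np.sin(2 * np.pi * x) / (1 + np.arange(len(x))) ** 2)
print(randomised_lattice(f, 4001, z=[1, 1597, 2897, 3391], rng=rng))
```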
We investigate the application of efficient recursive numerical integration strategies to models in lattice gauge theory from quantum field theory. Given the coupling structure of the physics problems and the group structure within lattice cubature rules for numerical integration, we show how to approach these problems efficiently by means of Fast Fourier Transform techniques. In particular, we consider applications to the quantum mechanical rotor and compact $U(1)$ lattice gauge theory, where the physical dimensions are two and three. This proceedings article reviews our results presented in J. Comput. Phys. 443 (2021) 110527.
We approximate $d$-variate periodic functions in weighted Korobov spaces with general weight parameters using $n$ function values at lattice points. We do not limit $n$ to be a prime number, as in currently available literature, but allow any number of points, including powers of $2$, thus providing the fundamental theory for construction of embedded lattice sequences. Our results are constructive in that we provide a component-by-component algorithm which constructs a suitable generating vector for a given number of points or even a range of numbers of points. It does so without needing to construct the index set on which the functions will be represented. The resulting generating vector can then be used to approximate functions in the underlying weighted Korobov space. We analyse the approximation error in the worst-case setting under both the $L_2$ and $L_{\infty}$ norms. Our component-by-component construction under the $L_2$ norm achieves the best possible rate of convergence for lattice-based algorithms, and the theory can be applied to lattice-based kernel methods and splines. Depending on the value of the smoothness parameter $\alpha$, we propose two variants of the search criterion in the construction under the $L_{\infty}$ norm, extending previous results which hold only for product-type weight parameters and prime $n$. We also provide a theoretical upper bound showing that embedded lattice sequences are essentially as good as lattice rules with a fixed value of $n$. Under some standard assumptions on the weight parameters, the worst-case error bound is independent of $d$.
We consider uncertainty quantification for the Poisson problem subject to domain uncertainty. For the stochastic parameterization of the random domain, we use the model recently introduced by Kaarnioja, Kuo, and Sloan (SIAM J. Numer. Anal., 2020) in which a countably infinite number of independent random variables enter the random field as periodic functions. We develop lattice quasi-Monte Carlo (QMC) cubature rules for computing the expected value of the solution to the Poisson problem subject to domain uncertainty. These QMC rules can be shown to exhibit higher order cubature convergence rates permitted by the periodic setting independently of the stochastic dimension of the problem. In addition, we present a complete error analysis for the problem by taking into account the approximation errors incurred by truncating the input random field to a finite number of terms and discretizing the spatial domain using finite elements. The paper concludes with numerical experiments demonstrating the theoretical error estimates.
In this paper we propose and analyse a method for estimating three quantities related to an Asian option: the fair price, the cumulative distribution function, and the probability density. The method involves preintegration with respect to one well chosen integration variable to obtain a smooth function of the remaining variables, followed by the application of a tailored lattice Quasi-Monte Carlo rule to integrate over the remaining variables.
This paper deals with the kernel-based approximation of a multivariate periodic function by interpolation at the points of an integration lattice—a setting that, as pointed out by Zeng et al. (Monte Carlo and Quasi-Monte Carlo Methods 2004, Springer, New York, 2006) and Zeng et al. (Constr. Approx. 30: 529–555, 2009), allows fast evaluation by fast Fourier transform, so avoiding the need for a linear solver. The main contribution of the paper is the application to the approximation problem for uncertainty quantification of elliptic partial differential equations, with the diffusion coefficient given by a random field that is periodic in the stochastic variables, in the model proposed recently by Kaarnioja et al. (SIAM J Numer Anal 58(2): 1068–1091, 2020). The paper gives a full error analysis, and full details of the construction of lattices needed to ensure a good (but inevitably not optimal) rate of convergence and an error bound independent of dimension. Numerical experiments support the theory.
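The reason no linear solver is needed is that, for a shift-invariant kernel evaluated at rank-1 lattice points, the interpolation (Gram) matrix has entries depending only on $(i-j) \bmod n$ and is therefore circulant. A hedged sketch of the resulting FFT solve (the kernel values are assumed given):

```python
import numpy as np

def lattice_interpolation_coeffs(gram_first_col, f_values):
    """Solve the circulant system K c = f by FFT diagonalisation.

    gram_first_col[k] = K(x_k, x_0) is the first column of the Gram matrix
    of a shift-invariant kernel at the n lattice points x_0, ..., x_{n-1};
    f_values are the n function values to interpolate.
    """
    return np.fft.ifft(np.fft.fft(f_values) / np.fft.fft(gram_first_col)).real
```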
We consider rank-$1$ lattices for integration and reconstruction of functions with series expansion supported on a finite index set. We explore the connection between the periodic Fourier space and the non-periodic cosine space and Chebyshev space, via tent transform and then cosine transform, to transfer known results from the periodic setting into new insights for the non-periodic settings. Fast discrete cosine transform can be applied for the reconstruction phase. To reduce the size of the auxiliary index set in the associated component-by-component (CBC) construction for the lattice generating vectors, we work with a bi-orthonormal set of basis functions, leading to three methods for function reconstruction in the non-periodic settings. We provide new theory and efficient algorithmic strategies for the CBC construction. We also interpret our results in the context of general function approximation and discrete least-squares approximation.
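A small sketch of the two transforms used to leave the periodic setting, under the usual conventions (the tent transform folds $[0,1]$ onto itself, and composing with a cosine yields Chebyshev-type nodes in $[-1,1]$):

```python
import numpy as np

def tent(x):
    """Tent transform phi(x) = 1 - |2x - 1|, used to pass to the cosine space."""
    return 1.0 - np.abs(2.0 * x - 1.0)

def chebyshev_nodes(x):
    """Cosine transform of tent-transformed points, giving nodes in [-1, 1]."""
    return np.cos(np.pi * tent(x))
```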
We prove that a variant of the classical Sobolev space of first-order dominating mixed smoothness is equivalent (under a certain condition) to the unanchored ANOVA space on $\mathbb{R}^d$, for $d \geq 1$. Both spaces are Hilbert spaces involving weight functions, which determine the behaviour as different variables tend to $\pm \infty$, and weight parameters, which represent the influence of different subsets of variables. The unanchored ANOVA space on $\mathbb{R}^d$ was initially introduced by Nichols & Kuo in 2014 to analyse the error of quasi-Monte Carlo (QMC) approximations for integrals on unbounded domains; whereas the classical Sobolev space of dominating mixed smoothness was used as the setting in a series of papers by Griebel, Kuo & Sloan on the smoothing effect of integration, in an effort to develop a rigorous theory on why QMC methods work so well for certain non-smooth integrands with kinks or jumps coming from option pricing problems. In this same setting, Griewank, Kuo, Leövey & Sloan in 2018 subsequently extended these ideas by developing a practical smoothing by preintegration technique to approximate integrals of such functions with kinks or jumps. We first prove the equivalence in one dimension (itself a non-trivial task), before following a similar, but more complicated, strategy to prove the equivalence for general dimensions. As a consequence of this equivalence, we analyse applying QMC combined with a preintegration step to approximate the fair price of an Asian option, and prove that the error of such an approximation using $N$ points converges at a rate close to $1/N$.
We propose and analyze a quasi-Monte Carlo (QMC) algorithm for efficient simulation of wave propagation modeled by the Helmholtz equation in a bounded region in which the refractive index is random and spatially heterogeneous. Our focus is on the case in which the region can contain multiple wavelengths. We bypass the usual sign-indefiniteness of the Helmholtz problem by switching to an alternative sign-definite formulation recently developed by Ganesh and Morgenstern [Numer. Algorithms, 83 (2020), pp. 1441--1487]. The price to pay is that the regularity analysis required for QMC methods becomes much more technical. Nevertheless we obtain a complete analysis with error comprising stochastic dimension truncation error, finite element error, and cubature error, with results comparable to those obtained for the diffusion problem.
Preintegration is a technique for high-dimensional integration over $d$-dimensional Euclidean space, which is designed to reduce an integral whose integrand contains kinks or jumps to a $(d-1)$-dimensional integral of a smooth function. The resulting smoothness allows efficient evaluation of the $(d-1)$-dimensional integral by a Quasi-Monte Carlo or Sparse Grid method. The technique is similar to conditional sampling in statistical contexts, but the intention is different: in conditional sampling the aim is to reduce the variance, rather than to achieve smoothness. Preintegration involves an initial integration with respect to one well chosen real-valued variable. Griebel, Kuo, Sloan [Math. Comp. 82 (2013), 383--400] and Griewank, Kuo, Leövey, Sloan [J. Comput. Appl. Math. 344 (2018), 259--274] showed that the resulting $(d-1)$-dimensional integrand is indeed smooth under appropriate conditions, including a key assumption: the smooth function underlying the kink or jump must be strictly monotone with respect to the chosen special variable when all other variables are held fixed. The question addressed in this paper is whether this monotonicity property with respect to one well chosen variable is necessary. We show here that the answer is essentially yes, in the sense that without this property the resulting $(d-1)$-dimensional integrand is generally not smooth, having square-root or other singularities.
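In symbols, with $\rho$ a univariate probability density, $y_j$ the chosen special variable and $\boldsymbol{y}_{-j}$ the remaining $d-1$ variables, preintegration is the exact identity
\[
(P_j f)(\boldsymbol{y}_{-j}) = \int_{-\infty}^{\infty} f(\boldsymbol{y})\,\rho(y_j)\,\mathrm{d}y_j,
\qquad
\int_{\mathbb{R}^d} f(\boldsymbol{y}) \prod_{i=1}^{d} \rho(y_i)\,\mathrm{d}\boldsymbol{y}
\;=\; \int_{\mathbb{R}^{d-1}} (P_j f)(\boldsymbol{y}_{-j}) \prod_{i \neq j} \rho(y_i)\,\mathrm{d}\boldsymbol{y}_{-j},
\]
the point being that $P_j f$ can be smooth even when $f$ has a kink or jump, provided the monotonicity condition holds.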
In this paper, we analyse a method for approximating the distribution function and density of a random variable that depends in a non-trivial way on a possibly high number of independent random variables, each with support on the whole real line. Starting with the integral formulations of the distribution and density, the method involves smoothing the original integrand by preintegration with respect to one suitably chosen variable, and then applying a suitable quasi-Monte Carlo (QMC) method to compute the integral of the resulting smoother function. Interpolation is then used to reconstruct the distribution or density on an interval. The preintegration technique is a special case of conditional sampling, a method that has previously been applied to a wide range of problems in statistics and computational finance. In particular, the pointwise approximation studied in this work is a specific case of the conditional density estimator previously considered in L'Ecuyer et al., arXiv:1906.04607. Our theory provides a rigorous regularity analysis of the preintegrated function, which is then used to show that the errors of the pointwise and interpolated estimators can both achieve nearly first-order convergence. Numerical results support the theory.
Many studies in uncertainty quantification have been carried out under the assumption of an input random field in which a countable number of independent random variables are each uniformly distributed on an interval, with these random variables entering linearly in the input random field (the so-called affine model). In this paper we consider an alternative model of the random field, in which the random variables have the same uniform distribution on an interval, but the random variables enter the input field as periodic functions. The field is constructed in such a way as to have the same mean and covariance function as the affine random field. Higher moments differ from the affine case, but in general the periodic model seems no less desirable. The new model of the random field is used to compute expected values of a quantity of interest arising from an elliptic PDE with random coefficients. The periodicity is shown to yield a higher order cubature convergence rate of $\mathcal{O}(n^{-1/p})$ independently of the dimension when used in conjunction with rank-1 lattice cubature rules constructed using suitably chosen smoothness-driven product and order dependent weights, where $n$ is the number of lattice points and $p$ is the summability exponent of the fluctuations in the series expansion of the random coefficient. We present numerical examples that assess the performance of our method.
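A minimal sketch of the two input models, assuming the normalisation in which each $y_j$ is uniform on $[0,1]$ and the factor $1/\sqrt{6}$ matches the variance of $\sin(2\pi y_j)$ to that of $y_j - 1/2$ (the mean field a0 and fluctuations psi below are placeholders):

```python
import numpy as np

def affine_field(a0, psi, y):
    """Affine model: the uniform variables enter linearly."""
    return a0 + sum((y_j - 0.5) * psi_j for y_j, psi_j in zip(y, psi))

def periodic_field(a0, psi, y):
    """Periodic model: same mean and covariance, variables enter as sines."""
    return a0 + sum(np.sin(2 * np.pi * y_j) / np.sqrt(6) * psi_j
                    for y_j, psi_j in zip(y, psi))
```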
This paper provides the theoretical foundation for the construction of lattice algorithms for multivariate $L_2$ approximation in the worst case setting, for functions in a periodic space with general weight parameters. Our construction leads to an error bound that achieves the best possible rate of convergence for lattice algorithms. This work is motivated by PDE applications in which bounds on the norm of the functions to be approximated require special forms of weight parameters (so-called POD weights or SPOD weights), as opposed to the simple product weights covered by the existing literature. Our result can be applied to other lattice-based approximation algorithms, including kernel methods or splines.
We seek shifted lattice rules that are good for high dimensional integration over the unit cube in the setting of an unanchored weighted Sobolev space of functions with square-integrable mixed first derivatives. Many existing studies rely on random shifting of the lattice, whereas here we work with lattice rules with a deterministic shift. Specifically, we consider 'half-shifted' rules in which each component of the shift is an odd multiple of $1/(2N)$, where $N$ is the number of points in the lattice. By applying the principle that there is always at least one choice as good as the average, we show that for a given generating vector there exists a half-shifted rule whose squared worst-case error differs from the shift-averaged squared worst-case error by a term of only order $1/N^2$. We carry out numerical experiments where the generating vector is chosen component-by-component (CBC), as for randomly shifted lattices, and where the shift is chosen by a new 'CBC for shift' algorithm. The numerical results are encouraging.
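A minimal sketch of the point set of a half-shifted rule (the generating vector z and the odd-numerator choices k are placeholders; selecting k is the job of the 'CBC for shift' algorithm, which is not reproduced here):

```python
import numpy as np

def half_shifted_lattice(z, k, N):
    """Shifted rank-1 lattice whose shift components are the odd multiples
    (2*k_j + 1)/(2*N), as in the half-shifted rules described above."""
    i = np.arange(N)[:, None]
    shift = (2 * np.asarray(k) + 1) / (2.0 * N)
    return ((i * np.asarray(z) % N) / N + shift) % 1.0
```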
We study an optimal control problem under uncertainty, where the target function is the solution of an elliptic partial differential equation with random coefficients, steered by a control function. The robust formulation of the optimization problem is stated as a high-dimensional integration problem over the stochastic variables. It is well known that carrying out a high-dimensional numerical integration of this kind using a Monte Carlo method has a notoriously slow convergence rate; meanwhile, a faster rate of convergence can potentially be obtained by using sparse grid quadratures, but these lead to discretized systems that are non-convex due to the involvement of negative quadrature weights. In this paper, we analyze instead the application of a quasi-Monte Carlo method, which retains the desirable convexity structure of the system and has a faster convergence rate compared to ordinary Monte Carlo methods. In particular, we show that under moderate assumptions on the decay of the input random field, the error rate obtained by using a specially designed, randomly shifted rank-1 lattice quadrature rule is essentially inversely proportional to the number of quadrature nodes. The overall discretization error of the problem, consisting of the dimension truncation error, finite element discretization error and quasi-Monte Carlo quadrature error, is derived in detail. We assess the theoretical findings in numerical experiments.
In a recent paper by the same authors, we provided a theoretical foundation for the component-by-component (CBC) construction of lattice algorithms for multivariate $L_2$ approximation in the worst case setting, for functions in a periodic space with general weight parameters. The construction led to an error bound that achieves the best possible rate of convergence for lattice algorithms. Previously available literature covered only weights of a simple form commonly known as product weights. In this paper we address the computational aspect of the construction. We develop fast CBC construction of lattice algorithms for special forms of weight parameters, including the so-called POD weights and SPOD weights which arise from PDE applications, making the lattice algorithms truly applicable in practice. With $d$ denoting the dimension and $n$ the number of lattice points, we show that the construction cost is $\mathcal{O}(d\,n\log(n) + d^2\log(d)\,n)$ for POD weights, and $\mathcal{O}(d\,n\log(n) + d^3\sigma^2\,n)$ for SPOD weights of degree $\sigma\ge 2$. The resulting lattice generating vectors can be used in other lattice-based approximation algorithms, including kernel methods or splines.
This is the first book devoted to lattice methods, a recently developed way of calculating multiple integrals in many variables. Multiple integrals of this kind arise in fields such as quantum physics and chemistry, statistical mechanics, Bayesian statistics and many others. Lattice methods are an effective tool when the number of variables is large. The book begins with a review of existing methods before presenting lattice theory in a thorough, self-contained manner, with numerous illustrations and examples. Group and number theory are included, but the treatment is such that no prior knowledge is needed. Not only the theory but also the practical implementation of lattice methods is covered. An algorithm is presented alongside tables not available elsewhere, which together allow the practical evaluation of multiple integrals in many variables. Most importantly, the algorithm produces an error estimate in a very efficient manner. The book also provides a fast track for readers wanting to move rapidly to using lattice methods in practical calculations. It concludes with extensive numerical tests which compare lattice methods with other methods, such as the Monte Carlo method.
Recently, quasi-Monte Carlo algorithms have been successfully used for multivariate integration of high dimension $d$, and were significantly more efficient than Monte Carlo algorithms. The existing theory of the worst case error bounds of quasi-Monte Carlo algorithms does not explain this phenomenon. This paper presents a partial answer to why quasi-Monte Carlo algorithms can work well for arbitrarily large $d$. It is done by identifying classes of functions for which the effect of the dimension $d$ is negligible. These are weighted classes in which the behavior in the successive dimensions is moderated by a sequence of weights. We prove that the minimal worst case error of quasi-Monte Carlo algorithms does not depend on the dimension $d$ iff the sum of the weights is finite. We also prove that the minimal number of function values in the worst case setting needed to reduce the initial error by $\varepsilon$ is bounded by $C\varepsilon^{-p}$, where the exponent $p \in [1, 2]$ and $C$ depends exponentially on the sum of the weights. Hence, a relatively small sum of the weights makes some quasi-Monte Carlo algorithms strongly tractable. We show in a nonconstructive way that many quasi-Monte Carlo algorithms are strongly tractable. Even random selection of sample points (done once for the whole weighted class of functions, with the worst case error then established for that particular selection, in contrast to Monte Carlo where random selection of sample points is carried out for a fixed function) leads to strongly tractable quasi-Monte Carlo algorithms. In this case the minimal number of function values in the worst case setting is of order $\varepsilon^{-p}$ with the exponent $p = 2$. The deterministic construction of strongly tractable quasi-Monte Carlo algorithms as well as the minimal exponent $p$ is open.
We reformulate the original component-by-component algorithm for rank-$1$ lattices in a matrix-vector notation so as to highlight its structural properties. For function spaces similar to a weighted Korobov space, we derive a technique which has construction cost $\mathcal{O}(s\,n\log(n))$, in contrast with the original algorithm which has construction cost $\mathcal{O}(s\,n^2)$. Herein $s$ is the number of dimensions and $n$ the number of points (taken prime). In contrast to other approaches to speed up construction, our fast algorithm computes exactly the same quantity as the original algorithm. The presented algorithm can also be used to construct randomly shifted lattice rules in weighted Sobolev spaces.
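For orientation, the following is a hedged sketch of the original $\mathcal{O}(s\,n^2)$ component-by-component loop that the matrix-vector reformulation accelerates, written for the weighted Korobov space with smoothness $\alpha = 2$ and product weights, where the kernel sum has the closed form $2\pi^2(x^2 - x + 1/6)$; it is an illustration, not the paper's fast algorithm.

```python
import numpy as np

def cbc_korobov(n, s_max, gamma):
    """Naive O(s * n^2) CBC construction in the weighted Korobov space with
    alpha = 2 and product weights gamma (n assumed prime).  The fast version
    computes exactly the same quantities in O(s * n * log(n)) using FFTs."""
    omega = lambda x: 2.0 * np.pi ** 2 * (x * x - x + 1.0 / 6.0)
    prod = np.ones(n)   # running product of (1 + gamma_j * omega) over chosen components
    z = []
    for j in range(s_max):
        # squared worst-case error is -1 + (1/n) sum_i prod_j (1 + gamma_j * omega(x_ij));
        # the constant -1 is omitted below since it does not affect the argmin
        errs = [np.mean(prod * (1.0 + gamma[j] * omega((c * np.arange(n) % n) / n)))
                for c in range(1, n)]
        best = int(np.argmin(errs)) + 1
        prod *= 1.0 + gamma[j] * omega((best * np.arange(n) % n) / n)
        z.append(best)
    return z

z = cbc_korobov(127, 4, [0.9 ** j for j in range(1, 5)])
```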
Shifted rank-1 lattice rules, a special class of quasi-Monte Carlo methods, have recently been proposed by the present authors for the integration of functions belonging to certain "weighted" Sobolev spaces. The shifts in these rules were generated in a deterministic manner. In contrast, in this paper we generate these shifts randomly. This allows probabilistic estimates for the error in a given integral. It also reduces the number of operations required to find the generating vectors for the underlying lattice rules component-by-component. The rules thus constructed achieve a worst-case strong tractability error bound in an average or probabilistic sense.
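A minimal sketch of how random shifting yields a practical probabilistic error estimate: average the rule over a few independent shifts and report the sample standard error (z is a placeholder generating vector; f is assumed vectorised, taking an (n, s) array of points and returning n values):

```python
import numpy as np

def randomly_shifted_lattice(f, z, n, n_shifts, seed=0):
    """QMC estimate and standard error from n_shifts i.i.d. random shifts."""
    rng = np.random.default_rng(seed)
    z = np.asarray(z)
    base = (np.arange(n)[:, None] * z % n) / n          # unshifted lattice points
    q = np.array([np.mean(f((base + rng.random(z.size)) % 1.0))
                  for _ in range(n_shifts)])
    return q.mean(), q.std(ddof=1) / np.sqrt(n_shifts)
```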
Preface 1. Monte Carlo methods and Quasi-Monte Carlo methods 2. Quasi-Monte Carlo methods for numerical integration 3. Low-discrepancy point sets and sequences 4. Nets and (t,s)-sequences 5. Lattice rules for numerical integration 6. Quasi-Monte Carlo methods for optimization 7. Random numbers and pseudorandom numbers 8. Nonlinear congruential pseudorandom numbers 9. Shift-register pseudorandom numbers 10. Pseudorandom vector generation Appendix A. Finite fields and linear recurring sequences Appendix B. Continued fractions Bibliography Index.
Lattice rules are a family of equal-weight cubature formulae for approximating high-dimensional integrals. By now it is well established that good generating vectors for lattice rules having $n$ points can be constructed component-by-component for integrands belonging to certain weighted function spaces, and that they can achieve the optimal rate of convergence. Although the lattice rules constructed this way are extensible in dimension, they are not extensible in $n$; thus when $n$ is changed the generating vector needs to be constructed anew. In this paper we introduce a new algorithm for constructing good generating vectors for embedded lattice rules which can be used for a range of $n$ while still being extensible in dimension. By using an adaptation of the fast component-by-component construction algorithm (which makes use of fast Fourier transforms), we are able to obtain good generating vectors for thousands of dimensions and millions of points, under both product weight and order-dependent weight settings, at the cost of $\mathcal{O}(d\,n(\log(n))^2)$ operations. With a sufficiently large number of points and good overall quality, these embedded lattice rules can be used for practical purposes in the same way as a low-discrepancy sequence. We show for a range of weight settings in an unanchored Sobolev space that our embedded lattice rules achieve the same (optimal) rate of convergence $\mathcal{O}(n^{-1+\delta})$, $\delta>0$, as those constructed for a fixed number of points, and that the implied constant gains only a factor of 1.30 to 1.55.
This paper provides a novel approach to the construction of good lattice rules for the integration of Korobov classes of periodic functions over the unit $s$-dimensional cube. Theorems are proved which justify the construction of good lattice rules one component at a time—that is, the lattice rule for dimension $s+1$ is obtained from the rule for dimension $s$ by searching over all possible choices of the $(s+1)$th component, while keeping all the existing components unchanged. The construction, which goes against accepted wisdom, is illustrated by numerical examples. The construction is particularly useful if the components of the integrand are ordered, in the sense that the first component is more important than the second, and so on.
This paper is a contemporary review of quasi-Monte Carlo (QMC) methods, that is, equal-weight rules for the approximate evaluation of high-dimensional integrals over the unit cube $[0,1]^s$. It first introduces the by-now standard setting of weighted Hilbert spaces of functions with square-integrable mixed first derivatives, and then indicates alternative settings, such as non-Hilbert spaces, that can sometimes be more suitable. Original contributions include the extension of the fast component-by-component (CBC) construction of lattice rules that achieve the optimal convergence order (a rate of almost $1/N$, where $N$ is the number of points, independently of dimension) to so-called "product and order dependent" (POD) weights, as seen in some recent applications. Although the paper has a strong focus on lattice rules, the function space settings are applicable to all QMC methods. Furthermore, the error analysis and construction of lattice rules can be adapted to polynomial lattice rules from the family of digital nets.
We present formulas that allow us to decompose a function $f$ of $d$ variables into a sum of $2^d$ terms $f_{\mathbf{u}}$ indexed by subsets $\mathbf{u}$ of $\{1,\ldots,d\}$, where each term $f_{\mathbf{u}}$ depends only on the variables with indices in $\mathbf{u}$. The decomposition depends on the choice of $d$ commuting projections $\{P_j\}_{j=1}^d$, where $P_j(f)$ does not depend on the variable $x_j$. We present an explicit formula for $f_{\mathbf{u}}$, which is new even for the ANOVA decomposition.
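With $P_{\mathbf{w}} = \prod_{j \in \mathbf{w}} P_j$ and $D = \{1,\ldots,d\}$, the inclusion-exclusion form of the decomposition terms consistent with the description above reads
\[
f_{\mathbf{u}} \;=\; \sum_{\mathbf{v} \subseteq \mathbf{u}} (-1)^{|\mathbf{u}|-|\mathbf{v}|}\, P_{D \setminus \mathbf{v}}(f),
\]
which for the ANOVA choice $P_j(f) = \int_0^1 f\,\mathrm{d}x_j$ recovers the classical ANOVA terms, and for point evaluation $P_j(f) = f|_{x_j = a_j}$ the anchored decomposition.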
It has been shown by Hickernell and Niederreiter that there exist generating vectors for integration lattices which yield small integration errors for $n = p, p^2, \ldots$ for all integers $p \ge 2$. This paper provides algorithms for the construction of generating vectors which are finitely extensible for $n = p, p^2, \ldots$ for all integers $p \ge 2$. The proofs which show that our algorithms yield good extensible rank-1 lattices are based on a sieve principle. Particularly fast algorithms are obtained by using the fast component-by-component construction of Nuyens and Cools. Analogous results are presented for generating vectors with small weighted star discrepancy.
The quasi-Monte Carlo method for financial valuation and other integration problems has error bounds of size $O((\log N)^k N^{-1})$, or even $O((\log N)^k N^{-3/2})$, which suggests significantly better performance than the error size $O(N^{-1/2})$ for standard Monte Carlo. But in high-dimensional problems, this benefit might not appear at feasible sample sizes. Substantial improvements from quasi-Monte Carlo integration have, however, been reported for problems such as the valuation of mortgage-backed securities, in dimensions as high as 360. The authors believe that this is due to a lower effective dimension of the integrand in those cases. This paper defines the effective dimension and shows in examples how the effective dimension may be reduced by using a Brownian bridge representation.
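A minimal sketch of the Brownian bridge representation referred to above: the first normal fixes the terminal value, and each subsequent normal fills in a midpoint, so most of the variance is carried by the leading variables (grid length assumed a power of two; this illustrates the construction, not the valuation experiments of the paper).

```python
import numpy as np

def brownian_bridge_path(z, T=1.0):
    """Map i.i.d. standard normals z (length d, d a power of 2) to a
    Brownian path on the grid t_k = k*T/d via the Brownian bridge ordering."""
    d = len(z)
    w = np.zeros(d + 1)              # w[0] = W(0) = 0
    w[d] = np.sqrt(T) * z[0]         # terminal value uses the first normal
    idx, h = 1, d
    while h > 1:
        for left in range(0, d, h):  # fill the midpoint of each interval
            right, mid = left + h, left + h // 2
            t_l, t_r, t_m = left * T / d, right * T / d, mid * T / d
            mean = ((t_r - t_m) * w[left] + (t_m - t_l) * w[right]) / (t_r - t_l)
            std = np.sqrt((t_m - t_l) * (t_r - t_m) / (t_r - t_l))
            w[mid] = mean + std * z[idx]
            idx += 1
        h //= 2
    return w
```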
We construct quasi--Monte Carlo methods to approximate the expected values of linear functionals of Petrov--Galerkin discretizations of parametric operator equations which depend on a possibly infinite sequence of parameters. Such problems arise in the numerical solution of differential and integral equations with random field inputs. We analyze the regularity of the solutions with respect to the parameters in terms of the rate of decay of the fluctuations of the input field. If $p\in (0,1]$ denotes the "summability exponent" corresponding to the fluctuations in affine-parametric families of operators, then we prove that deterministic "interlaced polynomial lattice rules" of order $\alpha = \lfloor 1/p \rfloor+1$ in $s$ dimensions with $N$ points can be constructed using a fast component-by-component algorithm, in $\mathcal{O}(\alpha\,s\, N\log N + \alpha^2\,s^2 N)$ operations, to achieve a convergence rate of $\mathcal{O}(N^{-1/p})$, with the implied constant independent of $s$. This dimension-independent convergence rate is superior to the rate $\mathcal{O}(N^{-1/p+1/2})$ for $2/3\leq p\leq 1$, which was recently established for randomly shifted lattice rules under comparable assumptions. In our analysis we use a nonstandard Banach space setting and introduce "smoothness-driven product and order dependent" weights for which we develop a new fast component-by-component construction.
We develop and justify an algorithm for the construction of quasi–Monte Carlo (QMC) rules for integration in weighted Sobolev spaces; the rules so constructed are shifted rank-1 lattice rules. The parameters characterising the shifted lattice rule are found "component-by-component": the ($d+1$)-th component of the generator vector and the shift are obtained by successive $1$-dimensional searches, with the previous $d$ components kept unchanged. The rules constructed in this way are shown to achieve a strong tractability error bound in weighted Sobolev spaces. A search for $n$-point rules with $n$ prime and all dimensions 1 to $d$ requires a total cost of $O(n^3d^2)$ operations. This may be reduced to $O(n^3d)$ operations at the expense of $O(n^2)$ storage. Numerical values of parameters and worst-case errors are given for dimensions up to 40 and $n$ up to a few thousand. The worst-case errors for these rules are found to be much smaller than the theoretical bounds.
We consider a finite element approximation of elliptic partial differential equations with random coefficients. Such equations arise, for example, in uncertainty quantification in subsurface flow modeling. Models for random coefficients frequently used in these applications, such as log-normal random fields with exponential covariance, have only very limited spatial regularity and lead to variational problems that lack uniform coercivity and boundedness with respect to the random parameter. In our analysis we overcome these challenges by a careful treatment of the model problem almost surely in the random parameter, which then enables us to prove uniform bounds on the finite element error in standard Bochner spaces. These new bounds can then be used to perform a rigorous analysis of the multilevel Monte Carlo method for these elliptic problems that lack full regularity and uniform coercivity and boundedness. To conclude, we give some numerical results that confirm the new bounds.
Abstract In this paper I present a history of tractability of continuous problems, which has its beginning in the successful numerical tests for high-dimensional integration of finance problems. Tractability results will be illustrated for two multivariate problems, integration and linear tensor product problems, in the worst case setting. My talk at FoCM'08 in Hong Kong and this paper are based on the book Tractability of Multivariate Problems, written jointly with Erich Novak. The first volume of our book has recently been published by the European Mathematical Society. Introduction Many people have recently become interested in studying the tractability of continuous problems. This area of research addresses the computational complexity of multivariate problems defined on spaces of functions of $d$ variables, where $d$ can be in the hundreds or thousands; in fact, $d$ can even be arbitrarily large. Such problems occur in numerous applications including physics, chemistry, finance, economics, and the computational sciences. As with all problems arising in information-based complexity, we want to solve multivariate problems to within $\varepsilon$, using algorithms that use finitely many function values or values of some linear functionals. Let $n(\varepsilon, d)$ be the minimal number of function values or linear functionals needed to compute the solution of the $d$-variate problem to within $\varepsilon$. For many multivariate problems defined over standard spaces of functions, $n(\varepsilon, d)$ is exponentially large in $d$.
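For reference, the standard tractability notions alluded to here (our own recapitulation of the usual definitions, not a quotation from the book): the problem is polynomially tractable if there exist constants $C \ge 0$, $p \ge 0$, and $q \ge 0$ such that
$$ n(\varepsilon, d) \le C\, \varepsilon^{-p}\, d^{\,q} \qquad \text{for all } \varepsilon \in (0,1),\ d \ge 1, $$
and strongly polynomially tractable if this holds with $q = 0$; exponential growth of $n(\varepsilon, d)$ in $d$ is the curse of dimensionality.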
This paper provides the theoretical foundation for the component-by-component (CBC) construction of randomly shifted lattice rules that are tailored to integrals over $\mathbb{R}^s$ arising from practical applications. For an integral of the form $\int_{\mathbb{R}^s} f(\boldsymbol{y}) \prod_{j=1}^{s} \phi(y_j)\,\mathrm{d}\boldsymbol{y}$ with a univariate probability density $\phi$, our general strategy is to first map the integral into the unit cube $[0,1]^s$ using the inverse of the cumulative distribution function of $\phi$, and then apply quasi-Monte Carlo (QMC) methods. However, the transformed integrand in the unit cube rarely falls within the standard QMC setting of Sobolev spaces of functions with mixed first derivatives. Therefore, a non-standard function space setting for integrands over $\mathbb{R}^s$, previously considered by Kuo, Sloan, Wasilkowski and Waterhouse (2010), is required for the analysis. Motivated by the needs of three applications, the present paper extends the theory of the aforementioned paper in several non-trivial directions, including a new error analysis for the CBC construction of lattice rules with general non-product weights, the introduction of an unanchored variant for the setting, the use of coordinate-dependent weight functions in the norm, and the strategy for fast CBC construction with POD ("product and order dependent") weights.
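A minimal sketch of this mapping strategy (our own illustration: $\phi$ taken as the standard normal density, a single random shift, and a placeholder generating vector rather than a CBC-constructed one):

```python
import numpy as np
from scipy.stats import norm

def shifted_lattice_rule_Rs(f, z, n, rng):
    """Approximate int_{R^s} f(y) prod_j phi(y_j) dy with one random shift,
    mapping lattice points from [0,1)^s to R^s via the inverse normal CDF."""
    s = len(z)
    shift = rng.random(s)
    k = np.arange(n)[:, None]
    x = (k * np.asarray(z)[None, :] / n + shift) % 1.0   # points in (0,1)^s
    y = norm.ppf(x)                                      # inverse CDF map to R^s
    return np.mean(f(y))

rng = np.random.default_rng(0)
f = lambda y: np.exp(np.sum(0.1 * y, axis=1))   # simple test integrand
z = [1, 182667, 469891]                         # placeholder generator, s = 3
est = shifted_lattice_rule_Rs(f, z, n=2**13, rng=rng)
print(est, np.exp(3 * 0.1**2 / 2))              # exact value for comparison
```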
This article is dedicated to the computation of the moments of the solution to elliptic partial differential equations with random, log-normally distributed diffusion coefficients by the quasi-Monte Carlo method. Our main result is that the convergence rate of the quasi-Monte Carlo method based on the Halton sequence for the moment computation depends only linearly on the dimensionality of the stochastic input parameters. In particular, we attain this rather mild dependence on the stochastic dimensionality without any randomization of the quasi-Monte Carlo method under consideration. For the proof of the main result, we require related regularity estimates for the solution and its powers. These estimates are also provided here. Numerical experiments are given to validate the theoretical findings.
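For concreteness, the Halton points used here are the standard radical-inverse construction; the following toy generator is our own, not the paper's code:

```python
import numpy as np

PRIMES = [2, 3, 5, 7, 11, 13]   # bases; extend the list for s > 6

def radical_inverse(k, b):
    """Van der Corput radical inverse of integer k in base b."""
    x, f = 0.0, 1.0 / b
    while k > 0:
        k, digit = divmod(k, b)
        x += digit * f
        f /= b
    return x

def halton(n, s):
    """First n Halton points in s dimensions (bases = first s primes)."""
    return np.array([[radical_inverse(k, PRIMES[j]) for j in range(s)]
                     for k in range(1, n + 1)])

print(halton(8, 2))
```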
Geostatistical simulations often require the generation of numerous realizations of a stationary Gaussian process over a regularly meshed sample grid $\Omega$. This paper shows that for many important correlation functions in geostatistics, realizations of the associated process over $m+1$ equispaced points on a line can be produced at the cost of an initial FFT of length $2m$, with each new realization requiring an additional FFT of the same length. In particular, the paper first notes that if an $(m+1)\times(m+1)$ Toeplitz correlation matrix $R$ can be embedded in a nonnegative definite $2M\times 2M$ circulant matrix $S$, exact realizations of the normal multivariate $y \sim {\cal N}(0,R)$ can be generated via FFTs of length $2M$. Theoretical results are then presented to demonstrate that for many commonly used correlation structures the minimal embedding in which $M = m$ is nonnegative definite. Extensions to simulations of stationary fields in higher dimensions are also provided and illustrated.
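In code, the embedding-and-FFT recipe looks roughly as follows (a minimal 1-D sketch under our own choices: unit grid spacing, an exponential covariance, and the minimal embedding $M = m$; real implementations test and, if needed, enlarge the embedding):

```python
import numpy as np

def circulant_embedding_samples(r, m, rng):
    """Two independent N(0, R) realizations on m+1 equispaced points,
    where R[i, j] = r(|i - j|) and r is the covariance at lags 0..m."""
    c_half = r(np.arange(m + 1))
    # first row of the 2m x 2m circulant embedding of the Toeplitz R
    c = np.concatenate([c_half, c_half[-2:0:-1]])
    lam = np.fft.fft(c).real                 # eigenvalues of the circulant
    if lam.min() < 0:
        raise ValueError("embedding not nonnegative definite; enlarge M")
    xi = rng.standard_normal(2 * m) + 1j * rng.standard_normal(2 * m)
    w = np.fft.fft(np.sqrt(lam / (4 * m)) * xi)
    # real and imaginary parts are two independent realizations
    return w.real[:m + 1], w.imag[:m + 1]

rng = np.random.default_rng(1)
y1, y2 = circulant_embedding_samples(lambda h: np.exp(-h / 10.0), m=256, rng=rng)
```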
We develop a convergence analysis of a multilevel algorithm combining higher order quasi--Monte Carlo (QMC) quadratures with general Petrov--Galerkin discretizations of countably affine parametric operator equations of elliptic and parabolic types, extending both the multilevel first order analysis in [F. Y. Kuo, Ch. Schwab, and I. H. Sloan, Found. Comput. Math., 15 (2015), pp. 411--449] and the single level higher order analysis in [J. Dick et al., SIAM J. Numer. Anal., 52 (2014), pp. 2676--2702]. We cover, in particular, both definite as well as indefinite strongly elliptic systems of partial differential equations (PDEs) in nonsmooth domains, and we discuss in detail the impact of higher order derivatives of Karhunen--Loève eigenfunctions in the parametrization of random PDE inputs on the convergence results. Based on our a priori error bounds, concrete choices of algorithm parameters are proposed in order to achieve a prescribed accuracy under minimal computational work. Problem classes and sufficient conditions on data are identified where multilevel higher order QMC Petrov--Galerkin algorithms outperform the corresponding single level versions of these algorithms. Numerical experiments confirm the theoretical results.
We show that multigrid ideas can be used to reduce the computational complexity of estimating an expected value arising from a stochastic differential equation using Monte Carlo path simulations. In the simplest case of a Lipschitz payoff and an Euler discretisation, the computational cost to achieve an accuracy of $O(\epsilon)$ is reduced from $O(\epsilon^{-3})$ to $O(\epsilon^{-2}(\log \epsilon)^2)$. The analysis is supported by numerical results showing significant computational savings.
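A minimal sketch of the multilevel idea (our own toy example: Euler discretisation of geometric Brownian motion with payoff $S_T$, coupled fine/coarse paths per level, and an ad hoc sample allocation rather than the paper's optimized one):

```python
import numpy as np

def mlmc_level(l, n_samples, rng, T=1.0, S0=1.0, mu=0.05, sigma=0.2):
    """Mean of P_l - P_{l-1} (with P_{-1} := 0) for the payoff P = S_T."""
    M = 2 ** l                        # fine time steps on level l
    dt = T / M
    dW = np.sqrt(dt) * rng.standard_normal((n_samples, M))
    Sf = np.full(n_samples, S0)
    Sc = np.full(n_samples, S0)
    for i in range(M):                # fine Euler path
        Sf += mu * Sf * dt + sigma * Sf * dW[:, i]
    if l == 0:
        return np.mean(Sf)
    for i in range(0, M, 2):          # coarse path uses summed increments
        dWc = dW[:, i] + dW[:, i + 1]
        Sc += mu * Sc * (2 * dt) + sigma * Sc * dWc
    return np.mean(Sf - Sc)

rng = np.random.default_rng(2)
L = 5
est = sum(mlmc_level(l, n_samples=20000 // (2 ** l) + 1000, rng=rng)
          for l in range(L + 1))
print(est)   # compare with exact E[S_T] = S0 * exp(mu * T)
```

The telescoping sum $E[P_L] = \sum_{\ell=0}^{L} E[P_\ell - P_{\ell-1}]$ is what lets most samples be taken on the cheap coarse levels.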
This book serves well as an introduction to the more theoretical aspects of the use of spline models. It develops a theory and practice for the estimation of functions from noisy data on functionals. The simplest example is the estimation of a smooth curve, given noisy observations on a finite number of its values. Convergence properties, data-based smoothing parameter selection, confidence intervals, and numerical methods are established which are appropriate to a number of problems within this framework. Methods for including side conditions and other prior information in solving ill-posed inverse problems are provided. Data involving samples of random variables with Gaussian, Poisson, binomial, and other distributions are treated in a unified optimization context. Experimental design questions, i.e., which functionals should be observed, are studied in a general context. Extensions to distributed parameter system identification problems are made by considering implicitly defined functionals.
This is a note on Math. Comp. 82 (2013), 383–400. We first report a mistake, in that the main result Theorem 3.1, though correct, does not as claimed apply to the Asian option pricing problem. This is because assumption (3.3) in the theorem is not satisfied by the Asian option pricing problem. In this note we present a strengthened theorem, which removes that assumption. The new theorem is immediately applicable to the Asian option pricing problem with the standard and Brownian bridge constructions. Thus the option pricing conclusions of our original paper stand.
High-dimensional integrals are usually solved with Monte Carlo algorithms although theory suggests that low-discrepancy algorithms are sometimes superior. We report on numerical testing which compares low-discrepancy and Monte Carlo algorithms on the evaluation of financial derivatives. The testing is performed on a Collateralized Mortgage Obligation (CMO) which is formulated as the computation of ten integrals of dimension up to 360. We tested two low-discrepancy algorithms (Sobol and Halton) and two randomized algorithms (classical Monte Carlo and Monte Carlo combined with antithetic variables). We conclude that for this CMO the Sobol algorithm is always superior to the other algorithms. We believe that it will be advantageous to use the Sobol algorithm for many other types of financial derivatives. Our conclusion regarding the superiority of the Sobol algorithm also holds when a rather small number of sample points are used, an important case in practice. We have built a software system called FINDER for computing high-dimensional integrals. FINDER runs on a heterogeneous network of workstations under PVM 3.2 (Parallel Virtual Machine). Since workstations are ubiquitous, this is a cost-effective way to do large computations fast. The measured speedup is at least $0.9N$ for $N$ workstations, $N \leq 25$. The software can also be used to compute high-dimensional integrals on a single workstation. A routine for generating Sobol points may be found, for example, in Numerical Recipes in C by Press et al. However, we incorporated major improvements in FINDER, and we stress that the results reported in this paper were obtained using FINDER. One of the improvements was developing the table of primitive polynomials and initial direction numbers for dimensions up to 360.
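FINDER itself is not reproduced here; as a stand-in, the following snippet compares Sobol points with plain Monte Carlo on a toy product integrand using SciPy's Sobol generator (our choice of library and integrand, not the paper's CMO):

```python
import numpy as np
from scipy.stats import qmc

d, m = 64, 12                   # dimension, 2^m points
f = lambda x: np.prod(1.0 + (x - 0.5) / np.arange(1, d + 1) ** 2, axis=1)
# the exact integral of f over [0,1]^d is 1

sob = qmc.Sobol(d=d, scramble=True, seed=0).random_base2(m)
mc = np.random.default_rng(0).random((2 ** m, d))
print("Sobol error:", abs(np.mean(f(sob)) - 1.0))
print("MC error:   ", abs(np.mean(f(mc)) - 1.0))
```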
We define a Walsh space which contains all functions whose partial mixed derivatives up to order $\delta \ge 1$ exist and have finite variation. In particular, for a suitable choice of parameters, this implies that certain Sobolev spaces are contained in these Walsh spaces. For this Walsh space we then show that quasi–Monte Carlo rules based on digital $(t,\alpha,s)$-sequences achieve the optimal rate of convergence of the worst-case error for numerical integration. This rate of convergence is also optimal for the subspace of smooth functions. Explicit constructions of digital $(t,\alpha,s)$-sequences are given, hence providing explicit quasi–Monte Carlo rules which achieve the optimal rate of convergence of the integration error for arbitrarily smooth functions.
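The digit interlacing behind such higher-order constructions can be sketched as follows (a toy base-2, finite-precision version under our own conventions; it maps an ordinary net in $\alpha s$ dimensions to an order-$\alpha$ net in $s$ dimensions):

```python
import numpy as np

def interlace(xs, alpha, bits=32):
    """Interlace the base-2 digits of alpha numbers in [0,1) into one number:
    output digit alpha*(i-1)+j is the i-th digit of input j."""
    digits = []
    for i in range(1, bits // alpha + 1):       # digit position i of each input
        for x in xs:
            digits.append(int(x * 2 ** i) & 1)  # i-th binary digit of x
    return sum(d * 2.0 ** -(k + 1) for k, d in enumerate(digits))

def higher_order_points(points, alpha):
    """Map an (n, alpha*s) array of net points to an (n, s) order-alpha net."""
    n, D = points.shape
    s = D // alpha
    out = np.empty((n, s))
    for r in range(n):
        for j in range(s):
            out[r, j] = interlace(points[r, j * alpha:(j + 1) * alpha], alpha)
    return out
```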