
In this paper we propose and analyse a method for estimating three quantities related to an Asian option: the fair price, the cumulative distribution function, and the probability density. The method involves preintegration with respect to one well-chosen integration variable to obtain a smooth function of the remaining variables, followed by the application of a tailored lattice quasi-Monte Carlo rule to integrate over the remaining variables.
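
To make the lattice step concrete, the following is a minimal sketch of a randomly shifted rank-1 lattice rule in Python. The generating vector, the toy integrand, and the parameter choices are purely illustrative and are not the tailored rule constructed in the paper; in the paper the integrand would be the preintegrated (smoothed) payoff.

```python
import numpy as np

def shifted_lattice_points(n, z, rng):
    """Randomly shifted rank-1 lattice: x_i = frac(i*z/n + shift), i = 0..n-1."""
    i = np.arange(n).reshape(-1, 1)
    shift = rng.random(len(z))
    return ((i * np.asarray(z)) / n + shift) % 1.0

def lattice_qmc(f, n, z, n_shifts=8, seed=0):
    """Average f over several random shifts; the spread gives an error estimate."""
    rng = np.random.default_rng(seed)
    estimates = [f(shifted_lattice_points(n, z, rng)).mean() for _ in range(n_shifts)]
    return np.mean(estimates), np.std(estimates) / np.sqrt(n_shifts)

# Toy smooth integrand on [0,1]^4 (stands in for the preintegrated function).
f = lambda x: np.prod(1.0 + 0.2 * (x - 0.5), axis=1)
z = [1, 433461, 315689, 441789]      # illustrative generating vector, not from the paper
print(lattice_qmc(f, n=2**16, z=z))  # exact value of the integral is 1
```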
The kernel interpolant in a reproducing kernel Hilbert space is optimal in the worst-case sense among all approximations of a function using the same set of function values. In this paper, we compare two search criteria to construct lattice point sets for use in lattice-based kernel approximation. The first candidate, $\mathcal{P}_n^*$, is based on the power function that appears in the machine learning literature. The second, $\mathcal{S}_n^*$, is a search criterion used for generating lattices for approximation using truncated Fourier series. We find that the empirical difference in error between the lattices constructed using $\mathcal{P}_n^*$ and $\mathcal{S}_n^*$ is marginal. The criterion $\mathcal{S}_n^*$ is preferred as it is computationally more efficient and has a proven error bound.
In this paper, we apply quasi-Monte Carlo (QMC) methods with an initial preintegration step to estimate cumulative distribution functions and probability density functions in uncertainty quantification (UQ). The distribution and density functions correspond to a quantity of interest involving the solution to an elliptic partial differential equation (PDE) with a lognormally distributed coefficient and a normally distributed source term. There is extensive previous work on using QMC to compute expected values in UQ, which has proven very successful in tackling a range of different PDE problems. However, the use of QMC for density estimation applied to UQ problems will be explored here for the first time. Density estimation presents a more difficult challenge compared to computing the expected value, due to discontinuities present in the integral formulations of both the distribution and density. Our strategy is to use preintegration to eliminate the discontinuity by integrating out a carefully selected random parameter, so that QMC can be used to approximate the remaining integral. First, we establish regularity results for the PDE quantity of interest that are required for smoothing by preintegration to be effective. We then show that an $N$-point lattice rule can be constructed for the integrands corresponding to the distribution and density, such that after preintegration the QMC error is of order $\mathcal{O}(N^{-1+\epsilon})$ for arbitrarily small $\epsilon>0$. This is the same rate achieved for computing the expected value of the quantity of interest. Numerical results are presented to reaffirm our theory.
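
As a rough illustration of the smoothing effect of preintegration for a distribution function, the sketch below uses a toy linear quantity of interest $Q(\boldsymbol{y}) = \boldsymbol{a} \cdot \boldsymbol{y}$ with independent standard normal inputs, not the PDE quantity of interest of the paper: integrating the indicator $\{Q \le t\}$ exactly over the first variable leaves a smooth function of the remaining variables, which is then integrated by a randomly shifted lattice rule mapped through the inverse normal CDF. The generating vector is illustrative only.

```python
import numpy as np
from scipy.stats import norm

# Toy quantity of interest Q(y) = a.y with y ~ N(0, I); the paper's Q comes from a PDE.
a = np.array([1.0, 0.5, 0.25, 0.125])
t = 0.3                                   # point at which the CDF is evaluated

def preintegrated_indicator(y_rest):
    """Integrate the indicator {Q <= t} exactly over y_1: smooth in y_2,...,y_s."""
    return norm.cdf((t - y_rest @ a[1:]) / a[0])

def cdf_estimate(n, z, seed=0):
    rng = np.random.default_rng(seed)
    i = np.arange(n).reshape(-1, 1)
    pts = ((i * np.asarray(z)) / n + rng.random(len(z))) % 1.0  # shifted lattice in [0,1]^{s-1}
    y_rest = norm.ppf(pts)                                      # map to N(0,1) variables
    return preintegrated_indicator(y_rest).mean()

print(cdf_estimate(n=2**14, z=[1, 182667, 469891]))   # illustrative generating vector
print(norm.cdf(t / np.linalg.norm(a)))                # exact value: Q ~ N(0, |a|^2)
```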
This is a note on Math. Comp. 82 (2013), 383–400. We first report a mistake, in that the main result Theorem 3.1, though correct, does not as claimed apply to the Asian option pricing problem. This is because assumption (3.3) in the theorem is not satisfied by the Asian option pricing problem. In this note we present a strengthened theorem, which removes that assumption. The new theorem is immediately applicable to the Asian option pricing problem with the standard and Brownian bridge constructions. Thus the option pricing conclusions of our original paper stand.
We prove that a variant of the classical Sobolev space of first-order dominating mixed smoothness is equivalent (under a certain condition) to the unanchored ANOVA space on $\mathbb{R}^d$, for $d \geq 1$. Both spaces are Hilbert spaces involving weight functions, which determine the behaviour as different variables tend to $\pm\infty$, and weight parameters, which represent the influence of different subsets of variables. The unanchored ANOVA space on $\mathbb{R}^d$ was initially introduced by Nichols and Kuo in 2014 to analyse the error of quasi-Monte Carlo (QMC) approximations for integrals on unbounded domains; whereas the classical Sobolev space of dominating mixed smoothness was used as the setting in a series of papers by Griebel, Kuo and Sloan on the smoothing effect of integration, in an effort to develop a rigorous theory on why QMC methods work so well for certain non-smooth integrands with kinks or jumps coming from option pricing problems. In this same setting, Griewank, Kuo, Leövey and Sloan in 2018 subsequently extended these ideas by developing a practical smoothing by preintegration technique to approximate integrals of such functions with kinks or jumps. We first prove the equivalence in one dimension (itself a non-trivial task), before following a similar, but more complicated, strategy to prove the equivalence for general dimensions. As a consequence of this equivalence, we analyse applying QMC combined with a preintegration step to approximate the fair price of an Asian option, and prove that the error of such an approximation using $N$ points converges at a rate close to $1/N$.
Estimating the unknown density from which a given independent sample originates is more difficult than estimating the mean, in the sense that, for the best popular nonparametric density estimators, the mean integrated square error converges more slowly than the canonical rate of O(1/n). When the sample is generated from a simulation model and we have control over how this is done, we can do better. We examine an approach in which conditional Monte Carlo yields, under certain conditions, a random conditional density that is an unbiased estimator of the true density at any point. By averaging independent replications, we obtain a density estimator that converges at a faster rate than the usual ones. Moreover, combining this new type of estimator with randomized quasi–Monte Carlo to generate the samples typically brings a larger improvement on the error and convergence rate than for the usual estimators, because the new estimator is smoother as a function of the underlying uniform random numbers. Summary of Contribution: Stochastic simulation is commonly used to estimate the mathematical expectation of some output random variable X together with a confidence interval for this expectation. But the simulations usually provide information to do much more, such as estimating the entire distribution (or density) of X. Histograms are routinely provided by standard simulation software, but they are very primitive density estimators. Kernel density estimators perform better, but they are trickier to use, have bias, and their mean square error converges more slowly than the canonical rate of O(1/n) with n independent samples. In this paper, we explain how to construct unbiased density estimators that converge at the canonical rate, and even much faster when combined with randomized quasi–Monte Carlo. The key idea is to use conditional Monte Carlo to hide appropriate information and obtain a computable (random) conditional density, which acts (under certain conditions) as an unbiased density estimator. Moreover, this sample density is typically smoother than the classic density estimators as a function of the underlying uniform random numbers, so it can get along much better with randomized quasi–Monte Carlo methods. This offers an opportunity to further improve the O(1/n) rate. We observe rates near O(1/n^2) in some examples, and we give conditions under which this type of rate provably holds. The proposed approach is simple, easy to implement, and extremely effective, so it provides a significant addition to the stochastic simulation toolbox.
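
A minimal sketch of the conditional density estimator described above, for a toy model X = Y + Z with Y exponential and Z standard normal (an assumption made here purely for illustration): hiding Z and conditioning on Y gives the computable conditional density f_Z(x - Y), whose average over independent replications of Y is an unbiased estimator of the density of X, with no kernel bandwidth to tune. The sketch uses plain Monte Carlo; the further gains discussed in the paper come from generating the Y values by randomized quasi–Monte Carlo.

```python
import numpy as np
from scipy.stats import norm

# Toy model: X = Y + Z with Y ~ Exp(1) and Z ~ N(0,1), independent.
# Hiding Z and conditioning on Y gives the computable conditional density
# f_Z(x - Y), an unbiased estimator of the density of X at every point x.
rng = np.random.default_rng(0)
n = 10_000
Y = rng.exponential(size=n)

xs = np.linspace(-2, 6, 81)
density_estimate = norm.pdf(xs[None, :] - Y[:, None]).mean(axis=0)

# Closed-form density of X (exponentially modified Gaussian) for comparison.
exact = np.exp(0.5 - xs) * norm.cdf(xs - 1)
print(np.max(np.abs(density_estimate - exact)))   # small; no bandwidth, no bias
```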
This is a review article on lattice methods for multiple integration over the unit hypercube, with a variance-reduction viewpoint. It also contains some new results and ideas. The aim is to examine the basic principles supporting these methods and how they can be used effectively for the simulation models that are typically encountered in the area of management science. These models can usually be reformulated as integration problems over the unit hypercube with a large (sometimes infinite) number of dimensions. We examine selection criteria for the lattice rules and suggest criteria which take into account the quality of the projections of the lattices over selected low-dimensional subspaces. The criteria are strongly related to those used for selecting linear congruential and multiple recursive random number generators. Numerical examples illustrate the effectiveness of the approach.
This paper provides the theoretical foundation for the component-by-component (CBC) construction of randomly shifted lattice rules that are tailored to integrals over $\mathbb{R}^s$ arising from practical applications. For an integral of the form $\int_{\mathbb{R}^s} f(\boldsymbol{y}) \prod_{j=1}^s \phi(y_j)\,\mathrm{d}\boldsymbol{y}$ with a univariate probability density $\phi$, our general strategy is to first map the integral into the unit cube $[0,1]^s$ using the inverse of the cumulative distribution function of $\phi$, and then apply quasi-Monte Carlo (QMC) methods. However, the transformed integrand in the unit cube rarely falls within the standard QMC setting of Sobolev spaces of functions with mixed first derivatives. Therefore, a non-standard function space setting for integrands over $\mathbb{R}^s$, previously considered by Kuo, Sloan, Wasilkowski and Waterhouse (2010), is required for the analysis. Motivated by the needs of three applications, the present paper extends the theory of the aforementioned paper in several non-trivial directions, including a new error analysis for the CBC construction of lattice rules with general non-product weights, the introduction of an unanchored variant for the setting, the use of coordinate-dependent weight functions in the norm, and the strategy for fast CBC construction with POD ("product and order dependent") weights.
In this paper, we analyze a method for approximating the distribution function and density of a random variable that depends in a nontrivial way on a possibly high number of independent random variables, each with support on the whole real line. Starting with the integral formulations of the distribution and density, the method involves smoothing the original integrand by preintegration with respect to one suitably chosen variable, and then applying a suitable quasi–Monte Carlo method to compute the integral of the resulting smoother function. Interpolation is then used to reconstruct the distribution or density on an interval. The preintegration technique is a special case of conditional sampling, a method that has previously been applied to a wide range of problems in statistics and computational finance. In particular, the pointwise approximation studied in this work is a specific case of the conditional density estimator previously considered by L'Ecuyer, Puchhammer, and Ben Abdellah [INFORMS J. Comput., 34 (2022), pp. 1729–1748]. Our theory provides a rigorous regularity analysis of the preintegrated function, which is then used to show that the errors of the pointwise and interpolated estimators can both achieve nearly first-order convergence. Numerical results support the theory. Keywords: quasi–Monte Carlo methods, density estimation, preintegration, conditional sampling. MSC codes: 65D30, 65D32, 62G07.
Preintegration is an extension of conditional Monte Carlo to quasi–Monte Carlo and randomized quasi–Monte Carlo. Conditioning can reduce but not increase the variance in Monte Carlo. For quasi–Monte Carlo it can bring about improved regularity of the integrand with potentially greatly improved accuracy. We show theoretically that, just as in Monte Carlo, preintegration can reduce but not increase the variance when one uses scrambled net integration. Preintegration is ordinarily done by integrating out one of the input variables to a function. In the common case of a Gaussian integral one can also preintegrate over any linear combination of variables. For continuous functions that are differentiable almost everywhere, we propose to choose the linear combination by the first principal component in an active subspace decomposition. We show that the lead eigenvector in an active subspace decomposition is closely related to the vector that maximizes a computationally intractable criterion using a Sobol' index. A numerical example of Asian option pricing finds that this active subspace preintegration strategy is competitive with preintegrating the first principal component of the Brownian motion, which is known to be very effective. The new method outperforms others on some basket and rainbow options where there is no generally accepted counterpart to the principal components construction. Keywords: conditional Monte Carlo, option pricing, quasi–Monte Carlo, randomized quasi-Monte Carlo. MSC codes: 65C05, 65D30.
Discontinuities and high dimensionality are common in the problems of pricing and hedging of derivative securities. Both factors have a tremendous impact on the accuracy of the quasi-Monte Carlo (QMC) method. An ideal approach to improve the QMC method is to transform the functions to make them smoother and of smaller effective dimension. This paper develops a two-step procedure to tackle the challenging problems of both the discontinuity and the high dimensionality concurrently. In the first step, we adopt the smoothing method to remove the discontinuities of the payoff function, improving the smoothness. In the second step, we propose a general dimension reduction method (called the CQR method) to reduce the effective dimension such that the better quality of QMC points in their initial dimensions can be fully exploited. The CQR method is based on the combination of the $k$-means clustering algorithm and the QR decomposition. The $k$-means clustering algorithm, a classical algorithm of machine learning, is used to find some representative linear structures inherent in the function, which are used to construct a matching function of the smoothed function. The matching function serves as an approximation of the smoothed function but has a simpler form, and it is used to find the required transformation. Extensive numerical experiments on option pricing and Greeks estimation demonstrate that the combination of the smoothing method and dimension reduction in QMC achieves substantially larger variance reduction, even in high dimensions, than dealing with either the discontinuities or the high dimensionality alone.
We develop a conditional sampling scheme for pricing knock-out barrier options under the Linear Transformations (LT) algorithm from Imai and Tan (2006). We compare our new method to an existing conditional Monte Carlo scheme from Glasserman and Staum (2001), and show that a substantial variance reduction is achieved. We extend the method to allow pricing knock-in barrier options and introduce a root-finding method to obtain a further variance reduction. The effectiveness of the new method is supported by numerical results.
In this paper quasi-Monte Carlo (QMC) methods are applied to a class of elliptic partial differential equations (PDEs) with random coefficients, where the random coefficient is parametrized by a countably infinite number of terms in a Karhunen–Loève expansion. Models of this kind appear frequently in numerical models of physical systems, and in uncertainty quantification. The method uses a QMC method to estimate expected values of linear functionals of the exact or approximate solution of the PDE, with the expected value considered as an infinite dimensional integral in the parameter space corresponding to the randomness induced by the random coefficient. The error analysis, arguably the first rigorous application of the modern theory of QMC in weighted spaces, exploits the regularity with respect to both the physical variables (the variables in the physical domain) and the parametric variables (the parameters corresponding to randomness). In the weighted-space theory of QMC methods, "weights," describing the varying difficulty of different subsets of the variables, are introduced in order to make sure that the high-dimensional integration problem is tractable. It turns out that the weights arising from the present analysis are of a nonstandard kind, being of neither product nor order dependent form, but instead a hybrid of the two: we refer to these as "product and order dependent weights," or "POD weights" for short. These POD weights are of a simple enough form to permit a component-by-component construction of a randomly shifted lattice rule that has optimal convergence properties for the given weighted space setting. If the terms in the expansion for the random coefficient have an appropriate decay property, and if we choose POD weights that minimize a certain upper bound on the error, then the solution of the PDE belongs to the joint function space needed for the analysis, and the QMC error (in the sense of a root-mean-square error averaged over shifts) is of order $\mathcal{O}(N^{-1+\delta})$ for arbitrary $\delta>0$, where $N$ denotes the number of sampling points in the parameter space. Moreover, this convergence rate and slower convergence rates under more relaxed conditions are achieved under conditions similar to those found recently by Cohen, De Vore, and Schwab [Found. Comput. Math., 10 (2010), pp. 615–646] for the same model by best $N$-term approximation. We analyze the impact of a finite element (FE) discretization on the overall efficiency of the scheme, in terms of accuracy versus overall cost, with results that are comparable to those of the best $N$-term approximation.
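
For readers unfamiliar with the term, POD weights are commonly parametrized as $\gamma_{\mathfrak{u}} = \Gamma_{|\mathfrak{u}|} \prod_{j \in \mathfrak{u}} \gamma_j$ for a subset $\mathfrak{u}$ of coordinate indices, combining an order-dependent factor with a product factor. The small sketch below evaluates weights of this form; the choices of $\Gamma$ and $\gamma_j$ are placeholders for illustration and are not the weights derived from the error bound in the paper.

```python
import math
from itertools import combinations

def pod_weight(u, Gamma=math.factorial, gamma=lambda j: 0.9 ** j):
    """POD ("product and order dependent") weight of a subset u of coordinate indices:
    gamma_u = Gamma(|u|) * prod_{j in u} gamma(j).  Gamma and gamma are illustrative."""
    w = Gamma(len(u))
    for j in u:
        w *= gamma(j)
    return w

# Weights for all subsets of {1,...,4} of order 1 and 2.
for order in range(1, 3):
    for u in combinations(range(1, 5), order):
        print(u, pod_weight(u))
```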
This paper is a contemporary review of QMC ('quasi-Monte Carlo') methods, that is, equal-weight rules for the approximate evaluation of high-dimensional integrals over the unit cube $[0,1]^s$, where $s$ may be large, or even infinite. After a general introduction, the paper surveys recent developments in lattice methods, digital nets, and related themes. Among those recent developments are methods of construction of both lattices and digital nets, to yield QMC rules that have a prescribed rate of convergence for sufficiently smooth functions, and ideally also guaranteed slow growth (or no growth) of the worst-case error as $s$ increases. A crucial role is played by parameters called 'weights', since a careful use of the weight parameters is needed to ensure that the worst-case errors in an appropriately weighted function space are bounded, or grow only slowly, as the dimension $s$ increases. Important tools for the analysis are weighted function spaces, reproducing kernel Hilbert spaces, and discrepancy, all of which are discussed with an appropriate level of detail.
We consider a finite element approximation of elliptic partial differential equations with random coefficients. Such equations arise, for example, in uncertainty quantification in subsurface flow modeling. Models for random coefficients frequently used in these applications, such as log-normal random fields with exponential covariance, have only very limited spatial regularity and lead to variational problems that lack uniform coercivity and boundedness with respect to the random parameter. In our analysis we overcome these challenges by a careful treatment of the model problem almost surely in the random parameter, which then enables us to prove uniform bounds on the finite element error in standard Bochner spaces. These new bounds can then be used to perform a rigorous analysis of the multilevel Monte Carlo method for these elliptic problems that lack full regularity and uniform coercivity and boundedness. To conclude, we give some numerical results that confirm the new bounds.
We describe here a library aimed at automating the solution of partial differential equations using the finite element method. By employing novel techniques for automated code generation, the library combines a high level of expressiveness with efficient computation. Finite element variational forms may be expressed in near mathematical notation, from which low-level code is automatically generated, compiled and seamlessly integrated with efficient implementations of computational meshes and high-performance linear algebra. Easy-to-use object-oriented interfaces to the library are provided in the form of a C++ library and a Python module. This paper discusses the mathematical abstractions and methods used in the design of the library and its implementation. A number of examples are presented to demonstrate the use of the library in application code.
The pricing problem for a continuous path-dependent option results in a path integral which can be recast into an infinite-dimensional integration problem. We study the ANOVA decomposition of a function of infinitely many variables arising from the Brownian bridge formulation of the continuous option pricing problem. We show that all resulting ANOVA terms can be smooth in this infinite-dimensional case, despite the non-smoothness of the underlying payoff function. This result may explain why quasi-Monte Carlo methods or sparse grid quadrature techniques work for such option pricing problems.
Quasi–Monte Carlo (QMC) integration of output functionals of solutions of the diffusion problem with a log-normal random coefficient is considered. The random coefficient is assumed to be given by an exponential of a Gaussian random field that is represented by a series expansion of some system of functions. Graham et al. (2015, Quasi-Monte Carlo finite element methods for elliptic PDEs with lognormal random coefficients. Numer. Math., 131, 329–368) developed a lattice-based QMC theory for this problem and established a quadrature error decay rate ≈ 1 with respect to the number of quadrature points. The key assumption there was a suitable summability condition on the aforementioned system of functions. As a consequence, product-order-dependent weights were used to construct the lattice rule. In this paper, a different assumption on the system is considered. This assumption, originally considered by Bachmayr et al. (2017c, Sparse polynomial approximation of parametric elliptic PDEs. Part I: affine coefficients. ESAIM Math. Model. Numer. Anal., 51, 321–339) to utilise the locality of support of basis functions in the context of polynomial approximations applied to the same type of the diffusion problem, is shown to work well in the same lattice-based QMC method considered by Graham et al.: the assumption leads us to product weights, which enables the construction of the QMC method with a smaller computational cost than Graham et al. A quadrature error decay rate ≈ 1 is established, and the theory developed here is applied to a wavelet stochastic model. By a characterisation of the Besov smoothness, it is shown that a wide class of path smoothness can be treated with this framework.
This paper provides the theoretical foundation for the construction of lattice algorithms for multivariate $L_2$ approximation in the worst case setting, for functions in a periodic space with general weight parameters. Our construction leads to an error bound that achieves the best possible rate of convergence for lattice algorithms. This work is motivated by PDE applications in which bounds on the norm of the functions to be approximated require special forms of weight parameters (so-called POD weights or SPOD weights), as opposed to the simple product weights covered by the existing literature. Our result can be applied to other lattice-based approximation algorithms, including kernel methods or splines.
In a recent paper by the same authors, we provided a theoretical foundation for the component-by-component (CBC) construction of lattice algorithms for multivariate $L_2$ approximation in the worst case setting, for functions in a periodic space with general weight parameters. The construction led to an error bound that achieves the best possible rate of convergence for lattice algorithms. Previously available literature covered only weights of a simple form commonly known as product weights.
We propose and analyze a quasi-Monte Carlo (QMC) algorithm for efficient simulation of wave propagation modeled by the Helmholtz equation in a bounded region in which the refractive index is random and spatially heterogeneous. Our focus is on the case in which the region can contain multiple wavelengths. We bypass the usual sign-indefiniteness of the Helmholtz problem by switching to an alternative sign-definite formulation recently developed by Ganesh and Morgenstern [Numer. Algorithms, 83 (2020), pp. 1441–1487]. The price to pay is that the regularity analysis required for QMC methods becomes much more technical. Nevertheless we obtain a complete analysis with error comprising stochastic dimension truncation error, finite element error, and cubature error, with results comparable to those obtained for the diffusion problem.
In the first part we have shown that, for $L_2$-approximation of functions from a separable Hilbert space in the worst-case setting, linear algorithms based on function values are almost as powerful as arbitrary linear algorithms if the linear widths are square-summable. That is, they achieve the same polynomial rate of convergence. In this sequel, we prove a similar result for separable Banach spaces and other classes of functions.
When approximating the expectations of a functional of a solution to a stochastic differential equation, the numerical performance of deterministic quadrature methods, such as sparse grid quadrature and quasi-Monte Carlo (QMC) methods, may critically depend on the regularity of the integrand. To overcome this issue and improve the regularity structure of the problem, we consider cases in which analytic smoothing (bias-free mollification) cannot be performed and introduce a novel numerical smoothing approach by combining a root-finding method with a one-dimensional numerical integration with respect to a single well-chosen variable. We prove that, under appropriate conditions, the resulting function of the remaining variables is highly smooth, potentially affording the improved efficiency of adaptive sparse grid quadrature (ASGQ) and QMC methods, particularly when combined with hierarchical transformations (i.e., the Brownian bridge and Richardson extrapolation on the weak error). This approach facilitates the effective treatment of high dimensionality. Our study is motivated by option pricing problems, focusing on dynamics where the discretization of the asset price is necessary. Based on our analysis and numerical experiments, we demonstrate the advantages of combining numerical smoothing with the ASGQ and QMC methods over these methods without smoothing and over the Monte Carlo approach. Finally, our approach is generic and can be applied to solve a broad class of problems, particularly approximating distribution functions, computing financial Greeks, and estimating risk quantities.
This paper deals with the kernel-based approximation of a multivariate periodic function by interpolation at the points of an integration lattice, a setting that, as pointed out by Zeng et al. (Monte Carlo and Quasi-Monte Carlo Methods 2004, Springer, New York, 2006) and Zeng et al. (Constr. Approx. 30: 529–555, 2009), allows fast evaluation by fast Fourier transform, so avoiding the need for a linear solver. The main contribution of the paper is the application to the approximation problem for uncertainty quantification of elliptic partial differential equations, with the diffusion coefficient given by a random field that is periodic in the stochastic variables, in the model proposed recently by Kaarnioja et al. (SIAM J Numer Anal 58(2): 1068–1091, 2020). The paper gives a full error analysis, and full details of the construction of lattices needed to ensure a good (but inevitably not optimal) rate of convergence and an error bound independent of dimension. Numerical experiments support the theory.
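
A one-dimensional sketch of the fast Fourier transform structure referred to above, under the assumption of a shift-invariant periodic (Korobov-type) kernel: at the equispaced points $i/n$, the one-dimensional case of a rank-1 lattice, the interpolation matrix is circulant, so the interpolation coefficients follow from a single FFT instead of a dense linear solve. The kernel and the test function below are illustrative only.

```python
import numpy as np

def korobov_kernel(t):
    """Shift-invariant periodic kernel k(t) = 1 + 2*pi^2*B_2({t});
    its Fourier coefficients (1 and 1/h^2) are positive, so k is positive definite."""
    frac = t % 1.0
    return 1.0 + 2.0 * np.pi**2 * (frac**2 - frac + 1.0 / 6.0)

def lattice_kernel_interpolant(f_vals):
    """Interpolate at the 1D lattice x_i = i/n.  The Gram matrix k((i-j)/n) is circulant,
    so the coefficients come from one FFT instead of a dense linear solve."""
    n = len(f_vals)
    first_col = korobov_kernel(np.arange(n) / n)
    coeffs = np.fft.ifft(np.fft.fft(f_vals) / np.fft.fft(first_col)).real
    return lambda x: korobov_kernel(np.asarray(x)[..., None] - np.arange(n) / n) @ coeffs

n = 64
f = lambda x: np.exp(np.sin(2 * np.pi * x))          # smooth 1-periodic test function
s = lattice_kernel_interpolant(f(np.arange(n) / n))
xs = np.linspace(0, 1, 1000, endpoint=False)
print(np.max(np.abs(s(xs) - f(xs))))                 # small; exact at the lattice points
```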
The quasi-Monte Carlo method for financial valuation and other integration problems has error bounds of size $O((\log N)^k N^{-1})$, or even $O((\log N)^k N^{-3/2})$, which suggests significantly better performance than the error size $O(N^{-1/2})$ for standard Monte Carlo. But in high-dimensional problems, this benefit might not appear at feasible sample sizes. Substantial improvements from quasi-Monte Carlo integration have, however, been reported for problems such as the valuation of mortgage-backed securities, in dimensions as high as 360. The authors believe that this is due to a lower effective dimension of the integrand in those cases. This paper defines the effective dimension and shows in examples how the effective dimension may be reduced by using a Brownian bridge representation.
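
A minimal sketch of the Brownian bridge representation mentioned above: the first normal input fixes the terminal value of the path and later inputs fill in midpoints level by level, so most of the variance is carried by the earliest coordinates, which is what lowers the effective dimension seen by a QMC rule. The time grid and parameters are illustrative.

```python
import numpy as np

def brownian_bridge_path(normals, T=1.0):
    """Build a Brownian path on 2^m equal time steps from i.i.d. N(0,1) inputs:
    the terminal value comes first, then midpoints level by level, so the
    earliest inputs carry most of the path's variance."""
    d = len(normals)                        # must be a power of two
    W = np.zeros(d + 1)
    W[d] = np.sqrt(T) * normals[0]          # terminal value uses the first input
    k, idx = d, 1
    while k > 1:
        h = k // 2
        for left in range(0, d, k):         # fill midpoints of all current intervals
            mid, right = left + h, left + k
            W[mid] = 0.5 * (W[left] + W[right]) + np.sqrt(h * T / (2 * d)) * normals[idx]
            idx += 1
        k = h
    return W[1:]                            # path values at times T/d, 2T/d, ..., T

rng = np.random.default_rng(0)
path = brownian_bridge_path(rng.standard_normal(16))
print(path.round(3))
```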
For a class $F$ of complex-valued functions on a set $D$, we denote by $g_n(F)$ its sampling numbers, i.e., the minimal worst-case error on $F$, measured in $L_2$, that can be achieved with a recovery algorithm based on $n$ function evaluations. We prove that there is a universal constant $c\in\mathbb{N}$ such that, if $F$ is the unit ball of a separable reproducing kernel Hilbert space, then \[ g_{cn}(F)^2 \,\le\, \frac{1}{n}\sum_{k\geq n} d_k(F)^2, \] where $d_k(F)$ are the Kolmogorov widths (or approximation numbers) of $F$ in $L_2$. We also obtain similar upper bounds for more general classes $F$, including all compact subsets of the space of continuous functions on a bounded domain $D\subset \mathbb{R}^d$, and show that these bounds are sharp by providing examples where the converse inequality holds up to a constant. The results rely on the solution to the Kadison-Singer problem, which we extend to the subsampling of a sum of infinite rank-one matrices.
We study the recovery of functions in various norms, including $L_p$ with $1\le p\le\infty$, based on function evaluations. We obtain worst case error bounds for general classes of functions in terms of the best $L_2$-approximation from a given nested sequence of subspaces and the Christoffel function of these subspaces. In the case $p=\infty$, our results imply that linear sampling algorithms are optimal up to a constant factor for many reproducing kernel Hilbert spaces.
This paper is concerned with function reconstruction from samples. The sampling points used in several approaches are (1) structured points connected with fast algorithms or (2) unstructured points coming from, e.g., an initial random draw to achieve an improved information complexity. We connect both approaches and propose a subsampling of structured points in an offline step. In particular, we start with quasi-Monte Carlo (QMC) points with inherent structure and stable $L_2$ reconstruction properties. The subsampling procedure consists of a computationally inexpensive random step followed by a deterministic procedure to further reduce the number of points while retaining the information they carry. Functions (belonging to a reproducing kernel Hilbert space of bounded functions) are then sampled at these points and reconstructed from the samples, achieving state-of-the-art error decay. Our method is dimension-independent and is applicable as soon as we know some initial quadrature points. We apply our general findings on the $d$-dimensional torus to subsample rank-1 lattices, where it is known that full rank-1 lattices lose half the optimal order of convergence (expressed in terms of the size of the lattice). In contrast to that, our subsampled version regains the optimal rate since many of the lattice points are not needed. Moreover, we utilize fast and memory efficient Fourier algorithms in order to compute the approximation. Numerical experiments in several dimensions support our findings.
A comprehensive overview of lattice rules and polynomial lattice rules is given for function spaces based on $\ell_p$ semi-norms. Good lattice rules and polynomial lattice rules are defined as those obtaining worst-case errors bounded by the optimal rate of convergence for the function space. The focus is on algebraic rates of convergence $O(N^{-\alpha+\epsilon})$ for $\alpha \ge 1$ and any $\epsilon > 0$, where $\alpha$ is the decay of a series representation of the integrand function. The dependence of the implied constant on the dimension can be controlled by weights which determine the influence of the different dimensions. Different types of weights are discussed. The construction of good lattice rules, and polynomial lattice rules, can be done using the same method for all $1 < p \le \infty$; but the case $p=1$ is special from the construction point of view. For $1 < p \le \infty$ the component-by-component construction and its fast algorithm for different weighted function spaces is then discussed.
This is the first book devoted to lattice methods, a recently developed way of calculating multiple integrals in many variables. Multiple integrals of this kind arise in fields such as quantum physics and chemistry, statistical mechanics, Bayesian statistics and many others. Lattice methods are an effective tool when the number of integrals is large. The book begins with a review of existing methods before presenting lattice theory in a thorough, self-contained manner, with numerous illustrations and examples. Group and number theory are included, but the treatment is such that no prior knowledge is needed. Not only the theory but also the practical implementation of lattice methods is covered. An algorithm is presented alongside tables not available elsewhere, which together allow the practical evaluation of multiple integrals in many variables. Most importantly, the algorithm produces an error estimate in a very efficient manner. The book also provides a fast track for readers wanting to move rapidly to using lattice methods in practical calculations. It concludes with extensive numerical tests which compare lattice methods with other methods, such as the Monte Carlo method.