A dual‐space multilevel kernel‐splitting framework for discrete and continuous convolution

Type: Article

Publication Date: 2024-12-12

Citations: 0

DOI: https://doi.org/10.1002/cpa.22240

Abstract

We introduce a new class of multilevel, adaptive, dual-space methods for computing fast convolutional transformations. These methods can be applied to a broad class of kernels, from the Green's functions for classical partial differential equations (PDEs) to power functions and radial basis functions such as those used in statistics and machine learning. The DMK (dual-space multilevel kernel-splitting) framework uses a hierarchy of grids, computing a smoothed interaction at the coarsest level, followed by a sequence of corrections at finer and finer scales until the problem is entirely local, at which point direct summation is applied. Unlike earlier multilevel summation schemes, DMK exploits the fact that the interaction at each scale is diagonalized by a short Fourier transform, permitting the use of separation of variables, but without relying on the FFT. This requires careful attention to the discretization of the Fourier transform at each spatial scale. Like multilevel summation, we make use of a recursive (telescoping) decomposition of the original kernel into the sum of a smooth far-field kernel, a sequence of difference kernels, and a residual kernel, which plays a role only in leaf boxes in the adaptive tree. At all higher levels in the grid hierarchy, the interaction kernels are designed to be smooth in both physical and Fourier space, admitting efficient Fourier spectral approximations. The DMK framework substantially simplifies the algorithmic structure of the fast multipole method (FMM) and unifies the FMM, Ewald summation, and multilevel summation, achieving speeds comparable to the FFT in work per gridpoint, even in a fully adaptive context. For continuous source distributions, the evaluation of local interactions is further accelerated by approximating the kernel at the finest level as a sum of Gaussians (SOG) with a highly localized remainder. The Gaussian convolutions are calculated using tensor product transforms, and the remainder term is calculated using asymptotic methods. We illustrate the performance of DMK for both continuous and discrete sources with extensive numerical examples in two and three dimensions.
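To make the kernel-splitting idea concrete, the sketch below illustrates a telescoping decomposition of the 3D Laplace kernel into a smooth far-field kernel, a sequence of difference kernels at successively finer scales, and a localized residual kernel, in the spirit described in the abstract. It is only an illustration: the choice of Gaussian (Ewald-style) mollification, the factor-of-two scale ratio, and all function names are assumptions, not the authors' implementation.

```python
# Illustrative sketch of a telescoping kernel split (assumed Ewald-style
# Gaussian mollification; not the DMK paper's actual construction).
import numpy as np
from scipy.special import erf

def laplace_kernel_3d(r):
    # Free-space Green's function for the 3D Laplace equation: 1/(4*pi*r).
    return 1.0 / (4.0 * np.pi * r)

def mollified_kernel(r, sigma):
    # Gaussian-mollified Laplace kernel: erf(r/sigma)/(4*pi*r).
    # Smooth in both physical and Fourier space; its Fourier transform
    # decays like exp(-sigma^2 k^2 / 4), so it admits a short Fourier sum.
    return erf(r / sigma) / (4.0 * np.pi * r)

def telescoping_split(r, sigma0, nlevels):
    """Decompose K(r) = 1/(4*pi*r) as

        K = K_{sigma_0}                                   (smooth far field)
          + sum_{l=1}^{L-1} (K_{sigma_l} - K_{sigma_{l-1}})  (difference kernels)
          + (K - K_{sigma_{L-1}})                          (localized residual)

    with sigma_l = sigma0 / 2^l, so each difference kernel is supported,
    up to rapid decay, on a single scale of the grid hierarchy.
    """
    sigmas = [sigma0 / 2**l for l in range(nlevels)]
    far = mollified_kernel(r, sigmas[0])
    diffs = [mollified_kernel(r, sigmas[l]) - mollified_kernel(r, sigmas[l - 1])
             for l in range(1, nlevels)]
    residual = laplace_kernel_3d(r) - mollified_kernel(r, sigmas[-1])
    return far, diffs, residual

# Sanity check: the pieces telescope back to the original kernel.
r = np.linspace(0.05, 2.0, 200)
far, diffs, residual = telescoping_split(r, sigma0=0.5, nlevels=4)
assert np.allclose(far + sum(diffs) + residual, laplace_kernel_3d(r))
```

In such a split, the far-field and difference kernels are evaluated efficiently on the grid hierarchy via separated Fourier approximations, while the residual (here erfc(r/sigma)/(4*pi*r)) decays rapidly and only couples nearby points, which is the regime handled by direct summation or, for continuous sources, by a sum-of-Gaussians approximation.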

Locations

  • Communications on Pure and Applied Mathematics
  • arXiv (Cornell University)

Similar Works

  • A Dual-space Multilevel Kernel-splitting Framework for Discrete and Continuous Convolution (2023). Leslie Greengard, Shidong Jiang
  • A new version of the adaptive fast Gauss transform for discrete and continuous sources (2023). Leslie Greengard, Shidong Jiang, Manas Rachh, Jun Wang
  • Fast multilevel sparse Gaussian kernels for high-dimensional approximation and integration (2015). Zhaonan Dong, Emmanuil H. Georgoulis, Jeremy Levesley, Fuat Usta
  • Solving High Frequency and Multi-Scale PDEs with Gaussian Processes (2023). Shikai Fang, Madison Cooley, Da Long, Shibo Li, Robert Kirby, Shandian Zhe
  • Multi-Scale Deep Neural Network (MscaleDNN) for Solving Poisson-Boltzmann Equation in Complex Domains (2020). Ziqi Liu, Wei Cai, Zhi-Qin John Xu
  • M2NO: Multiresolution Operator Learning with Multiwavelet-based Algebraic Multigrid Method (2024). Zhihao Li, Zhilu Lai, Xiaobo Wang, Wei Wang
  • The Fast Kernel Transform (2021). John Paul Ryan, Sebastian Ament, Carla P. Gomes, Anil Damle
  • Deep Generalized Green's Functions (2023). Rixi Peng, Juncheng Dong, Jordan M. Malof, Willie J. Padilla, Vahid Tarokh
  • An adaptive fast Gauss transform in two dimensions (2017). Jun Wang, Leslie Greengard
  • An Adaptive Fast Gauss Transform in Two Dimensions (2018). Jun Wang, Leslie Greengard
  • Adaptive Deep Fourier Residual method via overlapping domain decomposition (2024). Jamie M. Taylor, Manuela Bastidas, Victor M. Calo, David Pardo
  • Neural Operator: Learning Maps Between Function Spaces (2021). Nikola B. Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew M. Stuart, Anima Anandkumar
  • Frequency-adaptive Multi-scale Deep Neural Networks (2024). Jizu Huang, Rukang You, Tao Zhou
  • Scaling Continuous Kernels with Sparse Fourier Domain Learning (2024). Clayton Harper, Luke Wood, Peter Gerstoft, Eric C. Larson
  • Separable Physics-informed Neural Networks for Solving the BGK Model of the Boltzmann Equation (2024). Jaemin Oh, Seung Yeon Cho, Seok-Bae Yun, Eunbyung Park, Youngjoon Hong
  • Dilated convolution neural operator for multiscale partial differential equations (2024). Bo Xu, Xinliang Liu, Lei Zhang
  • Kernel Neural Operators (KNOs) for Scalable, Memory-efficient, Geometrically-flexible Operator Learning (2024). Matthew Lowery, John Turnage, Zachary Morrow, John Jakeman, Akil Narayan, Shandian Zhe, Varun Shankar

Works That Cite This (0)
