Rank Revealing QR Methods for Sparse Block Low Rank Solvers

Type: Preprint

Publication Date: 2019-07-11

Citations: 0


Abstract

Solving linear systems of the form Ax=b for large sparse matrices arises frequently in science and engineering applications and is often the main computational bottleneck. Although direct methods are costly in time and memory consumption, they remain the most robust way to solve these systems. On modern supercomputers, the number of computational units keeps growing while the memory available per core shrinks, so reducing memory consumption becomes as important as reducing time to solution. To this end, compression methods for the dense blocks appearing inside sparse matrix solvers have been introduced; they reduce memory consumption as well as time to solution. When looking for the lowest possible compression rank, the Singular Value Decomposition (SVD) gives the best result, but it is too costly, as the whole factorization must be computed to find the resulting rank. Rank-revealing QR decomposition variants are less costly, at the price of possibly larger ranks. Among these variants, column pivoting or matrix rotation can be applied to the matrix A, so that the most important information in the matrix is gathered into the leftmost columns and the remaining, unnecessary information can be omitted. To reduce the communication cost of the classical QR decomposition with column pivoting, blocked versions with randomization have been suggested as an alternative way to find the pivots. In these randomized variants, the matrix A is projected onto a much lower-dimensional matrix using an independent and identically distributed Gaussian matrix, so that the pivoting/rotation matrix can be computed on the lower-dimensional matrix. In addition, to avoid unnecessary updates of the trailing matrix at each iteration, a truncated randomized method is suggested and shown to be more efficient for larger matrix sizes. Thanks to these methods, results closer to the SVD can be obtained and the cost of compression can be reduced.
In this presentation, these methods are compared in terms of complexity, numerical stability, and performance.
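The randomized pivot selection described in the abstract can be sketched as follows. This is a minimal illustration in Python with NumPy/SciPy, not the authors' implementation: the function name, the oversampling parameter, and the synthetic test matrix are all illustrative. The idea is to choose the column pivots on a small Gaussian sketch B = GA rather than on A itself, and then form a rank-k factorization from the selected columns.

```python
import numpy as np
from scipy.linalg import qr

def randomized_qrcp(A, k, oversample=8, seed=0):
    """Illustrative sketch of randomized column pivoting for a rank-k
    QR factorization (names and defaults are assumptions, not the
    paper's implementation)."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    # Project A onto a much lower-dimensional matrix with an i.i.d.
    # Gaussian matrix G: B has only k + oversample rows.
    G = rng.standard_normal((k + oversample, m))
    B = G @ A
    # Pivoted QR on the small sketch selects the most important columns.
    _, _, piv = qr(B, mode="economic", pivoting=True)
    # Unpivoted QR on the k selected columns of A gives the compressed factors.
    Q, R = qr(A[:, piv[:k]], mode="economic")
    return Q, R, piv[:k]

# Synthetic comparison on an exactly rank-12 matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 12)) @ rng.standard_normal((12, 100))
Q, R, piv = randomized_qrcp(A, k=12)
err = np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A)
```

On an exactly rank-12 matrix, the 12 selected columns generically span the whole column space, so the relative projection error is at rounding level; for general matrices, the error of such randomized pivoting is typically close to, but slightly above, the optimal rank-k error given by the truncated SVD.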

Locations

  • HAL (Le Centre pour la Communication Scientifique Directe)

Similar Works

  • A Randomized Blocked Algorithm for Efficiently Computing Rank-revealing Factorizations of Matrices (2016). Per-Gunnar Martinsson, Sergey Voronin.
  • A randomized blocked algorithm for efficiently computing rank-revealing factorizations of matrices (2015). Per-Gunnar Martinsson, Sergey Voronin.
  • Randomized block Gram-Schmidt process for solution of linear systems and eigenvalue problems (2021). Oleg Balabanov, Laura Grigori.
  • Structure-Preserving and Rank-Revealing QR-Factorizations (1991). Christian Bischof, Per Christian Hansen.
  • The PowerURV algorithm for computing rank-revealing full factorizations (2018). Abinand Gopal, Per-Gunnar Martinsson.
  • Minimizing communication for incomplete factorizations and low-rank approximations on large scale computers (2019). Sébastien Cayrols.
  • Computing rank-revealing factorizations of matrices stored out-of-core (2023). Nathan Heavner, Per-Gunnar Martinsson, Gregorio Quintana-Ortí.
  • Randomized Rank-Revealing QLP for Low-Rank Matrix Decomposition (2022). Maboud F. Kaloorazi, Kai Liu, Jie Chen, Rodrigo C. de Lamare, Susanto Rahardja.
  • High Efficiency Spectral Analysis and BLAS-3 Randomized QRCP with Low-Rank Approximations (2015). Jed A. Duersch.
  • Fast truncated SVD of sparse and dense matrices on graphics processors (2023). Andrés E. Tomás, Enrique S. Quintana-Ortí, Hartwig Anzt.
  • Efficient randomized algorithms for adaptive low-rank factorizations of large matrices (2016). Yu Gu, Wenjian Yu, Yaohang Li.
  • Computing rank-revealing factorizations of matrices stored out-of-core (2020). Nathan Heavner, Per-Gunnar Martinsson, Gregorio Quintana-Ortí.
  • Randomized Cholesky QR factorizations (2022). Oleg Balabanov.
  • Literature survey on low rank approximation of matrices (2016). N. Kishore Kumar, Jan Shneider.
  • Probabilistic error analysis of CholeskyQR based on columns (2024). H. Guan, Yuwei Fan.

Cited by (0)


Citing (0)
