Singular vectors of sums of rectangular random matrices and optimal estimation of high-rank signals: The extensive spike model

Type: Article

Publication Date: 2023-11-20

Citations: 2

DOI: https://doi.org/10.1103/physreve.108.054129

Abstract

Across many disciplines, from neuroscience and genomics to machine learning, atmospheric science, and finance, the problems of denoising large data matrices to recover hidden signals obscured by noise, and of estimating the structure of these signals, are of fundamental importance. A key to solving these problems lies in understanding how the singular value structure of a signal is deformed by noise. This question has been thoroughly studied in the well-known spiked matrix model, in which data matrices originate from low-rank signal matrices perturbed by additive noise matrices, in an asymptotic limit where matrix size tends to infinity but the signal rank remains finite. We first show, strikingly, that the singular value structure of large finite matrices (of size $\sim 1000$) with even moderate-rank signals (rank as low as 10) is not accurately predicted by the finite-rank theory, thereby limiting the application of this theory to real data. To address these deficiencies, we analytically compute how the singular values and vectors of an arbitrary high-rank signal matrix are deformed by additive noise. We focus on an asymptotic limit corresponding to an extensive spike model, in which both the signal rank and the size of the data matrix tend to infinity at a constant ratio. We map out the phase diagram of the singular value structure of the extensive spike model as a joint function of signal strength and rank. We further exploit these analytics to derive optimal rotationally invariant denoisers to recover the hidden high-rank signal from the data, as well as optimal invariant estimators of the signal covariance structure.
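As a concrete illustration of the finite-rank spiked model that serves as the paper's baseline, the sketch below (an illustrative assumption, not the authors' code; the parameters `n` and `theta` are chosen for demonstration) generates a rank-1 spike plus Gaussian noise and checks the classical finite-rank (BBP-type) predictions for a square matrix: for spike strength $\theta > 1$, the top data singular value converges to $\theta + 1/\theta$ and the squared overlap of the top data singular vector with the signal vector converges to $1 - 1/\theta^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta = 1000, 2.0  # matrix size and spike strength (illustrative values)

# Rank-1 spiked model: A = theta * u v^T + W / sqrt(n), W i.i.d. standard Gaussian.
u = rng.standard_normal(n); u /= np.linalg.norm(u)
v = rng.standard_normal(n); v /= np.linalg.norm(v)
A = theta * np.outer(u, v) + rng.standard_normal((n, n)) / np.sqrt(n)

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Finite-rank predictions for a square matrix with theta > 1:
# top singular value -> theta + 1/theta = 2.5,
# squared overlap of top singular vectors -> 1 - 1/theta^2 = 0.75.
print(s[0])                     # close to 2.5
print(np.dot(U[:, 0], u) ** 2)  # close to 0.75
```

The abstract's point is that this clean finite-rank picture degrades once the signal rank grows with the matrix size, which motivates the extensive spike model.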
Our extensive-rank results yield several conceptual differences compared to the finite-rank case: (1) as signal strength increases, the singular value spectrum does not directly transition from a unimodal bulk phase to a disconnected phase, but instead there is a bimodal connected regime separating them; (2) the signal singular vectors can be partially estimated even in the unimodal bulk regime, and thus the transitions in the data singular value spectrum do not coincide with a detectability threshold for the signal singular vectors, unlike in the finite-rank theory; (3) signal singular values interact nontrivially to generate data singular values in the extensive-rank model, whereas they are noninteracting in the finite-rank theory; and (4) as a result, the more sophisticated data denoisers and signal covariance estimators we derive, which take into account these nontrivial extensive-rank interactions, significantly outperform their simpler, noninteracting, finite-rank counterparts, even on data matrices of only moderate rank. Overall, our results provide fundamental theory governing how high-dimensional signals are deformed by additive noise, together with practical formulas for optimal denoising and covariance estimation.
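The finite-rank "noninteracting" denoisers that the paper's extensive-rank estimators improve upon can be sketched as follows. This is a minimal demo of the classical finite-rank Frobenius-optimal shrinker for a square matrix (not the paper's extensive-rank formula): singular values inside the noise bulk ($y \le 2$) are zeroed, and each outlier $y > 2$ is shrunk to $\sqrt{y^2 - 4}$. The demo compares this shrinker against plain rank-1 SVD truncation on simulated data.

```python
import numpy as np

rng = np.random.default_rng(1)
n, theta = 1000, 2.0  # illustrative values

u = rng.standard_normal(n); u /= np.linalg.norm(u)
v = rng.standard_normal(n); v /= np.linalg.norm(v)
S = theta * np.outer(u, v)                        # rank-1 signal
A = S + rng.standard_normal((n, n)) / np.sqrt(n)  # observed noisy data

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Classical finite-rank shrinker for a square matrix (Frobenius loss):
# bulk singular values (y <= 2) carry no usable signal and are zeroed;
# outliers are shrunk from y to sqrt(y^2 - 4) to account for the
# imperfect overlap between data and signal singular vectors.
s_shrunk = np.where(s > 2.0, np.sqrt(np.maximum(s**2 - 4.0, 0.0)), 0.0)
S_shrunk = (U * s_shrunk) @ Vt

# Baseline: rank-1 truncated SVD, which keeps the raw top singular value.
S_trunc = s[0] * np.outer(U[:, 0], Vt[0])

err = lambda X: np.linalg.norm(X - S)
print(err(A), err(S_trunc), err(S_shrunk))  # shrinkage beats truncation beats raw data
```

Per the abstract's point (4), the paper's extensive-rank denoisers go beyond this per-singular-value rule by accounting for the nontrivial interactions among signal singular values, which this finite-rank shrinker ignores.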

Similar Works

- Singular Vectors of Sums of Rectangular Random Matrices and Optimal Estimators of High-Rank Signals: The Extensive Spike Model (2023). Itamar Daniel Landau, Gabriel C. Mel, Surya Ganguli.
- Matrix denoising with partial noise statistics: optimal singular value shrinkage of spiked F-matrices (2023). Matan Gavish, William Leeb, Elad Romanov.
- On the Noise Sensitivity of the Randomized SVD (2023). Elad Romanov.
- Matrix Denoising with Partial Noise Statistics: Optimal Singular Value Shrinkage of Spiked F-Matrices (2022). Matan Gavish, William Leeb, Elad Romanov.
- Spiked singular values and vectors under extreme aspect ratios (2023). Michael J. Feldman.
- Optimal Eigenvalue Shrinkage in the Semicircle Limit (2022). David L. Donoho, Michael J. Feldman.
- PCA from noisy, linearly reduced data: the diagonal case (2016). Edgar Dobriban, William Leeb, Amit Singer.
- Stable Autoencoding: A Flexible Framework for Regularized Low-rank Matrix Estimation (2015). Julie Josse, Stefan Wager.
- Spiked Singular Values and Vectors under Extreme Aspect Ratios (2021). Michael J. Feldman.
- OptShrink: An Algorithm for Improved Low-Rank Signal Matrix Denoising by Optimal, Data-Driven Singular Value Shrinkage (2014). Raj Rao Nadakuditi.
- A Random Matrix Approach to Low-Multilinear-Rank Tensor Approximation (2024). Hugo Lebeau, Florent Chatelain, Romain Couillet.
- Information limits and Thouless-Anderson-Palmer equations for spiked matrix models with structured noise (2024). Jean Barbier, Francesco Camilli, Marco Mondelli, Yizhou Xu.
- Fundamental limits in structured principal component analysis and how to reach them (2023). Jean Barbier, Francesco Camilli, Marco Mondelli, Manuel Sáenz.
- Long random matrices and tensor unfolding (2023). Gérard Ben Arous, Daniel Zhengyu Huang, Jiaoyang Huang.
- On Estimating Rank-One Spiked Tensors in the Presence of Heavy Tailed Errors (2021). Arnab Auddy, Ming Yuan.
- Entrywise Estimation of Singular Vectors of Low-Rank Matrices with Heteroskedasticity and Dependence (2021). Joshua Agterberg, Zachary Lubberts, Carey E. Priebe.
- Entrywise Estimation of Singular Vectors of Low-Rank Matrices With Heteroskedasticity and Dependence (2022). Joshua Agterberg, Zachary Lubberts, Carey E. Priebe.
- Matrix Denoising with Doubly Heteroscedastic Noise: Fundamental Limits and Optimal Spectral Methods (2024). Yihan Zhang, Marco Mondelli.
- Long Random Matrices and Tensor Unfolding (2021). Gérard Ben Arous, Daniel Zhengyu Huang, Jiaoyang Huang.
- Bayesian Extensive-Rank Matrix Factorization with Rotational Invariant Priors (2023). Farzad Pourkamali, Nicolas Macris.

Works That Cite This (1)

- Matrix Denoising: Bayes-Optimal Estimators Via Low-Degree Polynomials (2024). Guilhem Semerjian.