Estimation of (near) low-rank matrices with noise and high-dimensional scaling

Type: Article

Publication Date: 2011-04-01

Citations: 559

DOI: https://doi.org/10.1214/10-AOS850

Abstract

We study an instance of high-dimensional inference in which the goal is to estimate a matrix Θ* ∈ ℝ^{m1×m2} on the basis of N noisy observations. The unknown matrix Θ* is assumed to be either exactly low-rank, or “near” low-rank, meaning that it can be well approximated by a matrix of low rank. We consider a standard M-estimator based on regularization by the nuclear or trace norm over matrices, and analyze its performance under high-dimensional scaling. We define the notion of restricted strong convexity (RSC) for the loss function, and use it to derive nonasymptotic bounds on the Frobenius-norm error that hold for a general class of noisy observation models and apply to both exactly low-rank and approximately low-rank matrices. We then illustrate consequences of this general theory for a number of specific matrix models, including low-rank multivariate or multi-task regression, system identification in vector autoregressive processes, and recovery of low-rank matrices from random projections. These results involve nonasymptotic random matrix theory to establish that the RSC condition holds, and to determine an appropriate choice of regularization parameter. Simulation results show excellent agreement with the high-dimensional scaling of the error predicted by our theory.
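
To make the estimator concrete: given N observations y_i = ⟨⟨X_i, Θ*⟩⟩ + noise, the M-estimator analyzed here solves minimize over Θ of (1/2N) Σ_{i=1}^{N} (y_i − ⟨⟨X_i, Θ⟩⟩)² + λ_N ‖Θ‖_nuc, where ‖·‖_nuc denotes the nuclear (trace) norm. The sketch below is a minimal illustration, not the authors' code: it solves this program by proximal gradient descent, whose proximal step is singular-value soft-thresholding, under an assumed Gaussian random-projection design; the step size, iteration count, and λ_N scaling are heuristic choices.

```python
import numpy as np

def svd_soft_threshold(M, tau):
    """Proximal operator of tau * ||.||_nuc: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def nuclear_norm_estimator(X, y, shape, lam, n_iters=500):
    """Proximal gradient descent for
        min_Theta  (1/(2N)) * ||X vec(Theta) - y||^2 + lam * ||Theta||_nuc,
    where each row of X is a vectorized design matrix X_i."""
    N = X.shape[0]
    m1, m2 = shape
    step = N / np.linalg.norm(X, 2) ** 2   # 1/L, with L the gradient's Lipschitz constant
    theta = np.zeros(m1 * m2)
    for _ in range(n_iters):
        grad = X.T @ (X @ theta - y) / N            # gradient of the squared loss
        Z = (theta - step * grad).reshape(m1, m2)   # gradient step, back to matrix form
        theta = svd_soft_threshold(Z, step * lam).ravel()
    return theta.reshape(m1, m2)

# Toy check: rank-2 truth observed through N random Gaussian projections.
rng = np.random.default_rng(0)
m1, m2, r, N, sigma = 20, 15, 2, 600, 0.1
theta_star = rng.standard_normal((m1, r)) @ rng.standard_normal((r, m2))
X = rng.standard_normal((N, m1 * m2))
y = X @ theta_star.ravel() + sigma * rng.standard_normal(N)
lam = 2 * sigma * (np.sqrt(m1) + np.sqrt(m2)) / np.sqrt(N)  # theory-style scaling
theta_hat = nuclear_norm_estimator(X, y, (m1, m2), lam)
print("relative Frobenius error:",
      np.linalg.norm(theta_hat - theta_star) / np.linalg.norm(theta_star))
```

The regularization level mirrors the σ(√m1 + √m2)/√N scaling that the theory prescribes for such random designs; in practice it would be tuned, for example by cross-validation.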

Locations

  • The Annals of Statistics
  • arXiv (Cornell University)

Similar Works

  • Estimation of (near) low-rank matrices with noise and high-dimensional scaling (2009). Sahand Negahban, Martin J. Wainwright.
  • Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions (2012). Alekh Agarwal, Sahand Negahban, Martin J. Wainwright.
  • High-dimensional covariance matrix estimation with missing observations (2012). Karim Lounici.
  • Fast global convergence of gradient methods for high-dimensional statistical recovery (2012). Alekh Agarwal, Sahand Negahban, Martin J. Wainwright.
  • Robust Sparse Reduced Rank Regression in High Dimensions (2018). Kean Ming Tan, Qiang Sun, Daniela Witten.
  • Estimation of high-dimensional low-rank matrices (2011). Angelika Rohde, Alexandre B. Tsybakov.
  • Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion (2011). Vladimir Koltchinskii, Karim Lounici, Alexandre B. Tsybakov.
  • A Unified Framework for High-Dimensional Analysis of M-Estimators with Decomposable Regularizers (2012). Sahand Negahban, Pradeep Ravikumar, Martin J. Wainwright, Bin Yu.
  • Low Permutation-rank Matrices: Structural Properties and Noisy Completion (2017). Nihar B. Shah, Sivaraman Balakrishnan, Martin J. Wainwright.
  • Robust matrix estimations meet Frank–Wolfe algorithm (2023). Naimin Jing, Ethan X. Fang, Cheng Yong Tang.
  • Harnessing Structures in Big Data via Guaranteed Low-Rank Matrix Estimation (2018). Yudong Chen, Yuejie Chi.
  • ROP: Matrix recovery via rank-one projections (2014). Tommaso Cai, Anru R. Zhang.
  • Low Permutation-rank Matrices: Structural Properties and Noisy Completion (2019). Nihar B. Shah, Sivaraman Balakrishnan, Martin J. Wainwright.
  • Fast global convergence of gradient methods for high-dimensional statistical recovery (2011). Alekh Agarwal, Sahand Negahban, Martin J. Wainwright.
  • On Low-rank Trace Regression under General Sampling Distribution (2019). Nima Hamidi, Mohsen Bayati.