High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence

Type: Article

Publication Date: 2011-01-01

Citations: 773

DOI: https://doi.org/10.1214/11-ejs631

Abstract

Given i.i.d. observations of a random vector X∈ℝp, we study the problem of estimating both its covariance matrix Σ* and its inverse covariance or concentration matrix Θ*=(Σ*)−1. When X is multivariate Gaussian, the non-zero structure of Θ* is specified by the graph of an associated Gaussian Markov random field, and a popular estimator for such sparse Θ* is the ℓ1-regularized Gaussian MLE. This estimator is sensible even for non-Gaussian X, since it corresponds to minimizing an ℓ1-penalized log-determinant Bregman divergence. We analyze its performance under high-dimensional scaling, in which the number of nodes in the graph p, the number of edges s, and the maximum node degree d are allowed to grow as functions of the sample size n. In addition to the parameters (p,s,d), our analysis identifies other key quantities that control the rates: (a) the ℓ∞-operator norm of the true covariance matrix Σ*; (b) the ℓ∞-operator norm of the sub-matrix Γ*SS, where S indexes the graph edges and Γ*=(Θ*)−1⊗(Θ*)−1; (c) a mutual incoherence or irrepresentability measure on the matrix Γ*; and (d) the rate of decay 1/f(n,δ) on the probabilities {|Σ̂nij−Σ*ij|>δ}, where Σ̂n is the sample covariance based on n samples. Our first result establishes consistency of our estimate Θ̂ in the elementwise maximum norm. This in turn allows us to derive convergence rates in the Frobenius and spectral norms, with improvements upon existing results for graphs with maximum node degree d = o(√s). In our second result, we show that with probability converging to one, the estimate Θ̂ correctly specifies the zero pattern of the concentration matrix Θ*. We illustrate our theoretical results via simulations for various graphs and problem parameters, showing good correspondence between the theoretical predictions and behavior in simulations.
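The estimator described in the abstract minimizes, over positive-definite matrices Θ, the ℓ1-penalized log-determinant objective trace(Σ̂ⁿΘ) − log det Θ + λ‖Θ‖₁,off, where the penalty is applied to the off-diagonal entries. A minimal NumPy sketch of evaluating that objective (the function name, toy data, and λ value are illustrative, not from the paper):

```python
import numpy as np

def penalized_logdet_objective(Theta, Sigma_hat, lam):
    """Evaluate trace(Sigma_hat @ Theta) - log det(Theta)
    + lam * (sum of |Theta_ij| over off-diagonal entries).
    Returns +inf when Theta is not positive definite."""
    sign, logdet = np.linalg.slogdet(Theta)
    if sign <= 0:
        return np.inf  # outside the positive-definite cone
    off_diag_l1 = np.sum(np.abs(Theta)) - np.sum(np.abs(np.diag(Theta)))
    return np.trace(Sigma_hat @ Theta) - logdet + lam * off_diag_l1

# Toy check: sample covariance from n = 200 draws in p = 3 dimensions,
# objective evaluated at the identity candidate Theta = I.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
Sigma_hat = np.cov(X, rowvar=False)
val = penalized_logdet_objective(np.eye(3), Sigma_hat, lam=0.1)
```

Minimizing this convex objective (e.g. by the graphical-lasso family of algorithms) yields the estimate Θ̂ whose rates the abstract analyzes; the sketch above only evaluates the objective at a candidate point.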

Locations

  • CiteSeer X (The Pennsylvania State University) - View - PDF
  • Electronic Journal of Statistics - View - PDF

Similar Works

  • High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence (2008) — Pradeep Ravikumar, Martin J. Wainwright, Garvesh Raskutti, Bin Yu
  • High-dimensional covariance estimation based on Gaussian graphical models (2010) — Shuheng Zhou, Philipp Rütimann, Min Xu, Peter Bühlmann
  • High-dimensional Covariance Estimation Based On Gaussian Graphical Models (2011) — Shuheng Zhou, Philipp Rütimann, Min Xu, Peter Bühlmann
  • High-dimensional Sparse Inverse Covariance Estimation using Greedy Methods (2011) — Christopher C. Johnson, Ali Jalali, Pradeep Ravikumar
  • High-dimensional Sparse Inverse Covariance Estimation using Greedy Methods (2012) — Christopher C. Johnson, Ali Jalali, Pradeep Ravikumar
  • Model Selection in Gaussian Graphical Models: High-Dimensional Consistency of ℓ1-regularized MLE (2008) — Garvesh Raskutti, Bin Yu, Martin J. Wainwright, Pradeep Ravikumar
  • High-dimensional graph estimation and density estimation (2016) — Zhe Liu
  • Sparse Inverse Covariance Matrix Estimation Using Quadratic Approximation (2013) — Cho‐Jui Hsieh, Mátyás A. Sustik, Inderjit S. Dhillon, Pradeep Ravikumar
  • The highest dimensional stochastic blockmodel with a regularized estimator (2014) — Karl Rohe, Tai Qin, Haoyang Fan
  • A convex framework for high-dimensional sparse Cholesky based covariance estimation (2016) — Kshitij Khare, Sang Min Oh, Syed M Rahman, Bala Rajaratnam
  • Two new algorithms for maximum likelihood estimation of sparse covariance matrices with applications to graphical modeling (2024) — Ghania Fatima, Prabhu Babu, Petre Stoica
  • The Highest Dimensional Stochastic Blockmodel with a Regularized Estimator (2012) — Karl Rohe, Tai Qin, Haoyang Fan
  • Sparse Inverse Covariance Estimation via an Adaptive Gradient-Based Method (2011) — Suvrit Sra, Dong‐Min Kim
  • Sparse Inverse Covariance Matrix Estimation Using Quadratic Approximation (2011) — Cho‐Jui Hsieh, Inderjit S. Dhillon, Pradeep Ravikumar, Mátyás A. Sustik
  • Error Analysis on Graph Laplacian Regularized Estimator (2019) — Kaige Yang, Xiaowen Dong, Laura Toni
  • Gemini: Graph estimation with matrix variate normal instances (2014) — Shuheng Zhou
  • Generalized Stability Approach for Regularized Graphical Models (2016) — Christian L. Müller, Richard Bonneau, Zachary Kurtz