Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?

Type: Article

Publication Date: 2006-12-01

Citations: 6787

DOI: https://doi.org/10.1109/tit.2006.885507

Abstract

Suppose we are given a vector $f$ in a class $\mathcal{F} \subset \mathbb{R}^N$, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about $f$ to be able to recover $f$ to within precision $\epsilon$ in the Euclidean ($\ell_2$) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct $f$ to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the $n$th largest entry of the vector $|f|$ (or of its coefficients in a fixed basis) obeys $|f|_{(n)} \le R \cdot n^{-1/p}$, where $R > 0$ and $p > 0$. Suppose that we take measurements $y_k = \langle f, X_k \rangle$, $k = 1, \ldots, K$, where the $X_k$ are $N$-dimensional Gaussian vectors with independent standard normal entries. Then for each $f$ obeying the decay estimate above for some $0 < p < 1$ and with overwhelming probability, our reconstruction $f^\sharp$, defined as the solution to the constraints $y_k = \langle f^\sharp, X_k \rangle$ with minimal $\ell_1$ norm, obeys
$$\|f - f^\sharp\|_{\ell_2} \le C_p \cdot R \cdot (K/\log N)^{-r}, \qquad r = 1/p - 1/2.$$
There is a sense in which this result is optimal; it is generally impossible to obtain a higher accuracy from any set of $K$ measurements whatsoever. The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if one observes a few randomly sampled Fourier coefficients of $f$. In fact, the results are quite general and require only two hypotheses on the measurement ensemble, which are detailed.
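The recovery rule in the abstract, find the vector of minimal $\ell_1$ norm consistent with the measurements, is exactly a linear program. The following is a minimal sketch in Python illustrating that formulation; the sizes N, K, S, the random seed, and the use of scipy.optimize.linprog are illustrative assumptions, not choices made in the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper): ambient dimension N,
# number of measurements K, sparsity S of the ground truth.
N, K, S = 128, 48, 5

# Sparse ground truth f with S nonzero entries at random positions.
f = np.zeros(N)
support = rng.choice(N, size=S, replace=False)
f[support] = rng.standard_normal(S)

# Measurement ensemble: K Gaussian vectors X_k with i.i.d. N(0,1)
# entries, stacked as rows; measurements y_k = <f, X_k>.
X = rng.standard_normal((K, N))
y = X @ f

# l1 minimization as a linear program: introduce slack t >= |g|
# componentwise and solve
#   minimize sum(t)  subject to  X g = y,  -t <= g <= t.
c = np.concatenate([np.zeros(N), np.ones(N)])      # objective: sum of t
A_ub = np.block([[ np.eye(N), -np.eye(N)],         #  g - t <= 0
                 [-np.eye(N), -np.eye(N)]])        # -g - t <= 0
b_ub = np.zeros(2 * N)
A_eq = np.hstack([X, np.zeros((K, N))])            # X g = y
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N)

f_sharp = res.x[:N]                                # the reconstruction f#
print("relative l2 error:", np.linalg.norm(f - f_sharp) / np.linalg.norm(f))
```

With K on the order of S log N Gaussian measurements, the printed error is typically near machine precision for an exactly sparse f; for merely compressible signals, the abstract's bound controls the $\ell_2$ error through the tail decay of the coefficients.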

Locations

  • IEEE Transactions on Information Theory
  • arXiv (Cornell University)

Similar Works

  • Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies? (2004) - Emmanuel J. Candès, Terence Tao
  • Practical Signal Recovery from Random Projections (2005) - Justin Romberg
  • Compressed sensing (2006) - David L. Donoho
  • From compression to compressed sensing (2013) - Shirin Jalali, Arian Maleki
  • Exact Recovery of Sparsely Used Overcomplete Dictionaries (2013) - Alekh Agarwal, Animashree Anandkumar, Praneeth Netrapalli
  • Exact Recovery Conditions for Sparse Representations With Partial Support Information (2013) - Cédric Herzet, Charles Soussen, Jérôme Idier, Rémi Gribonval
  • Recovery of partly sparse and dense signals (2023) - Izuru Miyazaki
  • Reconstruction of compressed samples in shift-invariant space base on matrix factorization (2011) - Zhaoxuan Zhu, Houjun Wang, Zhigang Wang, Hao Zhang
  • On Practical Approximate Projection Schemes in Signal Space Methods (2016) - Deanna Needell
  • Learning-Based Compressive Subsampling (2016) - Luca Baldassarre, Yen-Huan Li, Jonathan Scarlett, Baran Gözcü, Ilija Bogunovic, Volkan Cevher
  • Uniform Recovery Bounds for Structured Random Matrices in Corrupted Compressed Sensing (2018) - Peng Zhang, Lu Gan, Cong Ling, Sumei Sun
  • On exact recovery of sparse vectors from linear measurements (2013) - Sergeĭ Konyagin, Yu. V. Malykhin, K. S. Ryutin
  • Modern Problems in Mathematical Signal Processing: Quantized Compressed Sensing and Randomized Neural Networks (2019) - Aaron A. Nelson
  • Sampling Without Input Constraints: Consistent Reconstruction in Arbitrary Spaces (2004) - Yonina C. Eldar
  • The restricted isometry property meets nonlinear approximation with redundant frames (2012) - Rémi Gribonval, Morten Nielsen
  • A Robust Deep Unfolded Network for Sparse Signal Recovery from Noisy Binary Measurements (2020) - Yuqing Yang, Peng Xiao, Bin Liao, Nikos Deligiannis
  • Recovery of the optimal approximation from samples in wavelet subspace (2012) - Zhiguo Zhang, Yue Li
  • On the Trade-Off Between Bit Depth and Number of Samples for a Basic Approach to Structured Signal Recovery From $b$-Bit Quantized Linear Measurements (2018) - Martin Slawski, Ping Li
  • A note on practical approximate projection schemes in signal space methods (2015) - Xiaoyi Gu, Deanna Needell, Shenyinying Tu
  • Dense and Sparse Coding: Theory and Architectures (2020) - Abiy Tasissa, Emmanouil Theodosis, Bahareh Tolooshams, Demba E. Ba