Type: Article
Publication Date: 2006-12-01
Citations: 6787
DOI: https://doi.org/10.1109/tit.2006.885507
<para xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> Suppose we are given a vector <emphasis><formula formulatype="inline"> <tex>$f$</tex></formula></emphasis> in a class <emphasis><formula formulatype="inline"> <tex>${\cal F} \subset \mathbb{R}^N$</tex></formula></emphasis>, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about <emphasis><formula formulatype="inline"><tex>$f$</tex></formula></emphasis> to be able to recover <emphasis><formula formulatype="inline"><tex>$f$</tex> </formula></emphasis> to within precision <emphasis><formula formulatype="inline"> <tex>$\epsilon$</tex></formula></emphasis> in the Euclidean <emphasis><formula formulatype="inline"><tex>$(\ell_2)$</tex></formula></emphasis> metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct <emphasis><formula formulatype="inline"> <tex>$f$</tex></formula></emphasis> to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the <emphasis><formula formulatype="inline"><tex>$n$</tex></formula></emphasis>th largest entry of the vector <emphasis><formula formulatype="inline"><tex>$\vert f\vert$</tex></formula></emphasis> (or of its coefficients in a fixed basis) obeys <emphasis><formula formulatype="inline"><tex>$\vert f\vert _{(n)} \le R \cdot n^{-1/p}$</tex></formula></emphasis>, where <emphasis><formula formulatype="inline"> <tex>$R > 0$</tex></formula></emphasis> and <emphasis><formula formulatype="inline"> <tex>$p > 0$</tex></formula></emphasis>. 
Suppose that we take measurements <emphasis><formula formulatype="inline"><tex>$y_k = \langle f, X_k\rangle, k = 1, \ldots, K$</tex> </formula></emphasis>, where the <emphasis><formula formulatype="inline"> <tex>$X_k$</tex></formula></emphasis> are <emphasis><formula formulatype="inline"> <tex>$N$</tex></formula></emphasis>-dimensional Gaussian vectors with independent standard normal entries. Then for each <emphasis><formula formulatype="inline"> <tex>$f$</tex></formula></emphasis> obeying the decay estimate above for some <emphasis><formula formulatype="inline"><tex>$0 < p < 1$</tex></formula></emphasis> and with overwhelming probability, our reconstruction <emphasis><formula formulatype="inline"> <tex>$f^\sharp$</tex></formula></emphasis>, defined as the solution to the constraints <emphasis><formula formulatype="inline"><tex>$y_k = \langle f^\sharp, X_k \rangle$</tex></formula></emphasis> with minimal <emphasis><formula formulatype="inline"> <tex>$\ell_1$</tex></formula></emphasis> norm, obeys <emphasis> <formula formulatype="display"><tex>$$ \Vert f - f^\sharp\Vert _{\ell_2} \le C_p \cdot R \cdot (K/\log N)^{-r}, \quad r = 1/p - 1/2. $$</tex> </formula></emphasis>There is a sense in which this result is optimal; it is generally impossible to obtain a higher accuracy from any set of <emphasis><formula formulatype="inline"><tex>$K$</tex></formula></emphasis> measurements whatsoever. The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if one observes a few randomly sampled Fourier coefficients of <emphasis><formula formulatype="inline"><tex>$f$</tex> </formula></emphasis>. In fact, the results are quite general and require only two hypotheses on the measurement ensemble which are detailed. </para>
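The recovery procedure the abstract describes, find the vector with minimal <formula formulatype="inline"><tex>$\ell_1$</tex></formula> norm consistent with the Gaussian measurements, can be cast as a standard linear program. Below is a minimal sketch (not from the paper) using NumPy and SciPy's LP solver; the dimensions <tex>$N$</tex>, <tex>$K$</tex>, and the sparsity level are illustrative choices, and the split <tex>$f^\sharp = u - v$</tex> with <tex>$u, v \ge 0$</tex> is the usual trick for linearizing the <tex>$\ell_1$</tex> objective.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, K, S = 128, 48, 5  # ambient dimension, number of measurements, sparsity

# Sparse ground truth f with S nonzero entries (illustrative example).
f = np.zeros(N)
support = rng.choice(N, size=S, replace=False)
f[support] = rng.standard_normal(S)

# Gaussian measurement ensemble: X_k has i.i.d. standard normal entries,
# and we observe y_k = <f, X_k> for k = 1, ..., K.
X = rng.standard_normal((K, N))
y = X @ f

# l1 minimization as an LP: write f# = u - v with u, v >= 0, and
# minimize sum(u) + sum(v) subject to X(u - v) = y.
c = np.ones(2 * N)
A_eq = np.hstack([X, -X])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
f_sharp = res.x[:N] - res.x[N:]

# With K comfortably above ~2 S log(N/S), recovery is exact with
# overwhelming probability over the draw of X.
print(np.linalg.norm(f - f_sharp))
```

Here the signal is exactly sparse, so the reconstruction error is essentially zero; for a merely compressible <tex>$f$</tex> obeying the power-law decay above, the same program returns an approximation within the stated <tex>$(K/\log N)^{-r}$</tex> error bound.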