Robust 1-bit Compressed Sensing and Sparse Logistic Regression: A Convex Programming Approach

Type: Article

Publication Date: 2012-09-04

Citations: 449

DOI: https://doi.org/10.1109/tit.2012.2207945

Abstract

This paper develops theoretical results regarding noisy 1-bit compressed sensing and sparse binomial regression. We demonstrate that a single convex program gives an accurate estimate of the signal, or coefficient vector, for both of these models. We show that an s-sparse signal in R^n can be accurately estimated from m = O(s log(n/s)) single-bit measurements using a simple convex program. This remains true even if each measurement bit is flipped with probability nearly 1/2. Worst-case (adversarial) noise can also be accounted for, and uniform results that hold for all sparse inputs are derived as well. In the terminology of sparse logistic regression, we show that O(s log(2n/s)) Bernoulli trials are sufficient to estimate a coefficient vector in R^n which is approximately s-sparse. Moreover, the same convex program works for virtually all generalized linear models, in which the link function may be unknown. To our knowledge, these are the first results that tie the theory of sparse logistic regression to 1-bit compressed sensing. Our results apply to general signal structures aside from sparsity; one only needs to know the size of the set K where signals reside. The size is given by the mean width of K, a computable quantity whose square serves as a robust extension of the dimension.
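
The convex program the abstract refers to estimates the signal by maximizing correlation with the quantized measurements over a relaxed sparsity set: x_hat = argmax <y, Ax> subject to ||x||_2 <= 1 and ||x||_1 <= sqrt(s). The numpy sketch below is illustrative, not the paper's reference code: because the objective is linear, one convenient way to solve the program is to soft-threshold u = A^T y and normalize, choosing the threshold by bisection so the l1 constraint is satisfied; the problem sizes (n, s, m), the bisection depth, and all variable names are assumptions made for the demo.

```python
import numpy as np

def soft_threshold(u, lam):
    """Shrink each entry of u toward zero by lam."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def one_bit_estimate(A, y, s):
    """Solve max <y, A x> s.t. ||x||_2 <= 1, ||x||_1 <= sqrt(s).

    The objective is linear in x, so the maximizer has the form
    soft_threshold(u, lam) / ||soft_threshold(u, lam)||_2 with
    u = A^T y, where lam >= 0 is chosen (by bisection) just large
    enough that the l1 constraint holds.
    """
    u = A.T @ y
    t = np.sqrt(s)
    x = u / np.linalg.norm(u)
    if np.abs(x).sum() <= t:  # l1 constraint inactive: lam = 0
        return x
    lo, hi = 0.0, np.abs(u).max()
    for _ in range(100):
        lam = 0.5 * (lo + hi)
        v = soft_threshold(u, lam)
        nv = np.linalg.norm(v)
        if nv == 0.0:          # threshold too aggressive
            hi = lam
        elif np.abs(v).sum() / nv > t:  # l1 constraint still violated
            lo = lam
        else:
            hi = lam
    v = soft_threshold(u, hi)
    return v / np.linalg.norm(v)

# Demo: recover an s-sparse unit vector from 1-bit Gaussian measurements.
rng = np.random.default_rng(0)
n, s, m = 100, 5, 2000
x_true = np.zeros(n)
x_true[:s] = rng.standard_normal(s)
x_true /= np.linalg.norm(x_true)
A = rng.standard_normal((m, n))
y = np.sign(A @ x_true)        # only the sign of each measurement is kept
x_hat = one_bit_estimate(A, y, s)
print("correlation with truth:", x_hat @ x_true)
```

Per the abstract's guarantee, with m on the order of s log(n/s) measurements the estimate is well correlated with the true signal, even though every measurement retains only one bit.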

Locations

  • arXiv (Cornell University) - View - PDF
  • IEEE Transactions on Information Theory - View

Similar Works

  • Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach (2012). Yaniv Plan, Roman Vershynin
  • Robust 1-bit Compressed Sensing with Iterative Hard Thresholding (2024). Namiko Matsumoto, Arya Mazumdar
  • Robust 1-bit Compressed Sensing with Iterative Hard Thresholding (2023). Namiko Matsumoto, Arya Mazumdar
  • Compressed Sensing with Adversarial Sparse Noise via L1 Regression (2018). Sushrut Karmalkar, Eric Price
  • Pinball loss minimization for one-bit compressive sensing: Convex models and algorithms (2018). Xiaolin Huang, Lei Shi, Ming Yan, Johan A. K. Suykens
  • Support Recovery of Sparse Signals from a Mixture of Linear Measurements (2021). Venkata Gandikota, Arya Mazumdar, Soumyabrata Pal
  • One-Bit Compressed Sensing via One-Shot Hard Thresholding (2020). Jie Shen
  • Robust Decoding from 1-Bit Compressive Sampling with Least Squares (2017). Jian Huang, Yuling Jiao, Xiliang Lu, Li Zhu
  • One-Bit Compressive Sensing of Dictionary-Sparse Signals (2016). R.G. Baraniuk, Simon Foucart, Deanna Needell, Yaniv Plan, Mary Wootters
  • Connections between sparse estimation and robust statistical learning (2013). Efthymios Tsakonas, Joakim Jaldén, Nicholas D. Sidiropoulos, Björn Ottersten
  • Shannon Theoretic Limits on Noisy Compressive Sampling (2007). Mehmet Akçakaya, Vahid Tarokh
  • One-bit compressed sensing by linear programming (2011). Yaniv Plan, Roman Vershynin
  • One-bit compressive sensing of dictionary-sparse signals (2017). Richard G. Baraniuk, Simon Foucart, Deanna Needell, Yaniv Plan, Mary Wootters
  • One-bit compressed sensing with non-Gaussian measurements (2012). Albert Ai, Alex Lapanowski, Yaniv Plan, Roman Vershynin
  • Optimal linear estimation under unknown nonlinear transform (2015). Xinyang Yi, Zhaoran Wang, Constantine Caramanis, Han Liu

Works That Cite This (282)

  • Linear regression with sparsely permuted data (2019). Martin Slawski, Emanuel Ben‐David
  • Generalized high-dimensional trace regression via nuclear norm regularization (2019). Jianqing Fan, Wenyan Gong, Ziwei Zhu
  • Fast Signal Recovery From Saturated Measurements by Linear Loss and Nonconvex Penalties (2018). Fan He, Xiaolin Huang, Yipeng Liu, Ming Yan
  • Approximate Message Passing With Parameter Estimation for Heavily Quantized Measurements (2022). Shuai Huang, Deqiang Qiu, Trac D. Tran
  • Sparse Multinomial Logistic Regression via Approximate Message Passing (2016). Evan Byrne, Philip Schniter
  • An extended Newton-type algorithm for ℓ2-regularized sparse logistic regression and its efficiency for classifying large-scale datasets (2021). Rui Wang, Naihua Xiu, Shenglong Zhou
  • Quickly Finding the Best Linear Model in High Dimensions via Projected Gradient Descent (2020). Yahya Sattar, Samet Oymak
  • A one-bit, comparison-based gradient estimator (2022). HanQin Cai, Daniel McKenzie, Wotao Yin, Zhenliang Zhang
  • Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors (2013). Laurent Jacques, Jason N. Laska, Petros T. Boufounos, Richard G. Baraniuk
  • One-Bit Radar Processing With Time-Varying Sampling Thresholds (2019). Aria Ameri, Arindam Bose, Jian Li, Mojtaba Soltanalian