Commun. Comput. Phys., 38 (2025), pp. 1389-1416.
Published online: 2025-09
The principal goal of 1-bit compressive sampling is to decode an $n$-dimensional signal with sparsity level $s$ from $m$ binary measurements. This task is challenging due to nonlinearity, noise, and sign flips. In this paper, we propose the cardinality-constrained least squares decoder and establish that, with high probability, it achieves the minimax estimation error up to a constant $c$, provided that $m \geq \mathcal{O}(s \log n)$. On the computational side, we employ a generalized Newton algorithm (GNA) to solve the cardinality-constrained minimization problem; each iteration incurs only the cost of solving a small least squares problem. Through rigorous analysis, we demonstrate that, with high probability, the $\ell_\infty$ norm of the estimation error between the output of GNA and the underlying target decays to $\mathcal{O}\big(\sqrt{\frac{\log n}{m}}\big)$ after at most $\mathcal{O}(\log s)$ iterations. Furthermore, provided that the target signal is detectable, GNA recovers the underlying support with high probability within $\mathcal{O}(\log s)$ steps. Extensive numerical simulations and comparisons with state-of-the-art methods demonstrate the robustness of the proposed decoder and the efficiency of GNA.
issn: 1991-7120 · doi: https://doi.org/10.4208/cicp.OA-2023-0285 · url: http://global-sci.org/intro/article_detail/cicp/24461.html
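To make the problem setup concrete, the sketch below simulates 1-bit measurements $y = \mathrm{sign}(Ax)$ with random sign flips and approximately solves the cardinality-constrained least squares problem by projected gradient descent with hard thresholding. This is a minimal illustration, not the paper's generalized Newton algorithm; all function names, step-size choices, and problem dimensions here are assumptions made for the demo.

```python
import numpy as np

def one_bit_measure(A, x, flip_prob=0.0, rng=None):
    """1-bit measurements y = sign(Ax), with each sign flipped w.p. flip_prob."""
    rng = rng or np.random.default_rng(0)
    y = np.sign(A @ x)
    y[rng.random(len(y)) < flip_prob] *= -1.0
    return y

def hard_threshold(x, s):
    """Project onto the cardinality constraint: keep the s largest-magnitude entries."""
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    z[idx] = x[idx]
    return z

def decode(A, y, s, n_iter=50):
    """Approximate the cardinality-constrained least squares decoder
    min ||y - Ax||^2 s.t. ||x||_0 <= s via projected gradient descent."""
    m, n = A.shape
    step = 1.0 / m          # rows of A are i.i.d. N(0,1), so A^T A / m ~ I
    x = np.zeros(n)
    for _ in range(n_iter):
        x = hard_threshold(x + step * (A.T @ (y - A @ x)), s)
    nrm = np.linalg.norm(x)
    return x / nrm if nrm > 0 else x   # 1-bit measurements lose scale; return direction

# Demo: recover the direction of a unit-norm s-sparse signal.
rng = np.random.default_rng(1)
n, s, m = 200, 5, 800               # m on the order of s*log(n), with slack
x_true = np.zeros(n)
supp = rng.choice(n, s, replace=False)
x_true[supp] = rng.standard_normal(s)
x_true /= np.linalg.norm(x_true)
A = rng.standard_normal((m, n))
y = one_bit_measure(A, x_true, flip_prob=0.05, rng=rng)
x_hat = decode(A, y, s)
```

The hard-thresholding projection is what makes the cardinality constraint tractable per iteration: each step is a cheap gradient update followed by keeping the $s$ largest entries, echoing (in a much simpler form) the paper's point that each iteration reduces to a small least squares problem.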