Abstract

Under certain conditions (known as the restricted isometry property, or RIP) on the m × N matrix Φ (where m < N), vectors x ∈ ℝ^N that are sparse (i.e., have most of their entries equal to 0) can be recovered exactly from y := Φx even though Φ^{−1}(y) is typically an (N − m)-dimensional hyperplane; in addition, x is then equal to the element in Φ^{−1}(y) of minimal ℓ_1-norm. This minimal element can be identified via linear programming algorithms. We study an alternative method of determining x, as the limit of an iteratively reweighted least squares (IRLS) algorithm. The main step of this IRLS finds, for a given weight vector w, the element in Φ^{−1}(y) with smallest ℓ_2(w)-norm. If x^{(n)} is the solution at iteration step n, then the new weight w^{(n)} is defined by w_i^{(n)} := [|x_i^{(n)}|² + ε_n²]^{−1/2}, i = 1, …, N, for a decreasing sequence of adaptively defined ε_n; this updated weight is then used to obtain x^{(n+1)} and the process is repeated. We prove that when Φ satisfies the RIP conditions, the sequence x^{(n)} converges for all y, regardless of whether Φ^{−1}(y) contains a sparse vector. If there is a sparse vector in Φ^{−1}(y), then the limit is this sparse vector, and when x^{(n)} is sufficiently close to the limit, the remaining steps of the algorithm converge exponentially fast (linear convergence in the terminology of numerical optimization). The same algorithm with the "heavier" weight w_i = [|x_i|² + ε²]^{−1+τ/2}, i = 1, …, N, where 0 < τ < 1, can recover sparse solutions as well; more importantly, we show its local convergence is superlinear and approaches a quadratic rate for τ approaching 0. © 2009 Wiley Periodicals, Inc.
Published in: Communications on Pure and Applied Mathematics
Volume 63, Issue 1, pp. 1-38
DOI: 10.1002/cpa.20303
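The IRLS iteration described in the abstract can be sketched in a few lines of numpy. Each step solves the weighted least-squares problem min Σᵢ wᵢ xᵢ² subject to Φx = y, which has the closed form x = D Φᵀ(Φ D Φᵀ)⁻¹ y with D = diag(1/w), and then updates the weights as wᵢ = (|xᵢ|² + ε²)^{τ/2 − 1} (τ = 1 gives the exponent −1/2 from the abstract). The ε-update rule and sparsity parameter K below are illustrative assumptions, not the paper's exact adaptive rule; this is a minimal sketch, not the authors' implementation.

```python
import numpy as np

def irls_sparse(Phi, y, n_iter=60, tau=1.0):
    """Sketch of IRLS for sparse recovery from y = Phi @ x.

    Each iteration solves a weighted least-squares problem on the
    affine solution set {x : Phi x = y}, then reweights. The epsilon
    schedule here (K-th largest residual entry, with a small floor)
    is an illustrative stand-in for the paper's adaptive rule.
    """
    m, N = Phi.shape
    w = np.ones(N)          # initial weights: first step is plain least-norm
    eps = 1.0
    K = min(m // 2, N - 1)  # assumed sparsity parameter for the eps update
    for _ in range(n_iter):
        D = np.diag(1.0 / w)
        # minimizer of sum_i w_i x_i^2 subject to Phi x = y
        x = D @ Phi.T @ np.linalg.solve(Phi @ D @ Phi.T, y)
        # decrease eps toward 0 as x concentrates on few entries;
        # the floor keeps Phi D Phi^T well-conditioned
        r = np.sort(np.abs(x))[::-1]          # nonincreasing rearrangement
        eps = min(eps, max(r[K] / N, 1e-9))
        # weight update: w_i = (|x_i|^2 + eps^2)^{tau/2 - 1}
        w = (np.abs(x) ** 2 + eps ** 2) ** (tau / 2 - 1)
    return x
```

With a random Gaussian Φ (which satisfies RIP with high probability for sufficiently sparse targets), this sketch recovers a sparse x₀ from m ≪ N measurements; taking 0 < τ < 1 instead of τ = 1 corresponds to the "heavier" weight variant discussed in the abstract.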