This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem and arises in many important applications, such as the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices {X^k, Y^k}, and at each step mainly performs a soft-thresholding operation on the singular values of the matrix Y^k. Two remarkable features make it attractive for low-rank matrix completion problems: first, the soft-thresholding operation is applied to a sparse matrix; second, the rank of the iterates {X^k} is empirically nondecreasing. Both facts allow the algorithm to use very minimal storage and keep the computational cost of each iteration low. On the theoretical side, we provide a convergence analysis showing that the sequence of iterates converges. On the practical side, we provide numerical examples in which 1,000 × 1,000 matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries.
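The iteration summarized above — soft-threshold the singular values of Y^k to obtain X^k, then update Y^k on the observed entries — can be sketched in a few lines of NumPy. This is a minimal dense-matrix illustration under our own assumptions, not the paper's sparse, large-scale implementation; the helper name `svt_step` and the parameter choices for `tau` and `delta` are ours and follow common heuristics rather than anything prescribed here.

```python
import numpy as np

def svt_step(Y, M_obs, mask, tau, delta):
    """One sketch iteration of singular value thresholding.

    Y     : current dual iterate (dense for simplicity)
    M_obs : matrix equal to M on observed entries, 0 elsewhere
    mask  : boolean array, True where entries are observed
    tau   : soft-thresholding level for the singular values
    delta : step size
    """
    # Soft-threshold the singular values of Y to get the primal iterate X.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    X = (U * s_shrunk) @ Vt
    # Gradient-style update of Y on the observed entries only.
    Y_next = Y + delta * mask * (M_obs - X)
    return X, Y_next

# Tiny demo (our toy setup): recover a rank-1 matrix from 60% of its entries.
rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(30), rng.standard_normal(30))
mask = rng.random(M.shape) < 0.6
tau, delta = 5 * 30, 1.2   # heuristic choices for a 30 x 30 problem
Y = np.zeros_like(M)
for _ in range(300):
    X, Y = svt_step(Y, M * mask, mask, tau, delta)
```

In the full algorithm the update to Y touches only the sampled entries, so Y stays sparse, and the shrunken X is stored in low-rank factored form; together these give the small memory footprint mentioned above.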
Our methods are connected with the recent literature on linearized Bregman iterations for ℓ_1 minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.