Abstract
The low-rank matrix completion theory based on nuclear-norm minimization is becoming increasingly popular in practice, and has been applied to seismic data reconstruction of missing traces. In this paper, we investigate a weighted nuclear-norm minimization model on the low-rank seismic texture matrix obtained with a designed texture-patch pre-transformation. Unlike the previous nuclear-norm model, which treats all singular values of the seismic data matrix equally, the weighted nuclear-norm model, based on the idea of iterative reweighting, makes a closer approximation to the 'counting rank function' and can better promote low-rank matrices. We apply a weighted spectral soft-thresholding iterative scheme and a weighted iterative support detection method to solve the weighted model. Experimental study on synthetic and real seismic data shows that high-quality reconstructions can be obtained by the weighted low-rank methods.
1 Introduction
Seismic data are often irregularly or sparsely sampled along spatial coordinates, owing to surface obstacles, dead or severely contaminated traces, instrument failures, no-permit areas and economic constraints. Reconstruction and interpolation of missing data in seismic records is a critical element of the data processing chain and has become a main issue in seismic research. The reconstruction quality may directly affect subsequent processing steps such as migration, multiple suppression and amplitude-vs-azimuth analysis.
Many different techniques have been proposed to reconstruct missing traces. In the last several years, sparsity-promoting $\ell_1$-norm minimization methods have been successfully applied to seismic data reconstruction of missing traces. They commonly require a sparse representation of the data in a transformation domain, such as the Radon transform,[Citation1] Fourier transform [Citation2, Citation3] and curvelet transform.[Citation4, Citation5]
Besides the sparsity-promoting methods, rank-reduction methods have been proposed for seismic data reconstruction with missing traces. The rank-reduction methods mainly fall into two categories: Cadzow filtering and nuclear-norm minimization. The Cadzow methods first transfer the data into the frequency-space (f-x) domain, then generate a Hankel matrix from the spatial samples for each frequency, and finally perform a rank-reduced eigen-decomposition of the Hankel matrix. This method guarantees that the Hankel matrix is low rank for seismic data with linear events. However, it incurs a huge computational cost, because one has to repeatedly form and decompose a Hankel matrix for every frequency. The Cadzow method was first used for seismic interpolation by Trickett et al. [Citation6], and was then extended to high dimensions [Citation7] and to de-aliased reconstruction.[Citation8]
The low-rank matrix completion theory based on nuclear-norm minimization is becoming increasingly popular in practice, and has been applied to seismic data reconstruction with missing traces. Yang et al. [Citation9] discovered the connection between seismic data reconstruction and the general matrix completion problem, and first formulated seismic data reconstruction as a nuclear-norm minimization problem by building a pre-transformation. Recently, Ma [Citation10] extended low-rank matrix completion with a designed texture-patch pre-transformation to three-dimensional seismic data reconstruction. The matrix completion problem [Citation11, Citation12] of finding the lowest rank matrix consistent with a subset of its entries is an important special case of low-rank matrix recovery arising in many applications. However, the matrix completion model is generally NP-hard due to the combinatorial nature of the rank function. As is known, nuclear-norm minimization is a convex relaxation of the original rank-minimization problem,[Citation13] so the matrix completion problem can be partially justified through a proper nuclear-norm minimization model. Nowadays, many efficient algorithms have been proposed to solve the nuclear-norm minimization, including the singular value thresholding algorithm (IST or SVT) [Citation14] using soft-thresholding operations on the singular values of a certain matrix at each iteration, the accelerated proximal gradient method [Citation15] based on a fast iterative shrinkage-thresholding algorithm for compressed sensing, the augmented Lagrangian and alternating direction methods [Citation16] and fixed-point continuation with Bregman iteration.[Citation17] All of these algorithms bear the computational cost of singular value decompositions at each iteration, which is very expensive, particularly for large-scale problems.
Alternatively, one can apply the low-rank factorization model (low-rank matrix fitting, LMaFit) [Citation18], based on a non-linear successive over-relaxation algorithm, which avoids the singular value decomposition at each iteration. This method is usually much faster than the singular value decomposition based methods.[Citation9]
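To illustrate the factorization idea, a low-rank fit can be computed without any SVD by alternating least squares over the observed entries. The following is a hypothetical NumPy sketch of this simpler stand-in (not LMaFit's actual successive over-relaxation scheme; the function name and parameters are our own):

```python
import numpy as np

def als_complete(M, mask, r, n_iter=50, reg=1e-6):
    """Fit X ~ U @ V of rank r to the observed entries of M by
    alternating least squares (a simpler stand-in for LMaFit's
    successive over-relaxation scheme)."""
    n1, n2 = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((n1, r))
    V = rng.standard_normal((r, n2))
    for _ in range(n_iter):
        for j in range(n2):                     # update each column of V
            rows = np.flatnonzero(mask[:, j])   # observed rows in column j
            A = U[rows]
            V[:, j] = np.linalg.solve(A.T @ A + reg * np.eye(r),
                                      A.T @ M[rows, j])
        for i in range(n1):                     # update each row of U
            cols = np.flatnonzero(mask[i])      # observed columns in row i
            B = V[:, cols].T
            U[i] = np.linalg.solve(B.T @ B + reg * np.eye(r),
                                   B.T @ M[i, cols])
    return U @ V
```

Each inner update is a tiny regularized least-squares problem, so the cost per sweep is linear in the number of observed entries for fixed rank `r`.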
From an algorithmic viewpoint, one central idea of most of the above-mentioned solution methods is the convex relaxation of the $\ell_0$-functional (the function counting the non-zero elements of a vector) and of the rank function. A weak point of the nuclear-norm minimization algorithms is that they treat all singular values of the sampled matrix equally. Our work is inspired by [Citation19], which extended the principle of iterative reweighting of the $\ell_1$-norm to the matrix completion problem. The weighted nuclear-norm minimization model, based on the idea of iterative reweighting, makes a closer approximation to the 'counting rank function' when the chosen weights are close to the singular values of the matrix; thus, it promotes lower rank matrices.
In this paper, we also consider two-dimensional seismic data reconstruction with missing traces. We follow Yang et al.'s method [Citation9], which uses the texture-patch mapping, but consider the weighted nuclear-norm minimization model to complete the low-rank texture-patch matrix in order to obtain a high-quality reconstruction. However, the weighted nuclear-norm is generally non-convex. We define a new approximate model formulation by adding a ridge term to the weighted nuclear-norm penalization. Algorithmically, we introduce a weighted spectral soft-thresholding iterative scheme (WSST) and a weighted iterative support detection method (WISD) to approximate the minimizer of the resulting weighted model. They are compared numerically on seismic data reconstruction with existing algorithms such as IST [Citation14] and LMaFit.[Citation18]
This paper is organized as follows. In Section 2, we give basic definitions of matrix completion and briefly review the construction of the low-rank texture-patch matrix. Section 3 presents a weighted nuclear-norm minimization model and provides two efficient algorithms to approximate this weighted model. Numerical experiments are carried out to evaluate the performance of the proposed methods for seismic data reconstruction in Section 4. Finally, we draw conclusions in Section 5.
2 Matrix completion and pre-transformation
Matrix completion can be understood as a generalization of compressed sensing from vectors to matrices, and has attracted much attention recently, see [Citation11, Citation12, Citation20, Citation21]. The aim is to fill in the missing values of a matrix from a sufficient number of known entries under the assumption that the matrix is of low rank. According to the matrix completion theory, one can recover the randomly missing data by solving the following rank-minimization model:

$$\min_{X} \ \operatorname{rank}(X) \quad \text{s.t.} \quad X_{ij} = M_{ij}, \quad (i,j) \in \Omega, \tag{2.1}$$

where $X \in \mathbb{R}^{n_1 \times n_2}$ is the original seismic data that we want to reconstruct, $M$ is the incompletely observed/sampled seismic data with missing traces, $\operatorname{rank}(X)$ stands for the rank of the matrix $X$ (i.e. the number of non-zero singular values of $X$) and $\Omega$ denotes an index subset corresponding to the observed entries, with cardinality $m = |\Omega|$.
We define the masking operator $P_\Omega$ as follows:

$$[P_\Omega(X)]_{ij} = \begin{cases} X_{ij}, & (i,j) \in \Omega, \\ 0, & \text{otherwise}. \end{cases}$$

Here, $P_\Omega$ stands for the projection onto the observation subspace of matrices whose non-zeros are restricted to the index subset $\Omega$. Since noise is usually unavoidable in the measurements, the above model (2.1) is often relaxed to the rank-regularized unconstrained optimization problem:

$$\min_{X} \ \operatorname{rank}(X) + \frac{\lambda}{2}\,\|P_\Omega(X) - P_\Omega(M)\|_F^2, \tag{2.2}$$

where $\|\cdot\|_F$ is the Frobenius norm, i.e. the square root of the sum of squares of the entries, and $\lambda > 0$ is a properly tuned parameter.
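For concreteness, the masking operator $P_\Omega$ amounts to zeroing the unobserved entries. A minimal NumPy sketch, where a boolean `mask` array stands in for the index set $\Omega$ (our own convention):

```python
import numpy as np

def masking_operator(X, mask):
    """Projection P_Omega: keep the observed entries, zero elsewhere.

    `mask` is a boolean array of the same shape as X, True on Omega.
    """
    return np.where(mask, X, 0.0)

# Small illustration on a 3x3 matrix with two observed entries.
X = np.arange(9.0).reshape(3, 3)
mask = np.zeros((3, 3), dtype=bool)
mask[0, 1] = mask[2, 2] = True
P = masking_operator(X, mask)   # only entries (0,1) and (2,2) survive
```

Note that $P_\Omega$ is a linear, idempotent operator, which is what the iterative schemes below exploit.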
However, the above models are unfortunately NP-hard and very hard to solve in practice even for small matrices, which stems from the non-convexity and discontinuity of the rank function. As is known, the nuclear-norm (i.e. the $\ell_1$-norm of the singular values), which we denote by $\|X\|_*$, is the convex envelope of the rank function over the unit ball of the operator norm, and is a widely used surrogate of the rank function to induce low-rank solutions for many optimization problems with matrix variables. Then, the above problems can be efficiently solved under suitable assumptions on $M$ by considering their convex relaxations:

$$\min_{X} \ \|X\|_* \quad \text{s.t.} \quad P_\Omega(X) = P_\Omega(M), \tag{2.3}$$

where $\|X\|_* = \sum_i \sigma_i(X)$, $\sigma_i(X)$ denotes the $i$th largest singular value of $X$ and the $\sigma_i(X)$ are in decreasing order, or

$$\min_{X} \ \|X\|_* + \frac{\lambda}{2}\,\|P_\Omega(X) - P_\Omega(M)\|_F^2. \tag{2.4}$$

In the following, the vector of the singular values of the matrix $X$ is denoted by $\sigma(X) = (\sigma_1(X), \ldots, \sigma_n(X))$, sorted in non-increasing order. It has been shown by Candès and Recht [Citation11] that the above convex relaxations can exactly recover the original sparse and low-rank matrix under quite general conditions. The relationship between the nuclear-norm and the rank function of a matrix is analogous to the relationship between the $\ell_1$-norm and the $\ell_0$-norm of a vector in the area of sparse signal recovery. The solution to the regularized version (2.4) is given by the iterative singular value soft-thresholding scheme (IST):

$$X^{k+1} = \mathcal{S}_{\tau}\big(X^{k} + P_\Omega(M - X^{k})\big), \tag{2.5}$$

where $\mathcal{S}_{\tau}$ is the soft-thresholding operator defined for every $\tau > 0$ by applying $s_\tau(x) = \max(x - \tau, 0)$ to the singular values, i.e. $\mathcal{S}_\tau(X) = U \Sigma_\tau V^{T}$ with the diagonal matrix $\Sigma_\tau = \operatorname{diag}\big(\max(\sigma_i(X) - \tau, 0)\big)$, where $X = U \Sigma V^{T}$ is the singular value decomposition of $X$.
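The scheme (2.5) can be sketched in a few lines of NumPy. This is an illustrative sketch under the assumption of a fixed threshold `tau` and unit step size (the function names are our own, not those of the IST_MC code of [Citation24]):

```python
import numpy as np

def svd_soft_threshold(Z, tau):
    """Apply soft-thresholding max(sigma - tau, 0) to the singular values of Z."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def ist_matrix_completion(M, mask, tau, n_iter=200):
    """Iterative singular value soft-thresholding, scheme (2.5):
    X^{k+1} = S_tau( X^k + P_Omega(M - X^k) )."""
    X = np.zeros_like(M)
    for _ in range(n_iter):
        # re-insert the observed entries, keep current estimate elsewhere
        Z = X + np.where(mask, M - X, 0.0)
        X = svd_soft_threshold(Z, tau)
    return X
```

For a small low-rank test matrix with most entries observed, a few hundred iterations with a small `tau` recover the missing entries to good accuracy.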
In the compressed sensing framework, one necessary condition for successfully applying sparsity-promoting inversion to improve the recovery is that the signal or image of interest is sparse or compressible in the transformation domain. Analogously, in the matrix completion framework, since we consider the case where the number $m$ of observed entries is much smaller than the total number $n_1 n_2$ of entries, the matrix completion problem is in general severely ill-posed. In order to obtain a meaningful result, one needs to impose a sparsity assumption on the original data $X$, which is done by assuming that $X$ is of low rank. If the original data are not of low rank, we should find a pre-transformation that reorganizes the original data into a low-rank representation.
In this paper, we process the original data using the so-called texture transformation, which was defined as the base texture in [Citation22] and applied to seismic data recovery in [Citation9]. There are two motivations for using the texture transformation. First, a seismic sampling matrix with missing traces contains some 'zero' columns, but according to the matrix completion theory the known entries must be sufficiently and uniformly distributed; that is to say, one needs at least one non-zero entry per column to have any hope of successful reconstruction. The texture transformation reshapes the original data with missing traces into a new format with missing pixels, avoiding these 'zero' columns. Second, after the texture transformation the new matrix has low rank. In the following, we briefly describe the texture transformation by defining a texture-patch matrix. For the given original data $X$, we first view it as a block matrix consisting of the sub-matrices (patches) $X_{ij} \in \mathbb{R}^{p \times p}$, as follows [Citation22]:

$$X = \begin{pmatrix} X_{11} & \cdots & X_{1c} \\ \vdots & \ddots & \vdots \\ X_{r1} & \cdots & X_{rc} \end{pmatrix}.$$

Next, all columns of each patch $X_{ij}$ are rearranged into one column vector of length $p^2$, denoted by $\operatorname{vec}(X_{ij})$. Then, the texture-patch matrix is defined as

$$\widetilde{X} = \mathcal{P}(X) = \big(\operatorname{vec}(X_{11}), \ldots, \operatorname{vec}(X_{r1}), \ldots, \operatorname{vec}(X_{1c}), \ldots, \operatorname{vec}(X_{rc})\big),$$

where the operator $\mathcal{P}$ is called the patch mapping for organizing the new matrix. The obtained texture-patch matrix $\widetilde{X}$ has lower rank than $X$ itself.[Citation22] We can reconstruct $X$ by recovering $\widetilde{X}$ with proper matrix completion algorithms.
We first reconstruct $\widetilde{X}$ with the following matrix completion model:

$$\min_{\widetilde{X}} \ \|\widetilde{X}\|_* + \frac{\lambda}{2}\,\big\|P_{\widetilde{\Omega}}(\widetilde{X}) - P_{\widetilde{\Omega}}\big(\mathcal{P}(M)\big)\big\|_F^2, \tag{2.6}$$

where $\widetilde{\Omega}$ denotes the index set of observed entries after the patch mapping, and then obtain the reconstructed original matrix $X$ by the inverse transformation $X = \mathcal{P}^{-1}(\widetilde{X})$.

Denoting by $\mathcal{A} = P_{\widetilde{\Omega}} \circ \mathcal{P}$ the reorganized mapping, (2.6) can be formulated as

$$\min_{X} \ \|\mathcal{P}(X)\|_* + \frac{\lambda}{2}\,\|\mathcal{A}(X) - \mathcal{A}(M)\|_F^2. \tag{2.7}$$

For simplicity of notation, in the next section we omit the tilde.
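The patch mapping $\mathcal{P}$ and its inverse are pure index bookkeeping. A possible NumPy sketch, in which the patch size `p` and the column-major ordering over the block grid are our own assumptions:

```python
import numpy as np

def patch_map(X, p):
    """Texture-patch mapping: each p-by-p block of X becomes one column."""
    n1, n2 = X.shape
    assert n1 % p == 0 and n2 % p == 0
    cols = []
    for j in range(0, n2, p):            # sweep the block grid column-major
        for i in range(0, n1, p):
            cols.append(X[i:i + p, j:j + p].flatten(order="F"))
    return np.stack(cols, axis=1)        # shape (p*p, num_blocks)

def patch_unmap(Y, p, shape):
    """Inverse patch mapping: put each column back as a p-by-p block."""
    n1, n2 = shape
    X = np.empty(shape)
    k = 0
    for j in range(0, n2, p):
        for i in range(0, n1, p):
            X[i:i + p, j:j + p] = Y[:, k].reshape(p, p, order="F")
            k += 1
    return X
```

Since the mapping is a permutation of entries, `patch_unmap(patch_map(X, p), p, X.shape)` returns `X` exactly, and a missing trace (zero column of `X`) becomes scattered missing pixels in the patch matrix rather than a zero column.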
3 The weighted model for matrix completion
In this section, we consider the weighted nuclear-norm minimization model: for a given weight vector $w = (w_1, \ldots, w_n)$ with $w_i \ge 0$,

$$\min_{X} \ \|X\|_{w,*} + \frac{\lambda}{2}\,\|P_\Omega(X) - P_\Omega(M)\|_F^2, \tag{3.1}$$

where $\|X\|_{w,*}$ denotes the weighted sum of the singular values of $X$, i.e. $\|X\|_{w,*} = \sum_i w_i\, \sigma_i(X)$, here named the weighted nuclear-norm.
Since the aim of the weighting is to promote low-rank matrices, the weight vector should be chosen non-increasing. Assuming that $\widehat{X}$ is a solution of (2.4), we would now like to use the idea of reweighting, with weight estimates computed from the singular values $\sigma(\widehat{X})$, and find a solution to problem (3.1) for this choice of weights. We note that, when the weight vector $w$ is close to $\sigma(\widehat{X})$, the weighted nuclear-norm makes a closer approximation to the 'counting rank function' than the plain nuclear-norm does. Thus, it promotes low-rank matrices and allows us to obtain better reconstructed results. Unfortunately, although we call $\|\cdot\|_{w,*}$ the weighted nuclear-norm, it is in general not a norm, since it is typically non-convex, so (3.1) cannot easily be used as an objective function. Consequently, (3.1) is not a convex minimization problem in general, and a minimization algorithm is very likely to admit several locally optimal solutions; hence there is no efficient method to solve it exactly. In order to alleviate the problem of local minima and obtain a meaningful solution, the first idea that comes to mind is to use a convex relaxation of the non-convex function $\|\cdot\|_{w,*}$ (just as the rank function is relaxed to the nuclear-norm). A ridge term is therefore added to the weighted nuclear-norm penalization, and a new approximate estimator model is defined as follows:

$$\min_{X} \ \|X\|_{w,*} + \frac{\lambda}{2}\,\|P_\Omega(X) - P_\Omega(M)\|_F^2 + \frac{\epsilon}{2}\,\|X\|_F^2 \tag{3.2}$$

for any $\epsilon > 0$. The minimizer of (3.2) exists and is unique this time (for more details see Proposition 2 of [Citation19]), since the objective function is now strongly convex.
Proposition 2
[Citation19] Let $Z \in \mathbb{R}^{n_1 \times n_2}$, let $w$ be a weight vector with $w_i \ge 0$ and let $\epsilon > 0$. Then the minimization problem

$$\min_{X} \ \frac{1}{2}\,\|X - Z\|_F^2 + \|X\|_{w,*} + \frac{\epsilon}{2}\,\|X\|_F^2$$

has a unique solution, given by $X^* = \frac{1}{1+\epsilon}\,\mathcal{S}_w(Z)$, where $\mathcal{S}_w$ denotes the weighted soft-shrinkage operator with

$$\mathcal{S}_w(Z) = U \operatorname{diag}\big(\max(\sigma_i(Z) - w_i,\, 0)\big) V^{T}, \quad Z = U \Sigma V^{T}.$$

Hence the solution of (3.2) can be obtained by the following iteration scheme:

$$X^{k+1} = \frac{1}{1+\epsilon}\,\mathcal{S}_{w}\big(X^{k} + P_\Omega(M - X^{k})\big), \tag{3.3}$$

where the regularization parameter $\lambda$ is absorbed into the weights $w$. The parameter $\epsilon$ can be arbitrarily small, which ensures the uniqueness and convergence of the iterative scheme (3.3). It needs emphasizing that the solution of the iterative scheme (3.3) with small $\epsilon$ is not equivalent to the solution of problem (3.1), since the weighted nuclear-norm $\|\cdot\|_{w,*}$ is not convex.
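The weighted soft-shrinkage operator and the iteration (3.3) can be sketched as follows (a hypothetical NumPy sketch; the weight vector `w` is assumed given and to absorb $\lambda$, and the function names are our own):

```python
import numpy as np

def weighted_svd_shrink(Z, w):
    """Weighted soft-shrinkage S_w: shrink the i-th singular value by w[i]."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - w[:len(s)], 0.0)) @ Vt

def weighted_iteration(M, mask, w, eps=1e-3, n_iter=300):
    """Iteration (3.3) for the ridge-relaxed weighted model (3.2):
    X^{k+1} = S_w( X^k + P_Omega(M - X^k) ) / (1 + eps)."""
    X = np.zeros_like(M)
    for _ in range(n_iter):
        Z = X + np.where(mask, M - X, 0.0)
        X = weighted_svd_shrink(Z, w) / (1.0 + eps)
    return X
```

Compared with the uniform threshold of IST, a weight vector that is small on the leading singular values and large on the trailing ones barely biases the signal subspace while suppressing spurious small singular values.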
In the following, two effective algorithms are described in detail for the relaxed weighted model (3.2), to approximate the solution of (3.1) starting from a given initial approximation. The first is the WSST method proposed by Gaiffas et al. [Citation19], outlined in Algorithm 1. We first obtain a solution of the nuclear-norm minimization problem (2.4), denoted by $\widehat{X}$. Then we update the weights from the singular values $\sigma(\widehat{X})$, and start over. We use the iteration scheme (3.3) to find the approximate solution, simply repeating the process of updating the weights. Through this process, we typically achieve the goal of decreasing the final rank produced by the WSST algorithm, while keeping a good reconstruction accuracy.
The other is the WISD algorithm outlined in Algorithm 2, which can be regarded as an iterative weighted algorithm with a non-uniform weighting scheme for dealing with (3.2). The difference between the WSST and WISD algorithms lies in the use of weights that are adaptively changed according to 'the first jump rule'.[Citation23] In fact, at iteration $k$, the singular values $\sigma_i(X^{k})$ are sorted in ascending order, and we compute the difference between each two adjacent elements. The rule looks for the smallest index at which this difference exceeds a prescribed threshold $\tau^{k}$; this amounts to looking for the first jump larger than $\tau^{k}$. We update the threshold with 'the first jump rule', and then update the detected support $I^{k}$, i.e. the index set of the singular values above the jump. Moreover, we choose smaller weights for the singular values in the detected support $I^{k}$ than for those outside it. This choice guarantees that the weights for the large singular values in $I^{k}$ are smaller than the weights for the singular values outside $I^{k}$. The advantage of this choice is that the weights for all singular values are adaptively changed.
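The 'first jump rule' itself reduces to scanning the gaps between sorted singular values. A minimal sketch under the assumption of a prescribed jump threshold (the function name and return convention are our own):

```python
import numpy as np

def first_jump_support(sigma, jump):
    """Detect the support I via the 'first jump rule'.

    sigma: singular values sorted in non-increasing order.
    Returns the indices of the singular values lying above the first
    adjacent gap larger than `jump`, scanning from the smallest value.
    """
    s = np.sort(sigma)                    # ascending order
    gaps = np.diff(s)                     # differences of adjacent elements
    idx = np.flatnonzero(gaps > jump)
    if idx.size == 0:
        return np.arange(len(sigma))      # no jump found: keep everything
    cut = s[idx[0] + 1]                   # first value above the jump
    return np.flatnonzero(sigma >= cut)
```

For instance, for a clearly rank-2 spectrum such as `[10.0, 9.5, 0.3, 0.2, 0.1]` with `jump = 1.0`, the detected support is the first two indices; the weights for those two singular values would then be set smaller than the rest.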
Note that in Algorithms 1 and 2 the target value of $\epsilon$ is simply taken as a small constant whose magnitude depends on the problem. We use the iterations (3.3) with a relatively small $\epsilon$, since otherwise the iteration easily drops into a local minimum. We use a simple stopping rule based on the relative change $\|X^{k+1} - X^{k}\|_F / \|X^{k}\|_F$ falling below a tolerance, which also depends on the scaling of the problem.
4 Numerical results
In this section, we test the proposed weighted model with the WSST and WISD algorithms on one synthetic single-shot prestack seismic data-set and two real poststack seismic data-sets, in comparison with the nuclear-norm minimization model solved by the singular value soft-thresholding algorithm (IST) [Citation24] and by LMaFit.[Citation18] In this paper, we only consider random irregular missing traces on regular grids; the jittered sampling pattern [Citation5] is used for the locations of the missing traces. A corrupted version with 50% randomly missing traces of each data-set is used. The computational parameters are chosen empirically to achieve the best results in all our tests.
We define the signal-to-noise ratio (SNR)

$$\mathrm{SNR} = 10 \log_{10} \frac{\|X\|_F^2}{\|X - \widehat{X}\|_F^2}$$

to estimate the reconstruction quality, where $X$ and $\widehat{X}$ denote the original data and its reconstruction, respectively. As another performance measure, we also compare the reconstruction of one single trace, i.e. one column of the data matrix. Therefore, in what follows we present a single trace of the original data that is missing in the observed data (red dashed line) and the same trace of the reconstructions by IST, LMaFit, WSST and WISD (blue solid line).
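Under the SNR definition above, the quality measure is a one-liner in NumPy (an illustrative sketch; `X_hat` denotes the reconstruction):

```python
import numpy as np

def snr(X, X_hat):
    """Signal-to-noise ratio of a reconstruction, in dB."""
    return 10.0 * np.log10(np.linalg.norm(X, "fro") ** 2
                           / np.linalg.norm(X - X_hat, "fro") ** 2)
```

For example, a reconstruction whose residual has one tenth the Frobenius norm of the data yields an SNR of 20 dB.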
One synthetic single-shot prestack seismic data-set, denoted by $X_1$ and shown in Figure 1(a), is tested first to show the performance of our approach. The corrupted data with 50% randomly missing traces of $X_1$ is shown in Figure 1(b). White Gaussian noise is added to the sampled data; let $\delta$ denote the noise level. For $X_1$, we take the same parameter setting for both the WSST and WISD algorithms. Since the visual difference in the reconstructed images is not obvious, we omit presenting them. In the left-half part of Figure 2, we show the comparison of the 100th trace taken from the original data and the reconstructions for the noise-free case ($\delta = 0$) by the different algorithms; (a)–(d) are obtained via IST, LMaFit, WSST and WISD, respectively. To illustrate the reconstructed results more clearly, the right-half part of each subfigure presents the corresponding single-trace error, i.e. the difference between the 100th trace of the original data and the same trace of the reconstruction. It is clear from the right-half parts of Figure 2 that the improvement by the weighted low-rank methods (WSST and WISD) is obvious.
Figure 2. Reconstructions for $X_1$ under $\delta = 0$. (a)–(d): Reconstructed 100th single traces via IST, LMaFit, WSST and WISD, respectively. Left-half part is the 100th trace comparison, while the right-half part is the corresponding reconstruction error comparison.
![Figure 2. Reconstructions for X1 under δ=0. (a)–(d): Reconstructed 100th single traces via IST, LMaFit, WSST and WISD, respectively. Left-half part is the 100th trace comparison, while the right-half part is the corresponding reconstruction error comparison.](/cms/asset/ef506102-ded7-4394-979c-68c6f149a0d1/gipe_a_890616_f0002_oc.gif)
Table 1. The SNR and CPU times of reconstructions for $X_1$ by four different algorithms with three different noise levels.
Table 2. The SNR and CPU times of reconstructions for $X_2$ by four different algorithms with three different sampling ratios.
Table 3. The SNR and CPU times of reconstructions for $X_3$ by four different algorithms with three different sampling ratios.
Figure 3. The SNR change of reconstructions by four different algorithms for $X_1$ as the sampling ratio increases from 20 to 90%.
![Figure 3. The SNR change of reconstructions by four different algorithms for X1 as the sampling ratio increases from 20 to 90%.](/cms/asset/5ba0e8ba-1b07-4a9b-8e63-911ee467f2f2/gipe_a_890616_f0003_oc.gif)
Figure 4. Left column: original data $X_2$ (a) and $X_3$ (c). Right column: (b) and (d) are the corresponding corrupted data with 50% traces missing for $X_2$ and $X_3$, respectively.
![Figure 4. Left column: original data X2 (a) and X3 (c). Right column: (b) and (d) are the corresponding corrupted data with 50% traces missing for X2 and X3, respectively.](/cms/asset/63191a02-f0b0-4ef0-a114-88dfb16bf175/gipe_a_890616_f0004_b.gif)
Figure 5. Reconstructions for $X_2$. (a)–(d): Reconstructed eighth single traces via IST, LMaFit, WSST and WISD, respectively. Left-half part is the eighth trace comparison, while the right-half part is the corresponding reconstruction error comparison.
![Figure 5. Reconstructions for X2. (a)–(d): Reconstructed eighth single traces via IST, LMaFit, WSST and WISD, respectively. Left-half part is the eighth trace comparison, while the right-half part is the corresponding reconstruction error comparison.](/cms/asset/1d5079fa-44e8-47b5-891f-4e4718418212/gipe_a_890616_f0005_oc.gif)
Figure 6. Reconstructions for $X_3$. (a)–(d): Reconstructed central single traces via IST, LMaFit, WSST and WISD, respectively. Left-half part is the central trace comparison, while the right-half part is the corresponding reconstruction error comparison.
![Figure 6. Reconstructions for X3. (a)–(d): Reconstructed central single traces via IST, LMaFit, WSST and WISD, respectively. Left-half part is the central trace comparison, while the right-half part is the corresponding reconstruction error comparison.](/cms/asset/dcc45578-0093-4536-84e0-6b0f6aca30d8/gipe_a_890616_f0006_oc.gif)
Figure 7. The SNR change of reconstructions by four different algorithms ((a) for $X_2$ and (b) for $X_3$) as the sampling ratio increases from 20 to 90%.
![Figure 7. The SNR change of reconstructions by four different algorithms ((a) for X2 and (b) for X3) as the sampling ratio increases from 20 to 90%.](/cms/asset/12794cd4-2645-4b3a-96a7-63102fe3e61c/gipe_a_890616_f0007_oc.gif)
Table 1 reports the SNR values and CPU times of the reconstructions by the four algorithms under three noise levels, including the noise-free case $\delta = 0$. High-quality reconstruction is achieved by the weighted low-rank methods, particularly in terms of higher SNR values, at the cost of extra computational time. With its adaptively chosen weights, WISD is faster than WSST and offers more competitive reconstructions. Moreover, the SNR decreases significantly at the higher noise levels, in comparison with the noise-free case ($\delta = 0$). Figure 3 displays the SNR change of the reconstructions by the four algorithms, at a fixed noise level, as the sampling ratio increases from 20 to 90%. Increasing the number of sampled traces indeed leads to improved performance. It is also obvious that the SNR values of the reconstructions produced by the proposed weighted low-rank model are higher than those of the standard nuclear-norm minimization model.
In the next experiment, we test the proposed method on two real poststack seismic data-sets. The first, denoted by $X_2$, is rather simple and is shown in Figure 4(a). The second, denoted by $X_3$, is more complex and is shown in Figure 4(c). The corrupted versions with 50% randomly missing traces of $X_2$ and $X_3$ are shown in Figure 4(b) and (d), respectively. For $X_2$, we use the same parameter setting in all our computations for both the WSST and WISD algorithms, while a different value is used for $X_3$. The reconstructed results by IST, LMaFit, WSST and WISD for $X_2$ and $X_3$ are depicted in Figures 5 and 6, respectively. It can be seen clearly from the right-half parts of Figures 5 and 6 that the reconstructions by the weighted low-rank methods have smaller single-trace errors, which illustrates their good performance.
In addition, we report the SNR values and CPU times of the reconstructions of $X_2$ and $X_3$ with three different sampling ratios, summarized in Tables 2 and 3, respectively. The results again indicate that the weighted low-rank model improves the SNR values of the reconstructions at the cost of extra computational time. However, the SNR values of the reconstructions for $X_3$ are worse than those for $X_2$, due to its more complex structures.
Finally, Figure 7 displays the SNR change of the reconstructions by the four algorithms for $X_2$ and $X_3$ as the sampling ratio increases from 20 to 90%: (a) shows the SNR change for $X_2$, and (b) that for $X_3$.
5 Conclusion
In this paper, we study the reconstruction of seismic data with randomly missing traces. We have presented a weighted nuclear-norm minimization model based on a texture-patch pre-transformation to improve the reconstruction. Due to its non-convexity, we define a new approximate estimator model by adding a ridge term to the weighted nuclear-norm penalization. Algorithmically, we introduce two efficient algorithms (WSST and WISD) to approximate the minimizer of this weighted model. The performance of these two algorithms is first tested on a synthetic prestack seismic data-set for the noisy and corrupted cases. In addition, we also apply the proposed method to reconstruct two real poststack seismic data-sets. Results show that the proposed method works efficiently and performs better in comparison with a traditional algorithm and a state-of-the-art low-rank factorization algorithm. Extending the weighted matrix completion methods to aliased data with regularly missing traces is future work.
Notes
This work is supported by NSFC [grant number 91330108], [grant number 41374121], [grant number 61327013], [grant number 91230119]; the Fundamental Research Funds for the Central Universities [grant number HIT.BRETIV.201314] and the Program for New Century Excellent Talents in University [grant number NCET-11-0804].
References
- Kabir M, Verschuur D. Restoration of missing offsets by parabolic radon transform. Geophys. Prospect. 1995;43:347–368.
- Xu S, Zhang Y, Pham D, Lambaré G. Antileakage Fourier transform for seismic data regularization. Geophysics. 2005;70:87–95.
- Naghizadeh M, Innanen K. Seismic data interpolation using a fast generalized Fourier transform. Geophysics. 2011;76:1–10.
- Herrmann F, Hennenfent G. Non-parametric seismic data recovery with curvelet frames. Geophys. J. Int. 2008;173:233–248.
- Shahidi R, Tang G, Ma J, Herrmann F. Application of randomized sampling schemes to curvelet-based sparsity-promoting seismic data recovery. Geophys. Prospect. 2013;61:973–997.
- Trickett S, Burroughs L, Milton A, Walton L, Kelman R. Rank-reduction-based trace interpolation. SEG 80th Annual International Meeting. Expanded Abstracts; 2010; Denver, CO. p. 3829–3833.
- Oropeza V, Sacchi M. Simultaneous seismic data denoising and reconstruction via multichannel singular spectrum analysis. Geophysics. 2011;76:25–32.
- Naghizadeh M, Sacchi M. Multidimensional de-aliased Cadzow reconstruction of seismic records. Geophysics. 2013;78:1–5.
- Yang Y, Ma J, Osher S. Seismic data reconstruction via matrix completion. Inverse Prob. Imaging. 2013;7:1379–1392.
- Ma J. Three-dimensional irregular seismic data reconstruction via low-rank matrix completion. Geophysics. 2013;78:181–192.
- Candes E, Recht B. Exact matrix completion via convex optimization. Found. Comput. Math. 2009;9:717–772.
- Candes E, Tao T. The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inform. Theory. 2010;56:2053–2080.
- Fazel M. Matrix rank minimization with applications [PhD thesis]. Stanford (CA): Stanford University; 2002.
- Cai JF, Candes E, Shen ZW. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 2010;20:1956–1982.
- Toh KC, Yun S. An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems. Pac. J. Optim. 2010;6:615–640.
- Yang J, Yuan X. Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization. Math. Comput. 2013;82:301–329.
- Ma S, Goldfarb D, Chen L. Fixed point and Bregman iterative methods for matrix rank minimization. Math. Prog. 2011;128:321–353.
- Wen Z, Yin W, Zhang Y. Solving a low rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. Math. Prog. Comput. 2012;4:333–361.
- Gaiffas S, Lecue G. Weighted algorithms for compressed sensing and matrix completion. arXiv preprint arXiv:1107.1638; 2011.
- Keshavan R, Montanari A, Oh S. Matrix completion from a few entries. IEEE Trans. Inform. Theory. 2010;56:2980–2998.
- Gross D. Recovering low-rank matrices from few coefficients in any basis. IEEE Trans. Inform. Theory. 2011;57:1548–1566.
- Schaeffer H, Osher S. A low patch-rank interpretation of texture. SIAM J. Imaging Sci. 2013;6:226–262.
- Wang Y, Yin W. Sparse signal reconstruction via iterative support detection. SIAM J. Imaging Sci. 2010;3:462–491.
- Majumdar A. Matlab code for IST_MC.m. Available from: http://www.mathworks.com/matlabcentral/fileexchange/26395-matrix-completion-via-thresholding.