
Seismic data reconstruction via weighted nuclear-norm minimization

Pages 277-291 | Received 17 Oct 2013, Accepted 29 Jan 2014, Published online: 14 Mar 2014

Abstract

The low-rank matrix completion theory based on nuclear-norm minimization is becoming increasingly popular in practice, and has been applied to seismic data reconstruction of missing traces. In this paper, we investigate the weighted nuclear-norm minimization model on the low-rank seismic texture matrix obtained with a designed texture-patch pre-transformation. Unlike the previous nuclear-norm model, which treats all singular values of the seismic data matrix equally, the weighted nuclear-norm model, based on the idea of iterative reweighting, makes a closer approximation to the ‘counting rank function’ and can better promote low-rank matrices. We apply a weighted spectral soft-thresholding iterative scheme and a weighted iterative support detection method to solve the weighted model. Experimental study on synthetic and real seismic data shows that high-quality reconstructions can be obtained by the weighted low-rank methods.

1 Introduction

Seismic data are often irregularly or sparsely sampled along spatial coordinates, owing to the presence of surface obstacles, dead or severely contaminated traces, instrument failure, no-permit areas and economic constraints. Reconstruction and interpolation of missing data in seismic records is a critical element of the data processing chain, and has become a main issue in seismic research. The reconstruction quality may directly affect subsequent processing steps such as migration, multiple suppression and amplitude vs. azimuth analysis.

Many different techniques have been proposed to reconstruct missing traces. In the last several years, sparsity-promoting l1-norm minimization methods have been successfully applied to seismic data reconstruction of missing traces. They commonly require the sparse representation of the data in a transformation domain, such as Radon transform,[Citation1] Fourier transform [Citation2, Citation3] and Curvelet transform.[Citation4, Citation5]

Besides the sparsity-promoting methods, rank-reduction methods have been proposed for seismic data reconstruction with missing traces. The rank-reduction methods mainly fall into two categories: Cadzow filtering and nuclear-norm minimization. The Cadzow methods first transfer the data into the frequency-space (f-x) domain, then generate a Hankel matrix from the spatial samples for each frequency, and finally perform a rank-reduced eigen-decomposition of the Hankel matrix. This approach guarantees that the Hankel matrix is low rank for seismic data with linear events. However, it incurs a huge computational cost, because one has to repeatedly form and decompose the Hankel matrix for every frequency. The Cadzow method was first used for seismic interpolation by Trickett et al. [Citation6], and was then extended to high-dimensional [Citation7] and de-aliased reconstruction.[Citation8]

The low-rank matrix completion theory based on nuclear-norm minimization is becoming increasingly popular in practice, and has been applied to seismic data reconstruction with missing traces. Yang et al. [Citation9] discovered the connection between seismic data reconstruction and the general matrix completion problem, and first formulated seismic data reconstruction as a nuclear-norm minimization problem by building a pre-transformation. Recently, Ma [Citation10] extended low-rank matrix completion with a designed texture-patch pre-transformation to three-dimensional seismic data reconstruction. The matrix completion problem [Citation11, Citation12] of finding a lowest rank matrix from a subset of its entries is an important special case of recovering a low-rank matrix, arising in many applications. But the matrix completion model is generally NP-hard due to the combinatorial nature of the rank function. As is well known, nuclear-norm minimization is a convex relaxation of the original rank-minimization problem.[Citation13] The matrix completion problem can then be partially justified through a proper nuclear-norm minimization model. Many efficient algorithms have been proposed to solve the nuclear-norm minimization, including the singular value thresholding algorithm (IST or SVT),[Citation14] which uses soft-thresholding operations on the singular values of a certain matrix at each iteration; the accelerated proximal gradient method,[Citation15] based on a fast iterative shrinkage-thresholding algorithm for compressed sensing; the augmented Lagrangian and alternating direction methods;[Citation16] and fixed-point continuation with Bregman iteration.[Citation17] All of these algorithms bear the computational cost of singular value decompositions at each iteration, which is very expensive, particularly for large-scale problems.
Alternatively, one can apply the low-rank factorization model (low-rank matrix fitting, LMaFit),[Citation18] based mainly on a non-linear successive over-relaxation algorithm, which avoids the singular value decomposition at each iteration. This method is usually much faster than the singular value decomposition based methods.[Citation9]

From an algorithmic viewpoint, one central idea of most of the above-mentioned solution methods is the convex relaxation of the l0-functional (the function giving the number of non-zero elements of a vector) and of the rank function. A weak point of the nuclear-norm minimization algorithms is that they treat all singular values of the sampled matrix equally. Our work is inspired by [Citation19], which extended the principle of iterative reweighting of the l1-norm to the matrix completion problem. The weighted nuclear-norm minimization model, based on the idea of iterative reweighting, makes a closer approximation to the ‘counting rank function’ when the chosen weights are close to the singular values of the matrix, and thus promotes lower rank matrices.

In this paper, we also consider two-dimensional seismic data reconstruction with missing traces. We follow Yang et al.'s method [Citation9] in using the texture-patch mapping, but consider the weighted nuclear-norm minimization model to complete the low-rank texture-patch matrix and obtain a high-quality reconstruction. However, the weighted nuclear-norm is generally non-convex. We define a new approximate model formulation by adding a ridge term to the weighted nuclear-norm penalization. Algorithmically, we introduce a weighted spectral soft-thresholding iterative scheme (WSST) and a weighted iterative support detection method (WISD) to approximate the minimizer of the resulting weighted model. They are compared numerically on seismic data reconstruction with existing algorithms such as IST [Citation14] and LMaFit.[Citation18]

This paper is organized as follows. In Section 2, we give basic definition on the matrix completion and introduce a brief review on the construction of low-rank texture-patch matrix. Section 3 presents a weighted nuclear-norm minimization model and provides two efficient algorithms to approximate this weighted model. Numerical experiments are carried out to evaluate the performance of our proposed methods for seismic data reconstruction in Section 4. Finally, we draw conclusions in Section 5.

2 Matrix completion and pre-transformation

The matrix completion problem can be understood as a generalization of compressed sensing from vectors to matrices, and has attracted many researchers recently; see [Citation11, Citation12, Citation20, Citation21]. The aim is to fill in the missing values of a matrix from a sufficient number of known entries, under the assumption that the matrix is of low rank. According to matrix completion theory, one can recover the randomly missing data by solving the following rank-minimization model:
\begin{equation}
\min_{X}\ \operatorname{rank}(X),\quad \text{s.t.}\quad X_{ij}=M_{ij},\ (i,j)\in\Omega, \tag{2.1}
\end{equation}
where $X \in \mathbb{R}^{m\times n}$ is the original seismic data that we want to reconstruct, $M \in \mathbb{R}^{m\times n}$ is the partially observed/sampled seismic data with missing traces, $\operatorname{rank}(X)$ stands for the rank of the matrix $X$ (i.e. the number of non-zero singular values of $X$), and $\Omega$ denotes the index subset corresponding to the observed entries, with cardinality $k$.

We define the masking operator $P_\Omega : \mathbb{R}^{m\times n} \to \mathbb{R}^{m\times n}$ as follows:
\begin{equation*}
[P_\Omega(X)]_{ij} =
\begin{cases}
X_{ij}, & (i,j)\in\Omega,\\
0, & \text{otherwise}.
\end{cases}
\end{equation*}
Here, $P_\Omega$ stands for the projection onto the observation subspace of matrices whose non-zeros are restricted to the index subset $\Omega$. Since noise is usually unavoidable in the measurements, the above model (2.1) is often relaxed to the regularized unconstrained optimization problem:
\begin{equation}
\min_{X}\ \lambda\,\operatorname{rank}(X) + \frac{1}{2}\,\|P_\Omega(X) - P_\Omega(M)\|_F^2, \tag{2.2}
\end{equation}
where $\|\cdot\|_F$ is the Frobenius norm, i.e. the square root of the sum of squares of the entries, and $\lambda > 0$ is a properly tuned parameter.
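As a concrete illustration, the masking operator above simply zeroes the unobserved entries. Below is a minimal NumPy sketch; the function name `P_Omega` and the boolean-`mask` encoding of the index set Ω are our own illustrative choices:

```python
import numpy as np

def P_Omega(X, mask):
    """Masking operator: keep X[i, j] for (i, j) in Omega (mask True),
    set the remaining entries to zero."""
    return np.where(mask, X, 0.0)

# Tiny example: observe 4 of the 9 entries of a 3x3 matrix.
X = np.arange(9, dtype=float).reshape(3, 3)
mask = np.zeros((3, 3), dtype=bool)
mask[[0, 1, 2, 0], [0, 1, 2, 2]] = True
PX = P_Omega(X, mask)  # unobserved entries become zero
```

Note that `P_Omega` is an orthogonal projection: applying it twice gives the same result as applying it once.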

However, the above models are unfortunately NP-hard and known to be very hard to solve in practice even for small matrices, which stems from the non-convexity and discontinuity of the rank function. As is well known, the nuclear-norm (i.e. the $l_1$-norm of the singular values), which we denote by $\|\cdot\|_*$, is the convex envelope of the rank function over the unit ball of the operator norm, and is a widely used surrogate of the rank function for inducing low-rank solutions in many optimization problems with matrix variables. The above problems can then be efficiently solved, under suitable assumptions on $P_\Omega$, by considering their convex relaxations:
\begin{equation}
\min_{X}\ \|X\|_*,\quad \text{s.t.}\quad X_{ij}=M_{ij},\ (i,j)\in\Omega, \tag{2.3}
\end{equation}
where $\|X\|_* = \sum_{j=1}^{\min(m,n)} \sigma_j(X)$, with $\sigma_j(X)$ denoting the $j$th largest singular value of $X$, so that $\sigma_1(X) \geq \cdots \geq \sigma_{\min(m,n)}(X)$ are in decreasing order, or
\begin{equation}
\min_{X}\ \lambda\|X\|_* + \frac{1}{2}\,\|P_\Omega(X) - P_\Omega(M)\|_F^2. \tag{2.4}
\end{equation}
In the following, the vector of singular values of the matrix $X$ is denoted by $\sigma(X) = (\sigma_1(X), \ldots, \sigma_{\min(m,n)}(X))$, sorted in non-increasing order. It has been shown by Candès and Recht [Citation11] that the above convex relaxations can exactly recover the original sparse and low-rank matrix under quite general conditions. The relationship between the nuclear-norm and the rank function of a matrix is analogous to the relationship between the $l_1$-norm and the $l_0$-norm of a vector in the area of sparse signal recovery. The solution to the regularized version (2.4) is given by the iterative singular value soft-thresholding scheme (IST):
\begin{equation}
X^{k+1} = S_\lambda\bigl(X^k - P_\Omega(X^k) + P_\Omega(M)\bigr), \tag{2.5}
\end{equation}
where $S_\lambda$ is the soft-thresholding operator, defined for every $B \in \mathbb{R}^{m\times n}$ by applying
\begin{equation*}
S_\lambda(t) = \operatorname{sgn}(t)\max(|t| - \lambda, 0)
\end{equation*}
to the singular values of $B = U\Sigma V^T$, i.e.
\begin{equation*}
S_\lambda(B) = U \Sigma_\lambda V^T,
\end{equation*}
with the diagonal matrix $\Sigma_\lambda = \operatorname{diag}\bigl(\max(\sigma_1(B)-\lambda, 0), \ldots, \max(\sigma_{\min(m,n)}(B)-\lambda, 0)\bigr)$.

In the compressed sensing framework, it is known that one necessary condition for successfully applying sparsity-promoting inversion methods is that the signal or image of interest is sparse or compressible in the transformation domain. Analogously, in the matrix completion framework, since we consider the case where the number k of observed entries is much smaller than the total number mn of entries, the matrix completion problem is in general severely ill-posed. In order to obtain a meaningful result, one needs to impose a sparsity assumption on the original data X, which is done by assuming that X is of low rank. If the original data is not of low rank, we should find a pre-transformation that reorganizes the original data into a low-rank representation.

In this paper, we work with the original data using the so-called texture transformation, which was introduced as the base texture in [Citation22] and applied to seismic data recovery in [Citation9]. There are two motivations for using the texture transformation. First, a seismic sampling matrix with missing traces contains some ‘zero’ columns, whereas matrix completion theory requires the observed entries to be sufficiently and uniformly distributed; that is, one needs at least one non-zero entry per column to have any hope of achieving a successful reconstruction. The texture transformation reshapes the original data with missing traces into a new format with missing pixels, avoiding these ‘zero’ columns. Second, after the texture transformation the new matrix has low rank. In the following, we briefly describe the texture transformation by defining a texture-patch matrix. For the given original data $X \in \mathbb{R}^{m\times n}$, we first regard $X$ as a block matrix $H$ consisting of sub-matrices $B_j \in \mathbb{R}^{r\times r}$ as follows [Citation22]:
\begin{equation*}
H = \begin{pmatrix}
B_1 & B_2 & \cdots & B_{n/r}\\
B_{n/r+1} & B_{n/r+2} & \cdots & B_{2n/r}\\
\vdots & \vdots & & \vdots\\
B_{(m/r-1)\,n/r+1} & B_{(m/r-1)\,n/r+2} & \cdots & B_{mn/r^2}
\end{pmatrix}.
\end{equation*}
Next, all columns of $B_j$ are rearranged into one column vector of length $r^2$, denoted by $w_j$. The texture-patch matrix is then defined as
\begin{equation*}
\widetilde{X} := P_H(X) = [w_1, w_2, \ldots, w_{mn/r^2}] \in \mathbb{R}^{r^2 \times (mn/r^2)},
\end{equation*}
where the operator $P_H : \mathbb{R}^{m\times n} \to \mathbb{R}^{r^2 \times (mn/r^2)}$ is called the patch mapping, which reorganizes the data into a new matrix. The resulting texture-patch matrix $\widetilde{X}$ has lower rank than $X$ itself.[Citation22] We can reconstruct $X$ by recovering $\widetilde{X}$ with a proper matrix completion algorithm.
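The patch mapping $P_H$ and its inverse can be sketched as follows. The function names are our own, and we assume $r$ divides both $m$ and $n$, as in the paper's $512 \times 512$ examples:

```python
import numpy as np

def patch_transform(X, r):
    """Texture-patch mapping P_H: split X (m x n) into r x r blocks in
    row-major block order, vectorize each block column-wise, and stack the
    resulting vectors as columns of an r^2 x (m*n/r^2) matrix."""
    m, n = X.shape
    blocks = [X[i:i + r, j:j + r] for i in range(0, m, r) for j in range(0, n, r)]
    return np.column_stack([B.flatten(order='F') for B in blocks])

def patch_inverse(Xt, m, n, r):
    """Inverse mapping P_H^{-1}: rebuild the m x n matrix from patch columns."""
    X = np.empty((m, n))
    cols = iter(Xt.T)
    for i in range(0, m, r):
        for j in range(0, n, r):
            X[i:i + r, j:j + r] = next(cols).reshape(r, r, order='F')
    return X
```

Because a missing trace (a zero column of `X`) is scattered across many patch columns, the transformed matrix has no all-zero columns, which is exactly the first motivation given above.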

We first reconstruct $\widetilde{X}$ with the following matrix completion model:
\begin{equation}
\min_{\widetilde{X}}\ \lambda\|\widetilde{X}\|_* + \frac{1}{2}\,\|P_H(P_\Omega(X)) - P_H(P_\Omega(M))\|_F^2, \tag{2.6}
\end{equation}
and then obtain the reconstructed original matrix $X$ by the inverse transformation $X = P_H^{-1}(\widetilde{X})$.

Denoting by $\widetilde{P}_\Omega(\cdot) = P_H(P_\Omega(\cdot))$ the reorganized mapping, (2.6) can be formulated as
\begin{equation}
\min_{\widetilde{X}}\ \lambda\|\widetilde{X}\|_* + \frac{1}{2}\,\|\widetilde{P}_\Omega(\widetilde{X}) - \widetilde{P}_\Omega(\widetilde{M})\|_F^2. \tag{2.7}
\end{equation}
For simplicity of notation, we omit the tilde in the next section.

3 The weighted model for matrix completion

In this section, we consider the weighted nuclear-norm minimization model: for a given weight vector $\omega = (\omega_1, \omega_2, \ldots, \omega_{\min(m,n)})$ with $\omega_1 \geq \cdots \geq \omega_{\min(m,n)} > 0$,
\begin{equation}
\min_{X}\ \lambda\|X\|_{*,\omega} + \frac{1}{2}\,\|P_\Omega(X) - P_\Omega(M)\|_F^2, \tag{3.1}
\end{equation}
where $\|X\|_{*,\omega} = \sum_{j=1}^{\min(m,n)} \omega_j \sigma_j(X)$ denotes the weighted sum of singular values of $X$, here called the weighted nuclear-norm.

Since the aim of the weighting is to promote low-rank matrices, the weight vector $\omega$ should be chosen non-increasing. Assuming that $\widehat{X}$ is a solution of (2.4), we would like to use the idea of reweighting with, for instance, the weight estimates
\begin{equation*}
\omega_j = \sigma_j(\widehat{X}),
\end{equation*}
and find a solution to problem (3.1) for this choice of weights. We note that, when the weight vector $\omega$ is close to $\sigma(X)$, $\|X\|_{*,\omega}$ makes a closer approximation to the ‘counting rank function’; this weighting therefore promotes low-rank matrices and allows us to obtain better reconstructed results. Unfortunately, although we call $\|\cdot\|_{*,\omega}$ the weighted nuclear-norm, it is not in general a norm, since it is typically non-convex. Consequently, (3.1) is in general not a convex minimization problem, a minimization algorithm is very likely to admit several locally optimal solutions, and there are no efficient methods to solve it exactly. To alleviate the problem of local minima and obtain a meaningful solution, a first idea is to use a convex relaxation of the non-convex function $\|\cdot\|_{*,\omega}$ (just as the nuclear-norm is a convex relaxation of the rank function). A ridge term is therefore added to the weighted nuclear-norm penalization, and a new approximate estimator model is defined as follows:
\begin{equation}
\min_{X}\ \frac{1}{2}\,\|P_\Omega(X) - P_\Omega(M)\|_F^2 + \lambda\|X\|_{*,\omega} + \frac{\tau}{2}\,\|X\|_F^2, \tag{3.2}
\end{equation}
for any $\tau \geq 0$. For $\tau > 0$, the minimizer of (3.2) exists and is unique (for details see Proposition 2 of [Citation19]), since the objective function is then strongly convex.

Proposition 2

[Citation19]   Let $B \in \mathbb{R}^{m\times n}$, $\lambda, \tau > 0$ and $\omega_1 \geq \cdots \geq \omega_{\min(m,n)} > 0$. Then the minimization problem
\begin{equation*}
\min_{A \in \mathbb{R}^{m\times n}}\ \frac{1}{2}\,\|A - B\|_F^2 + \lambda \sum_{j=1}^{\min(m,n)} \omega_j \sigma_j(A) + \frac{\tau}{2}\,\|A\|_F^2
\end{equation*}
has a unique solution, given by $\frac{1}{1+\tau} S_{\lambda\omega}(B)$, where $S_{\lambda\omega}(B) = U \Sigma_{\lambda\omega} V^T$ denotes the weighted soft-shrinkage operator with
\begin{equation*}
\Sigma_{\lambda\omega} = \operatorname{diag}\bigl(\max(\sigma_1(B) - \lambda\omega_1, 0), \ldots, \max(\sigma_{\operatorname{rank}(B)}(B) - \lambda\omega_{\operatorname{rank}(B)}, 0)\bigr).
\end{equation*}

Hence the solution of (3.2) can be computed by the following iteration scheme:
\begin{equation}
X^{k+1} = \frac{1}{1+\tau}\, S_{\lambda\omega}\bigl(X^k - P_\Omega(X^k) + P_\Omega(M)\bigr). \tag{3.3}
\end{equation}
The parameter $\tau > 0$ can be arbitrarily small, which ensures the uniqueness of the minimizer and the convergence of the iterative scheme (3.3). We emphasize that the iterative scheme (3.3) with $\tau = 0$ is not equivalent to solving problem (3.1), since the weighted nuclear-norm $\|\cdot\|_{*,\omega}$ is not convex.
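One step of the iteration (3.3) can be sketched as follows, combining the weighted soft-shrinkage operator of Proposition 2 with the data-consistency reset. The function names and the boolean-mask encoding of $P_\Omega$ are our own illustrative choices:

```python
import numpy as np

def weighted_svd_soft_threshold(B, lam, w):
    """S_{lam*omega}(B): shrink the j-th singular value of B by lam * w[j]."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (U * np.maximum(s - lam * w[:len(s)], 0.0)) @ Vt

def wsst_step(X, M, mask, lam, w, tau=1e-8):
    """One iteration of scheme (3.3):
    X_{k+1} = (1 / (1 + tau)) * S_{lam*omega}(X_k - P_Omega(X_k) + P_Omega(M))."""
    return weighted_svd_soft_threshold(np.where(mask, M, X), lam, w) / (1.0 + tau)
```

With non-increasing weights, the larger (signal-carrying) singular values are shrunk less than the smaller ones, which is precisely how the weighted model treats singular values unequally.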

In the following, two effective algorithms are described in detail to solve the convex-relaxed weighted model (3.2) and thereby approximate the solution of (3.1), starting from a given initial approximation. The first is the WSST algorithm proposed by Gaiffas et al. [Citation19], outlined in Algorithm 1. We first obtain a solution of the nuclear-norm minimization problem (2.4), denoted by $\widehat{X}$. Then we update the weights by taking $\omega = \sigma(\widehat{X})$, and start over, using the iteration scheme (3.3) to find the approximate solution and repeating only the weight-update process. In this way, we typically decrease the final rank produced by the WSST algorithm while keeping good reconstruction accuracy.

The other is the WISD algorithm outlined in Algorithm 2, which can be regarded as an iterative weighted algorithm with a non-uniform weighting scheme for solving (3.2). The difference between the WSST and WISD algorithms lies in WISD's use of adaptively changed weights according to ‘the first jump rule’.[Citation23] At iteration $k+1$, the singular values $\sigma(X^{k+1})$ are sorted in ascending order, and we compute the difference between each two adjacent elements. The rule looks for the smallest $i$ such that
\begin{equation*}
\sigma_{i+1}(X^{k+1}) - \sigma_i(X^{k+1}) > \eta,
\end{equation*}
with $\eta = \|\sigma(X^{k+1})\| / \operatorname{rank}(X)$. This amounts to looking for the first jump larger than $\eta$. We update the threshold $\mu = \sigma_i(X^{k+1})$ with ‘the first jump rule’, and then update the detected support $\Lambda = \{j : \sigma_j(X^{k+1}) > \mu\}$. Moreover, we choose the simple weight $1/\mu$ for the singular values in $\Lambda^C$. This choice guarantees that the weights for the singular values in $\Lambda$ are smaller than the weights for the singular values in $\Lambda^C$; its advantage is that the weights for all singular values are adaptively changed.
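The first-jump scan at the heart of WISD can be sketched as below. The function name is ours, and `eta` is passed in directly rather than computed from the current iterate:

```python
import numpy as np

def first_jump_threshold(sigma, eta):
    """'First jump rule' sketch: scan the singular values in ascending order
    and return the value just below the first gap larger than eta, which
    serves as the support-detection threshold mu (None if no such gap)."""
    s = np.sort(np.asarray(sigma, dtype=float))
    for i in range(len(s) - 1):
        if s[i + 1] - s[i] > eta:
            return s[i]
    return None

# Example: two dominant singular values separated from a small "noise" tail.
sigma = np.array([5.0, 3.0, 0.12, 0.10, 0.09])
mu = first_jump_threshold(sigma, eta=1.0)
support = np.where(sigma > mu)[0]  # detected support Lambda
```

Here the gap between 0.12 and 3.0 is the first one exceeding `eta`, so the two dominant singular values form the detected support and the tail receives the large weight $1/\mu$.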

Note that in Algorithms 1 and 2, the target value of $\lambda$ is simply taken as $\lambda_{\mathrm{target}} = \varepsilon \|P_\Omega(M)\|$, with $\varepsilon = 10^{-6}$ or $\varepsilon = 10^{-3}$ depending on the problem. We use the iterations (3.3) with a relatively small $\tau$, i.e. $\tau < 10^{-6}$; otherwise the iteration easily falls into a local minimum. We use the simple stopping rule $\|X^{k+1} - X^k\|_2 / \|X^k\|_2 \leq \mathrm{tol}$, with $\mathrm{tol} = 10^{-5}$ or $\mathrm{tol} = 10^{-3}$, also depending on the scaling of the problem.

4 Numerical results

In this section, we test the proposed weighted model with the WSST and WISD algorithms on one synthetic single-shot prestack seismic data-set and two real poststack seismic data-sets, in comparison with the nuclear-norm minimization model solved by the singular value soft-thresholding algorithm (IST) [Citation24] and the LMaFit.[Citation18] In this paper, we only consider random irregular missing traces on regular grids. The jittered sampling pattern [Citation5] is used for the location of missing traces, and a corrupted version with 50% randomly missing traces is used for each seismic data-set. The computational parameters are chosen empirically to achieve the best results in all our tests.

We define the signal-to-noise ratio (SNR)
\begin{equation*}
\mathrm{SNR} = 10 \cdot \log_{10}\!\left(\frac{\|X\|_F^2}{\|X - X^*\|_F^2}\right)
\end{equation*}
to estimate the reconstruction quality, where $X$ and $X^*$ denote the original data and its reconstruction, respectively. As another performance measure, we also compare the reconstruction of one single trace, i.e. one column of the data matrix. In what follows we therefore present a single trace of the original data that is missing from the observed data (red dashed line) together with the same trace of the reconstructions by IST, LMaFit, WSST and WISD (blue solid line).
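The SNR above is straightforward to compute; the sketch below assumes the logarithm is base 10, the usual decibel convention:

```python
import numpy as np

def snr_db(X, X_rec):
    """SNR = 10 * log10(||X||_F^2 / ||X - X_rec||_F^2)."""
    num = np.linalg.norm(X, 'fro') ** 2
    den = np.linalg.norm(X - X_rec, 'fro') ** 2
    return 10.0 * np.log10(num / den)
```

A higher SNR means a smaller reconstruction error relative to the energy of the original data.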

A synthetic single-shot prestack seismic data-set of size 512×512, denoted by X1 and shown in Figure 1(a), is tested first to show the performance of our approach. The corrupted data with 50% randomly missing traces for X1 is shown in Figure 1(b). White Gaussian noise is added to the sampled data; let δ denote the noise level. For X1, we take q=0.02 for both the WSST and WISD algorithms. Since the visual difference in the reconstructed images is not obvious, we omit the reconstructed images. In the left-half part of Figure 2, we show the comparison of the 100th trace taken from the original data and the reconstructions for the noise-free case (δ=0) by the different algorithms; (a)–(d) are obtained via IST, LMaFit, WSST and WISD, respectively. To illustrate the reconstructed results more clearly, the right-half part of each subfigure presents the corresponding single-trace error, i.e. the difference between the 100th trace of the original data and the same trace of the reconstruction. The improvement by the weighted low-rank methods (i.e. WSST and WISD) is clearly visible in the right-half part of Figure 2.

Figure 1. (a) Original seismic data X1. (b) Corrupted data with 50% missing traces.


Figure 2. Reconstructions for X1 under δ=0. (a)–(d): Reconstructed 100th single traces via IST, LMaFit, WSST and WISD, respectively. Left-half part is the 100th trace comparison, while the right-half part is the corresponding reconstruction error comparison.


Table 1. The SNR and CPU times of reconstructions for X1 by four different algorithms with three different noise levels.

Table 2. The SNR and CPU times of reconstructions for X2 by four different algorithms with three different sampling ratios.

Table 3. The SNR and CPU times of reconstructions for X3 by four different algorithms with three different sampling ratios.

Figure 3. The SNR change of reconstructions by four different algorithms for X1 as the sampling ratio increases from 20 to 90%.


Figure 4. Left column: original data X2 (a) and X3 (c). Right column: (b) and (d) are the corresponding corrupted data with 50% traces missing for X2 and X3, respectively.


Figure 5. Reconstructions for X2. (a)–(d): Reconstructed eighth single traces via IST, LMaFit, WSST and WISD, respectively. Left-half part is the eighth trace comparison, while the right-half part is the corresponding reconstruction error comparison.


Figure 6. Reconstructions for X3. (a)–(d): Reconstructed central single traces via IST, LMaFit, WSST and WISD, respectively. Left-half part is the central trace comparison, while the right-half part is the corresponding reconstruction error comparison.


Figure 7. The SNR change of reconstructions by four different algorithms ((a) for X2 and (b) for X3) as the sampling ratio increases from 20 to 90%.


Table 1 reports the SNR values and CPU times of the reconstructions by the four algorithms under three noise levels (δ=0, δ=0.01 and δ=0.03). High-quality reconstruction is achieved by the weighted low-rank methods, particularly in terms of higher SNR values, but at the cost of more computational time. With its adaptively chosen weights, WISD is faster than WSST and offers more competitive reconstructions. Moreover, the SNR decreases significantly for higher noise levels, in comparison with the noise-free case (δ=0). Figure 3 displays the SNR of the reconstructions by the four algorithms under the noise level δ=0 as the sampling ratio increases from 20 to 90%. The increasing number of sampled measurements indeed leads to improved performance. It is also evident that the SNR values of the reconstructions produced by the proposed weighted low-rank model are higher than those from the standard nuclear-norm minimization model.

In the next experiment, we test our proposed method on two real poststack seismic data-sets. The first, referred to as X2, is a rather simple data-set of size 512×512, shown in Figure 4(a). The second, referred to as X3, is a more complex data-set of size 512×512, shown in Figure 4(c). The corrupted versions with 50% randomly missing traces for X2 and X3 are shown in Figure 4(b) and (d), respectively. For X2, we take q=0.01 in all our computations with both the WSST and WISD algorithms, while we use q=0.02 for X3. The reconstructed results by IST, LMaFit, WSST and WISD are depicted in Figures 5 and 6, respectively. It can be seen clearly from the right-half parts of Figures 5 and 6 that the reconstructions by the weighted low-rank methods have smaller single-trace errors, which illustrates their good performance.

In addition, we report the SNR values and CPU times of the reconstructions of X2 and X3 with three different sampling ratios, summarized in Tables 2 and 3, respectively. The results again indicate that the weighted low-rank model improves the SNR values of the reconstructions, but at the cost of more computational time. We also find that the SNR values of the reconstructions for X3 are worse than those for X2, owing to its more complex structures.

Finally, Figure 7 displays the SNR of the reconstructions by the four algorithms for X2 and X3 as the sampling ratio increases from 20 to 90%: (a) shows the SNR for X2, and (b) for X3.

5 Conclusion

In this paper, we study the reconstruction of seismic data with randomly missing traces. We have presented a weighted nuclear-norm minimization model, based on a texture-patch pre-transformation, to improve the reconstruction. Due to its non-convexity, we define a new approximate estimator model by adding a ridge term to the weighted nuclear-norm penalization. Algorithmically, we introduce two efficient algorithms (WSST and WISD) to approximate the minimizer of this weighted model. The performance of the two algorithms is first tested on a synthetic prestack seismic data-set for the noisy and corrupted cases. In addition, we test our proposed method on the reconstruction of two real poststack seismic data-sets. The results show that the proposed method works efficiently and performs better than a traditional algorithm and a state-of-the-art low-rank factorization algorithm. Extending the weighted matrix completion methods to aliased data with regularly missing traces is future work.

Notes

This work is supported by NSFC [grant number 91330108], [grant number 41374121], [grant number 61327013], [grant number 91230119]; the Fundamental Research Funds for the Central Universities [grant number HIT.BRETIV.201314] and the Program for New Century Excellent Talents in University [grant number NCET-11-0804].

References

  • Kabir M, Verschuur D. Restoration of missing offsets by parabolic Radon transform. Geophys. Prospect. 1995;43:347–368.
  • Xu S, Zhang Y, Pham D, Lambaré G. Antileakage Fourier transform for seismic data regularization. Geophysics. 2005;70:87–95.
  • Naghizadeh M, Innanen K. Seismic data interpolation using a fast generalized Fourier transform. Geophysics. 2011;76:1–10.
  • Herrmann F, Hennenfent G. Non-parametric seismic data recovery with curvelet frames. Geophys. J. Int. 2008;173:233–248.
  • Shahidi R, Tang G, Ma J, Herrmann F. Application of randomized sampling schemes to curvelet-based sparsity-promoting seismic data recovery. Geophys. Prospect. 2013;61:973–997.
  • Trickett S, Burroughs L, Milton A, Walton L, Kelman R. Rank-reduction-based trace interpolation. SEG 80th Annual International Meeting. Expanded Abstracts; 2010; Denver, CO. p. 3829–3833.
  • Oropeza V, Sacchi M. Simultaneous seismic data denoising and reconstruction via multichannel singular spectrum analysis. Geophysics. 2011;76:25–32.
  • Naghizadeh M, Sacchi M. Multidimensional de-aliased Cadzow reconstruction of seismic records. Geophysics. 2013;78:1–5.
  • Yang Y, Ma J, Osher S. Seismic data reconstruction via matrix completion. Inverse Prob. Imaging. 2013;7:1379–1392.
  • Ma J. Three-dimensional irregular seismic data reconstruction via low-rank matrix completion. Geophysics. 2013;78:181–192.
  • Candes E, Recht B. Exact matrix completion via convex optimization. Found. Comput. Math. 2009;9:717–772.
  • Candes E, Tao T. The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inform. Theory. 2010;56:2053–2080.
  • Fazel M. Matrix rank minimization with applications [PhD thesis]. Stanford (CA): Stanford University; 2002.
  • Cai JF, Candes E, Shen ZW. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 2010;20:1956–1982.
  • Toh KC, Yun S. An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems. Pac. J. Optim. 2010;6:615–640.
  • Yang J, Yuan X. Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization. Math. Comput. 2013;82:301–329.
  • Ma S, Goldfarb D, Chen L. Fixed point and Bregman iterative methods for matrix rank minimization. Math. Prog. 2011;128:321–353.
  • Wen Z, Yin W, Zhang Y. Solving a low rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. Math. Prog. Comput. 2012;4:333–361.
  • Gaiffas S, Lecué G. Weighted algorithms for compressed sensing and matrix completion. arXiv:1107.1638; 2011.
  • Keshavan R, Montanari A, Oh S. Matrix completion from a few entries. IEEE Trans. Inform. Theory. 2010;56:2980–2998.
  • Gross D. Recovering low-rank matrices from few coefficients in any basis. IEEE Trans. Inform. Theory. 2011;57:1548–1566.
  • Schaeffer H, Osher S. A low patch-rank interpretation of texture. SIAM J. Imaging Sci. 2013;6:226–262.
  • Wang Y, Yin W. Sparse signal reconstruction via iterative support detection. SIAM J. Imaging Sci. 2010;3:462–491.
  • Majumdar A. Matlab code for IST_MC.m. Available from: http://www.mathworks.com/matlabcentral/fileexchange/26395-matrix-completion-via-thresholding.
