
Literature survey on low rank approximation of matrices

Pages 2212-2244 | Received 18 Aug 2014, Accepted 23 Nov 2016, Published online: 29 Dec 2016

References

  • Friedland S, Mehrmann V, Miedlar A, et al. Fast low rank approximation of matrices and tensors. Electron J Linear Algebra. 2011;22:1031–1048.
  • Available from: http://perception.csl.illinois.edu/matrix-rank/home.html, 2012.
  • Elden L. Numerical linear algebra and applications in data mining. Preliminary version. 2005; Available from: www.mai.liu.se/~laeld
  • Skillicorn D. Understanding complex datasets: data mining with matrix decompositions. Boca Raton: Chapman & Hall/CRC; 2007.
  • Available from: http://web.eecs.utk.edu/research/lsi/
  • Jolliffe IT. Principal component analysis. 2nd ed. Springer series in statistics. New York (NY): Springer; 2002.
  • Park H, Elden L. Matrix rank reduction for data analysis and feature extraction. (Technical report, TR 03-015). University of Minnesota.
  • Lee J, Kim S, Lebanon G, et al. Matrix approximation under local low rank assumption. 2013. arXiv:1301.3192.
  • Murphy KP. Machine learning: a probabilistic perspective. The MIT Press; 2012.
  • Ye J. Generalized low rank approximation of matrices. Mach Learn. 2005;61(1–3):167–191.
  • Espig M, Kishore Kumar N, Schneider J. A note on tensor chain approximation. Comput Visual Sci. 2012;15:331–344.
  • Grasedyck L, Kressner D, Tobler C. A literature survey of low rank tensor approximation techniques. 2013. arXiv:1302.7121.
  • Khoromskij BN. O(d log N)-quantics approximation of N-d tensors in high dimensional numerical modelling. Constr Approx. 2011;34(2):257–280.
  • Khoromskij BN. Tensor numerical methods for higher dimensional partial differential equations: basic theory and initial applications. ESAIM: Proc Surv. 2014;48:1–28.
  • Khoromskij BN. Tensor structured numerical methods in scientific computing: survey on recent advances. Chemom Intell Lab Syst. 2012;110:1–19.
  • Khoromskij BN, Khoromskaia V. Multigrid accelerated tensor approximation of function related multidimensional arrays. SIAM J Sci Comput. 2009;31(4):3002–3026.
  • De Lathauwer L, De Moor B, Vandewalle J. A multilinear singular value decomposition. SIAM J Matrix Anal Appl. 2000;21:1253–1278.
  • Oseledets IV, Tyrtyshnikov EE. TT-cross approximation for multidimensional arrays. Linear Algebra Appl. 2010;432(1):70–88.
  • Datta BN. Numerical linear algebra and applications. Philadelphia: SIAM; 2010.
  • Golub GH, Van Loan CF. Matrix computations. 4th ed. Baltimore: Johns Hopkins University Press; 2013.
  • Sundarapandian V. Numerical linear algebra. Prentice Hall of India Pvt. Ltd.
  • Trefethen LN, Bau D III. Numerical linear algebra. Philadelphia: SIAM; 1997.
  • Stewart GW. Matrix algorithms. Vol. 1: basic decompositions. Philadelphia: SIAM; 1998.
  • Chan TF. Rank revealing QR factorizations. Linear Algebra Appl. 1987;88/89:67–82.
  • Golub GH, Businger P. Linear least squares solutions by Householder transformations. Numer Math. 1965;7:269–276.
  • Golub GH, Stewart GW, Klema V. Rank degeneracy and least squares problems, Technical report STAN-CS-76-559. Computer Science Department, Stanford University; 1976.
  • Gu M, Eisenstat SC. Efficient algorithms for computing a strong rank-revealing QR factorization. SIAM J Sci Comput. 1996;17(4):848–869.
  • Ari I, Cemgil AT, Akarun L. Probabilistic interpolative decomposition. IEEE International Workshop on Machine Learning for Signal Processing. 2012. p. 1–6.
  • Cheng H, Gimbutas Z, Martinsson P-G, et al. On the compression of low rank matrices. SIAM J Sci Comput. 2005;26(4):1389–1404.
  • Liberty E, Woolfe F, Martinsson PG, et al. Randomized algorithms for the low rank approximation of matrices. Proc National Acad Sci. 2007;104(51):20167–20172.
  • Lucas A, Stalzer M, Feo J. Parallel implementation of a fast randomized algorithm for the decomposition of low rank matrices. 2014; arXiv:1205.3830.
  • Deshpande A, Vempala S. Adaptive sampling and fast low-rank matrix approximation. Approximation, randomization and combinatorial optimization, Algorithms and techniques, Lecture notes in computer science. 2006;4110:292–303.
  • Halko N, Martinsson PG, Tropp JA. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev. 2011;53(2):217–288.
  • Kannan R, Vempala S. Spectral algorithms. Found Trends Theor Comput Sci. 2008;4(3–4):157–288.
  • Martinsson PG, Rokhlin V, Tygert M. A randomized algorithm for the decomposition of matrices. Appl Comput Harmon Anal. 2011;30:47–68.
  • Sarlos T. Improved approximation algorithms for large matrices via random projections. Proceedings of the 47th annual IEEE foundations of computer science (FOCS). 2006. p. 143–152.
  • Sapp B. Randomized algorithms for low rank matrix decomposition. (Technical report). Computer and Information Science, University of Pennsylvania; May 2011.
  • Bebendorf M. Adaptive cross approximation of multivariate functions. Constr Approx. 2011;34:149–179.
  • Gantmacher FR. Theory of matrices. New York (NY): Chelsea; 1959.
  • Goreinov SA, Tyrtyshnikov EE, Zamarashkin NL. Pseudoskeleton approximations of matrices. Rep Russ Acad Sci. 1995;342(2):151–152.
  • Tyrtyshnikov EE. Incomplete cross approximation in the mosaic-skeleton method. Computing. 2000;64(4):367–380.
  • Rajamanickam S. Efficient algorithms for sparse singular value decomposition [thesis]. University of Florida. Available from: http://www.cise.ufl.edu/srajaman/Rajamanickam_S.pdf
  • Berry MW, Mezher D, Philippe B, et al. Parallel algorithms for singular value decomposition. Handbook on parallel computing and statistics. CRC Press; 2006. p. 117–164.
  • Hernandez V, Roman JE, Tomas A. A robust and efficient parallel SVD solver based on restarted Lanczos Bidiagonalization. Electron Trans Numer Anal. 2008;31:68–85.
  • Hernandez V, Roman JE, Tomas A. Restarted Lanczos bidiagonalization for the SVD in SLEPc. (Technical Report STR-8). 2007. Available from: http://slepc.upv.es
  • Simon HD, Zha H. Low rank matrix approximation using the Lanczos bidiagonalization process with applications. SIAM J Sci Comput. 2000;21(6):2257–2274.
  • Berry MW. Large scale sparse singular value decomposition. Int J Supercomputer Appl. 1992;6(1):13–49.
  • Berry MW. SVPACK: a FORTRAN 77 software library for the sparse singular value decomposition. Department of Computer Science, University of Tennessee. (Technical report: UT-CS-92-159). 1992.
  • Larsen RM. 2005. Lanczos bidiagonalization with partial reorthogonalization. Available from: http://soi.stanford.edu/~rmunk/PROPACK
  • Faddeev DK, Kublanovskaja VN, Faddeeva VN. Sur les systèmes linéaires algébriques de matrices rectangulaires et mal-conditionnées [Linear algebraic systems with rectangular and ill-conditioned matrices]. Programmation en Mathématiques Numériques, Éditions Centre National de la Recherche Scientifique, Paris VII. 1968;161–170.
  • Foster L. Rank and null space calculations using matrix decomposition without column interchanges. Linear Algebra Appl. 1986;74:47–71.
  • Hong HP, Pan CT. Rank-revealing QR factorizations and singular value decomposition. Math Comput. 1992;58(197):213–232.
  • Chandrasekaran S, Ipsen ICF. On rank-revealing factorizations. SIAM J Matrix Anal Appl. 1994;15:592–622.
  • Boutsidis C, Mahoney MW, Drineas P. On selecting exactly k columns from a matrix. Manuscript. 2008.
  • Chan TF, Hansen PC. Some applications of the rank revealing QR factorization. SIAM J Sci Stat Comput. 1992;13(3):727–741.
  • Berry MW, Pulatova SA, Stewart GW. Computing sparse reduced-rank approximations to sparse matrices. ACM Trans Math Softw. 2005;32(2):252–269.
  • Stewart GW. Four algorithms for the efficient computation of truncated pivoted QR approximations to a sparse matrix. Numer Math. 1999;83:313–323.
  • Broadbent ME, Brown M, Penner K. Subset selection algorithms: randomized vs. deterministic. SIAM Undergraduate Research Online. 2010;3. Faculty advisors: I.C.F. Ipsen and R. Rehman.
  • Boutsidis C, Mahoney MW, Drineas P. An improved approximation algorithm for the column subset selection problem. 2010. arXiv:0812.4293v2 [cs.DS] 12 May.
  • Civril A, Magdon-Ismail M. Column subset selection via sparse approximation of SVD. Theor Comput Sci. 2012;421:1–14.
  • Goreinov SA, Tyrtyshnikov EE. The maximal-volume concept in approximation by low-rank matrices. Contemp Math. 2001;208:47–51.
  • Goreinov SA, Tyrtyshnikov EE, Zamarashkin NL. A theory of pseudo-skeleton approximation. Linear Algebra Appl. 1997;261:1–21.
  • Martinsson PG, Rokhlin V, Shkolnisky Y, et al. ID: a software package for low rank approximation of matrices via interpolative decompositions. Version 0.3. 2013. Available from: http://cims.nyu.edu/tygert/id_doc3.pdf
  • Compton R, Osher S. Hybrid regularization for MRI reconstruction with static field inhomogeneity correction. Inverse Prob Imaging. 2013;7(4):1215–1233.
  • Pan XM, Wei JG, Peng Z, et al. A fast algorithm for multiscale electromagnetic problems using interpolative decompositions and multilevel fast multipole algorithm. Radio Sci. 2012;47(1): RS1011.
  • Frieze A, Kannan R, Vempala S. Fast Monte-Carlo algorithms for finding low-rank approximations. J ACM. 2004;51(6):1025–1041.
  • Frieze A, Kannan R, Vempala S. Fast Monte-Carlo algorithms for finding low-rank approximations. Proceedings of the 39th Annual IEEE Symposium on Foundations of Computer Science. 1998. p. 370–378.
  • Deshpande A, Rademacher L, Vempala S, Wang G. Matrix approximation and projective clustering via volume sampling. Theory Comput. 2006;2:225–247.
  • Drineas P, Kannan R, Mahoney MW. Fast Monte Carlo algorithms for matrices II: computing a low rank approximation to a matrix. SIAM J Comput. 2006;36(1):158–183.
  • Deshpande A, Rademacher L. Efficient volume sampling for row/column subset selection. 2010. arXiv:1004.4057v1 [cs.DS].
  • Boutsidis C, Drineas P, Magdon-Ismail M. Near-optimal column-based matrix reconstruction. 2013. arXiv:1103.0995v3 [cs.DS] 21 Jun.
  • Guruswami V, Sinop K. Optimal column-based low-rank matrix reconstruction. 2012. arXiv:1104.1732v4 [cs.DS] 4 Jan.
  • Arai H, Maung C, Schweitzer H. Optimal column subset selection by A-star search. Proceedings of the 29th AAAI Conference on Artificial Intelligence. 2015. p. 1079–1085.
  • Drineas P, Mahoney MW, Muthukrishnan S. Subspace sampling and relative error matrix approximation: column based methods, APPROX and RANDOM 2006. LNCS. 2006;4110:316–326.
  • Farahat AK, Elgohary A, Ghodsi A, et al. Greedy column subset selection for large-scale data sets. 2013. arXiv:1312.6838.
  • Friedland S, Kaveh M, Niknejad A, et al. Fast Monte-Carlo low rank approximation for matrices. 2005. arXiv: math/0510573v.
  • Har-Peled S. Low rank approximation in linear time. 2014. arXiv:1410.8802 [cs.CG] 31 Oct.
  • Mahoney MW. Randomized algorithms for matrices and data. 2011. arXiv:1104.5557v3 [cs.DS].
  • Maung C, Schweitzer H. Pass-efficient unsupervised feature selection. Adv Neural Inf Proc Syst. 2013;26:1628–1636.
  • Pi Y, Peng H, Zhou S, et al. A scalable approach to column based low rank approximation. Proceedings of the 23rd International Joint Conference on Artificial Intelligence. 2013. p. 1600–1606.
  • Rudelson M, Vershynin R. Sampling from large matrices: an approach through geometric functional analysis. J ACM. 2007;54(4):Article 21.
  • Witten R, Candes E. Randomized algorithms for low-rank matrix factorizations: sharp performance bounds. 2013. arXiv:1308.5697v1 [cs.NA].
  • Woolfe F, Liberty E, Rokhlin V, et al. A fast randomized algorithm for the approximation of matrices. Appl Comput Harmon Anal. 2008;25:335–366.
  • Civril A. Column Subset Selection for approximating data matrices [PhD thesis]. 2009. Available from: http://www.cs.rpi.edu/magdon/LFDlabpublic.html/Theses/civril_ali/AliRPIthesis.pdf
  • Clarkson KL, Woodruff DP. Numerical linear algebra in the streaming model. Proceedings of the 41st ACM Symposium on Theory of Computing. 2009. p. 205–214.
  • Ghashami M, Phillips JM. Relative errors for deterministic low rank matrix approximations. Aug 2013. arXiv:1307.7454v2.
  • Ghashami M, Liberty E, Phillips JM, et al. Frequent directions: simple and deterministic matrix sketching. Apr 2015. arXiv:1501.01711v2.
  • Liberty E. Simple and deterministic matrix sketching. Proceedings of the 19th ACM Conference on Knowledge Discovery and Data Mining. June 2012. arXiv:1206.0594.
  • Woodruff DP. Low rank approximation lower bounds in row-update streams. Proceedings of the 27th Annual Conference on Advances in Neural Information Processing Systems. 2014. p. 1781–1789.
  • Drineas P, Kannan R, Mahoney MW. Fast Monte Carlo algorithms for matrices III: computing a compressed approximate matrix decomposition. SIAM J Comput. 2006;36:184–206.
  • Drineas P, Mahoney MW, Muthukrishnan S. Relative error CUR matrix decompositions. SIAM J Matrix Anal Appl. 2008;30:844–881.
  • Mahoney MW, Drineas P. CUR matrix decompositions for improved data analysis. Proc National Acad Sci. 2009;106(3):697–702.
  • Wang S, Zhang Z, Li J. A scalable CUR matrix decomposition algorithm: lower time complexity and tighter bound. 2012. arXiv:1210.1461.
  • Wang S, Zhang Z. Improving CUR matrix decomposition and the Nyström approximation via adaptive sampling. J Mach Learn Res. 2013;14:2549–2589.
  • Ari I, Simsekli U, Cemgil AT, et al. Large scale polyphonic music transcription using randomized matrix decompositions. Proceedings of Signal Processing Conference (EUSIPCO). 2012. p. 2020–2024.
  • Mitrovic N, Asif MT, Rasheed U, et al. CUR decomposition for compression and compressed sensing of large-scale traffic data. Proceedings of the 16th International IEEE Annual Conference on Intelligent Transportation Systems. 2013.
  • Thurau C, Kersting K, Bauckhage C. Deterministic CUR for improved large-scale data analysis: an empirical study. Proceedings of the 12th SIAM International Conference on Data Mining. 2012. p. 684–695.
  • Caiafa C, Cichocki A. Generalizing the column-row matrix decomposition to multi-way arrays. Linear Algebra Appl. 2010;433(3):557–573.
  • Dasgupta S, Gupta A. An elementary proof of the Johnson–Lindenstrauss lemma. Random Struct Algorithms. 2003;22(1):60–65.
  • Johnson WB, Lindenstrauss J. Extensions of Lipschitz mappings into a Hilbert space. Conference in modern analysis and probability. Vol. 26, Contemporary Mathematics. 1984. p. 189–206.
  • Achlioptas D. Database-friendly random projections. Proc ACM Symp Principles Database Syst. 2001. p. 274–281.
  • Rokhlin V, Szlam A, Tygert M. A randomized algorithm for principal component analysis. SIAM J Matrix Anal Appl. 2009;31(3):1100–1124.
  • Gu M. Subspace iteration randomization and singular value problems. 2014. arXiv:1408.2208.
  • Nguyen NH, Do TT, Tran TT. A fast and efficient algorithm for low rank approximation of a matrix. Proceedings of STOC '09. 2009. p. 215–224.
  • Papadimitriou CH, Raghavan P, Tamaki H, et al. Latent semantic indexing: a probabilistic analysis. J Comput Syst Sci. 2000;61(2):217–235.
  • Vempala S. The random projection method. Vol. 65, DIMACS series in discrete mathematics and theoretical computer science. AMS; 2004.
  • Dehdari V, Deutsch CV. Applications of randomized methods for decomposing and simulating from large covariance matrices. Geostatistics Oslo 2012. 2012. p. 15–26.
  • Drineas E, Drineas P, Huggins P. A randomized singular value decomposition algorithm for image processing applications. Proceedings of the Panhellenic Conference on Informatics. Springer; 2001. p. 278–288.
  • Ji H, Li Y. GPU accelerated randomized singular value decomposition and its application in image compression. Proceedings of the Modeling, Simulation and Visualization Student Capstone Conference. Suffolk; 2014.
  • Li M, Bi W, Kwok JT, et al. Large-scale Nyström kernel matrix approximation using randomized SVD. IEEE Trans Neural Netw Learn Syst. 2015;26(1):152–164.
  • Martinsson PG, Szlam A, Tygert M. Normalized power iterations for computation of SVD. NIPS workshop on low-rank methods for large-scale machine learning 2010. Vancouver, Canada.
  • Xiang H, Zou J. Regularization with randomized SVD for large scale discrete inverse problems. November 2013.
  • Zhang J, Erway J, Hu X, et al. Randomized SVD methods in hyperspectral imaging. J Electr Comput Eng. 2012: Article ID 409357.
  • Achlioptas D, McSherry F. Fast computation of low rank matrix approximations. J ACM. 2007;54(2).
  • Bhojanapalli S, Jain P, Sanghavi S. Tighter low rank approximation via sampling the leveraged element. 2014. arXiv:1410.3886.
  • Ubaru S, Mazumdar A, Saad Y. Low rank approximation using error correcting coding matrices. Proceedings of the 32nd International Conference on Machine Learning. Vol. 37, JMLR: W&CP. 2015.
  • Menon AK, Elkan C. Fast algorithms for approximating the SVD. ACM Trans Knowl Discov Data. 2011;5(2):Article 13, 1–36.
  • Langville AN, Meyer CD, Albright R, et al. Algorithms, initializations, and convergence for the nonnegative matrix factorization. 2014. arXiv:1407.7299.
  • Paatero P, Tapper U. Positive matrix factorization: a nonnegative factor model with optimal utilization of error estimates of data values. Environmetrics. 1994;5:111–126.
  • Lee DD, Seung HS. Learning the parts of objects by nonnegative matrix factorization. Nature. 1999;401:788–791.
  • Lee DD, Seung HS. Algorithms for nonnegative matrix factorization. Adv Neural Inf Proc Syst. 2001;13:556–562.
  • Lin CJ. Projected gradient methods for nonnegative matrix factorization. Neural Comput. 2007;19(10):2756–2779.
  • Cichocki A, Zdunek R. Regularized alternating least squares algorithms for nonnegative matrix/tensor factorization. Advances in neural networks, LNCS. 2007;4493:793–802.
  • Berry MW, Browne M, Langville AN, et al. Algorithms and applications for approximate nonnegative matrix factorization. Comput Stat Data Anal. 2007;52:155–173.
  • Chi EC, Kolda TG. On tensors, sparsity, and nonnegative factorizations. SIAM J Matrix Anal Appl. 2012;33(4):1272–1299.
  • Gillis N. The why and how of nonnegative matrix factorization. Jan 2014. arXiv:1401.5226v1.
  • Kim J, Park H. Toward faster nonnegative matrix factorization: a new algorithm and comparisons. IEEE International Conference on Data Mining (ICDM). 2008. p. 353–362.
  • Kim J, Park H. Fast nonnegative matrix factorization: an active-set-like method and comparisons. SIAM J Sci Comput. 2011;33(6):3261–3281.
  • Flatz M. Algorithms for nonnegative tensor factorization. (Technical report, 05). University of Salzburg; 2013.
  • Cichocki A, Zdunek R, Phan AH, et al. Nonnegative matrix and tensor factorizations. West Sussex: Wiley; 2009.
  • Kolda TG, O'Leary DP. A semidiscrete matrix decomposition for latent semantic indexing in information retrieval. ACM Trans Inf Syst. 1998;16(4):322–346.
  • O'Leary DP, Peleg S. Digital image compression by outer product expansion. IEEE Trans Commun. 1983;31:441–444.
  • Kolda TG, O'Leary DP. Computation and uses of the semidiscrete matrix decomposition. (Technical Report CS-TR-4012 and UMIACS-TR-99-22). Department of Computer Science, University of Maryland, College Park (MD); 1999.
  • Luo Y. An improved semidiscrete matrix decomposition and its application in Chinese information retrieval. Appl Mech Mater. 2013;241–244:3121–3124.
  • Qiang W, XiaoLong W, Yi G. A study of semidiscrete matrix decomposition for LSI in automated text categorization. LNCS. 2005;3248:606–615.
  • Williams CKI, Seeger M. Using the Nyström method to speed up kernel machines. Advances in neural information processing systems 13 (NIPS 2000). MIT Press; 2001.
  • Drineas P, Mahoney MW. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. J Mach Learn Res. 2005;6:2153–2175.
  • Gittens A, Mahoney MW. Revisiting the Nyström method for improved large scale machine learning. JMLR W&CP. 2013;28(3):567–575. arXiv:1303.1849v2.
  • Talwalkar A, Rostamizadeh A. Matrix coherence and the Nyström method. Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence. 2010. arXiv:1004.2008v1.
  • Zhang K, Tsang IW, Kwok JT. Improved Nyström low rank approximation and error analysis. ICML. 2008. p. 1232–1239.
  • Cortes C, Kumar S, Mohri M, et al. Very large-scale low rank approximation.
  • Kumar S, Mohri M, Talwalkar A. Sampling methods for the Nyström method. J Mach Learn Res. 2012;13:981–1006.
  • Li M, Kwok JT, Lu BL. Making large-scale Nyström approximation possible. Proceedings of the 27th International Conference on Machine Learning. Haifa, Israel; 2010. p. 631–638.
  • Nemtsov A, Averbuch A, Schclar A. Matrix compression using the Nyström method. Intell. data Anal. 2016;20:997–1019. Available from: http://www.cs.tau.ac.il/~amir1/PS/Subsampling.pdf
  • Civril A, Magdon-Ismail M. On selecting a maximum volume submatrix of a matrix and related problems. Theor Comput Sci. 2009;410:4801–4811.
  • Bebendorf M. Hierarchical matrices. Vol. 63, Lecture notes in computational science and engineering. Berlin: Springer; 2008.
  • Bebendorf M. Approximation of boundary element matrices. Numer Math. 2000;86:565–589.
  • Goreinov SA, Oseledets IV, Savostyanov DV, et al. How to find a good submatrix. (Research Report 08-10). Kowloon Tong, Hong Kong: ICM HKBU; 2008. Reprinted: Singapore: World Scientific; 2010. p. 247–256.
  • Goreinov SA, Tyrtyshnikov EE. Quasioptimality of skeleton approximation of a matrix in the Chebyshev norm. Doklady Math. 2011;83(3):374–375.
  • Goreinov SA, Zamarashkin NL, Tyrtyshnikov EE. Pseudo-skeleton approximations by matrices of maximal volume. Math Notes. 1997;62(4):515–519.
  • Zhu X, Lin W. Randomized pseudo-skeleton approximation and its applications in electromagnetics. Electron Lett. 2011;47(10):590–592.
  • Carin L. Fast electromagnetic solvers for large scale naval scattering problems. 2008. (Technical Report).
  • Chiu J, Demanet L. Sublinear randomized algorithms for skeleton decompositions. SIAM J Matrix Anal Appl. 2013;34(3):1361–1383.
  • Schmidt E. Zur Theorie der linearen und nichtlinearen Integralgleichungen I [On the theory of linear and nonlinear integral equations I]. Math Ann. 1907;63:433–476.
  • Cheney EW, Light WA. Approximation theory in tensor product spaces. Vol. 1169, Lecture notes in mathematics. Berlin: Springer-Verlag; 1985.
  • Cheney EW. The best approximation of multivariate functions by combinations of univariate ones. In: Approximation theory IV (College Station, TX). New York (NY): Academic Press; 1983.
  • Schneider J. Error estimates for two-dimensional cross approximation. J Approx Theory. 2010;162(9):1685–1700.
  • Naraparaju KK, Schneider J. Generalized cross approximation for 3d-tensors. Comput Vis Sci. 2011;14(3):105–115.
  • Oseledets IV, Savostyanov DV, Tyrtyshnikov EE. Tucker dimensionality reduction of the three dimensional arrays in linear time. SIAM J Matrix Anal Appl. 2008;30(3):939–956.
  • Aminfar A, Ambikasaran S, Darve E. A fast block low-rank dense solver with applications to finite element matrices. 2014. arXiv:1403.5337v1.
  • Bebendorf M, Rjasanow S. Adaptive low rank approximation of collocation matrices. Computing. 2003;70:1–24.
  • Flad HJ, Khoromskij BN, Savostyanov D, et al. Verification of the cross 3D algorithm on quantum chemistry data. Russ J Numer Anal Math Model. 2008;23(4):329–344.
  • Hackbusch W. Hierarchische Matrizen: Algorithmen und Analysis [Hierarchical matrices: algorithms and analysis]. Berlin: Springer; 2009.
  • Kurz S, Rain O, Rjasanow S. The adaptive cross approximation technique for the 3-D boundary element method. IEEE Trans Magn. 2002;38(2):421–424.
  • Rjasanow S, Steinbach O. The fast solution of boundary integral equations. New York: Springer; 2007.
  • Rogus B. The adaptive cross approximation algorithm applied to electromagnetic scattering by bodies of revolution. Duquesne University; 2008.
  • Zhao K, Vouvakis MN, Lee JF. The adaptive cross approximation algorithm for accelerated method of moments computations of EMC problems. IEEE Trans Electromagn Compat. 2005;47(4):763–773.
