REFERENCES
- Achlioptas, D. (2003), Database-Friendly Random Projections: Johnson-Lindenstrauss With Binary Coins, Journal of Computer and System Sciences, 66, 671–687.
- Amaldi, E., and Kann, V. (1998), On The Approximation of Minimizing Non-Zero Variables or Unsatisfied Relations in Linear Systems, Theoretical Computer Science, 209, 237–260.
- Armagan, A., Dunson, D.B., and Lee, J. (2013), Generalized Double Pareto Shrinkage, Statistica Sinica, 23, 119–143.
- Bhattacharya, A., Pati, D., Pillai, N., and Dunson, D.B. (2012), Bayesian Shrinkage, available at arXiv preprint arXiv:1212.6088.
- Candès, E.J., Romberg, J., and Tao, T. (2006), Stable Signal Recovery From Incomplete and Inaccurate Measurements, Communications on Pure and Applied Mathematics, 59, 1207–1223.
- Candès, E.J., and Tao, T. (2005), Decoding by Linear Programming, IEEE Transactions on Information Theory, 51, 4203–4215.
- ——— (2007), The Dantzig Selector: Statistical Estimation When p Is Much Larger Than n, The Annals of Statistics, 35, 2313–2351.
- Carvalho, C.M., Polson, N.G., and Scott, J.G. (2009), Handling Sparsity via The Horseshoe, Journal of Machine Learning Research, 5, 73–80.
- ——— (2010), The Horseshoe Estimator for Sparse Signals, Biometrika, 97, 465–480.
- Cook, R.D. (1998), Regression Graphics: Ideas for Studying Regressions Through Graphics, New York: Wiley.
- Dasgupta, S. (2013), Experiments With Random Projection, available at arXiv preprint arXiv:1301.3849.
- Dasgupta, S., and Freund, Y. (2008), “Random Projection Trees and Low Dimensional Manifolds,” in Proceedings of the 40th Annual ACM Symposium on Theory of Computing, pp. 537–546.
- Dasgupta, S., and Gupta, A. (2003), An Elementary Proof of The Theorem of Johnson and Lindenstrauss, Random Structures and Algorithms, 22, 60–65.
- Davenport, M., Boufounos, P.T., Wakin, M., and Baraniuk, R. (2010), Signal Processing With Compressive Measurements, IEEE Journal of Selected Topics in Signal Processing, 4, 445–460.
- Davenport, M., Duarte, M., Wakin, M., Laska, J., Takhar, D., Kelly, K., and Baraniuk, R. (2007), “The Smashed Filter for Compressive Classification and Target Recognition,” in Proceedings of Computational Imaging V.
- Donoho, D. (2006), Compressed Sensing, IEEE Transactions on Information Theory, 52, 1289–1306.
- DuMouchel, W. (1999), Data Squashing: Constructing Summary Data Sets, available at http://www.cs.princeton.edu/courses/archive/spr04/cos598B/bib/DuMouchel.pdf.
- Dunson, D.B., Watson, M., and Taylor, J.A. (2003), Bayesian Latent Variable Models for Median Regression on Multiple Outcomes, Biometrics, 59, 296–304.
- Faes, C., Ormerod, J.T., and Wand, M.P. (2011), Variational Bayesian Inference for Parametric and Nonparametric Regression With Missing Data, Journal of the American Statistical Association, 106, 959–971.
- Fard, M.M., Grinberg, Y., Pineau, J., and Precup, D. (2012), “Compressed Least-Squares Regression on Sparse Spaces,” in Proceedings of the 26th AAAI Conference on Artificial Intelligence, pp. 1054–1060.
- Fradkin, D., and Madigan, D. (2003), “Experiments With Random Projections for Machine Learning,” in Proceedings of the 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 517–522.
- Ghosal, S., Ghosh, J.K., and Van Der Vaart, A.W. (2000), Convergence Rates of Posterior Distributions, The Annals of Statistics, 28, 500–531.
- Ghosal, S., and Van Der Vaart, A.W. (2001), Entropies and Rates of Convergence for Bayes and Maximum Likelihood Estimation for Mixture of Normal Densities, The Annals of Statistics, 29, 1233–1263.
- ——— (2007), Convergence Rates of Posterior Distributions for Non-i.i.d. Observations, The Annals of Statistics, 35, 192–223.
- Girolami, M., and Rogers, S. (2006), Variational Bayesian Multinomial Probit Regression With Gaussian Process Priors, Neural Computation, 18, 1790–1817.
- Gramacy, R.B. (2010), Estimation for Multivariate Normal and Student-t Data With Monotone Missingness, monomvn Package Manual. Available at http://cran.r-project.org/web/packages/monomvn/monomvn.pdf.
- Hans, C. (2009), Bayesian Lasso Regression, Biometrika, 96, 835–845.
- Hastie, T., and Efron, B. (2012), lars Package Manual. Available at http://cran.r-project.org/web/packages/lars/lars.pdf.
- Jiang, W. (2007), Bayesian Variable Selection for High Dimensional Generalized Linear Models: Convergence Rates of The Fitted Densities, The Annals of Statistics, 35, 1487–1511.
- Joshi, C.M., and Bissu, S.K. (1991), Some Inequalities of Bessel and Modified Bessel Functions, Journal of the Australian Mathematical Society, Series A, 50, 333–342.
- Krishnan, S., Bhattacharyya, C., and Hariharan, R. (2007), A Randomized Algorithm for Large Scale Support Vector Learning, in Advances in Neural Information Processing Systems (NIPS), 20, eds. J.C. Platt, D. Koller, Y. Singer, and S. Roweis, Cambridge, MA: MIT Press.
- Lee, H.K.H., Taddy, M., and Gray, G.A. (2008), Selection of a Representative Sample, Journal of Classification, 27, 41–53.
- Li, P., Shrivastava, A., Moore, J., and König, A.C. (2011), Hashing Algorithms for Large-Scale Learning, Advances in Neural Information Processing Systems (NIPS), 24, 2672–2680.
- Madigan, D., Raghavan, N., and Dumouchel, W. (2002), Likelihood-Based Data Squashing: A Modeling Approach to Instance Construction, Data Mining and Knowledge Discovery, 6, 173–190.
- Maillard, O.A., and Munos, R. (2009), Compressed Least-Squares Regression, CiteSeerX 10.1.1.153.8922.
- Olive, P.L., Banath, J.P., and Durand, R.E. (1990), Heterogeneity in Radiation-Induced DNA Damage and Repair in Tumour and Normal Cells Measured Using The Comet Assay, Radiation Research, 112, 86–94.
- Ormerod, J.T., and Wand, M.P. (2012), Gaussian Variational Approximate Inference for Generalized Linear Mixed Models, Journal of Computational and Graphical Statistics, 21, 2–17.
- Owen, A. (2003), Data Squashing by Empirical Likelihood, Data Mining and Knowledge Discovery, 7, 101–113.
- Park, T., and Casella, G. (2008), The Bayesian Lasso, Journal of the American Statistical Association, 103, 681–686.
- Raftery, A.E., Madigan, D., and Hoeting, J.A. (1997), Bayesian Model Averaging for Linear Regression Models, Journal of the American Statistical Association, 92, 179–191.
- Ripley, B. (2012), MASS Package Manual. Available at http://cran.r-project.org/web/packages/MASS/MASS.pdf.
- Shi, Q., Petterson, J., Dror, G., Langford, J., Smola, A., and Vishwanathan, S.V.N. (2009), Hash Kernels for Structured Data, Journal of Machine Learning Research, 10, 2615–2637.
- Strawn, N., Armagan, A., Saab, R., Carin, L., and Dunson, D.B. (2012), Finite Sample Posterior Concentration in High Dimensional Regression, available at arXiv preprint arXiv:1207.4854.
- Tibshirani, R. (1996), Regression Shrinkage and Selection via the Lasso, Journal of the Royal Statistical Society, Series B, 58, 267–288.
- Titsias, M.K., and Lawrence, N.D. (2010), “Bayesian Gaussian Process Latent Variable Model,” in Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 844–851.
- Tokdar, S.T., Zhu, Y.M., and Ghosh, J.K. (2010), Bayesian Density Regression With Logistic Gaussian Process and Subspace Projection, Bayesian Analysis, 5, 319–344.
- Zhou, H., Li, L., and Zhu, H. (2012), Tensor Regression With Application in Neuroimaging Data Analysis, available at arXiv preprint arXiv:1203.3209.
- Zhou, S., Lafferty, J., and Wasserman, L. (2009), Compressed and Privacy-Sensitive Sparse Regression, IEEE Transactions on Information Theory, 55, 846–866.
- Zou, H., and Hastie, T. (2005), Regularization and Variable Selection via the Elastic Net, Journal of the Royal Statistical Society, Series B, 67, 301–320.