REFERENCES

  • Amit, Y., and Geman, D. (1997), “Shape Quantization and Recognition With Randomized Trees,” Neural Computation, 9, 1545–1588.
  • Biau, G. (2012), “Analysis of a Random Forests Model,” Journal of Machine Learning Research, 13, 1063–1095.
  • Biau, G., Devroye, L., and Lugosi, G. (2008), “Consistency of Random Forests and Other Averaging Classifiers,” Journal of Machine Learning Research, 9, 2015–2033.
  • Breiman, L. (1996), “Bagging Predictors,” Machine Learning, 24, 123–140.
  • Breiman, L. (2000), “Some Infinity Theory for Predictor Ensembles,” Technical Report 577, Department of Statistics, University of California, Berkeley.
  • Breiman, L. (2001), “Random Forests,” Machine Learning, 45, 5–32.
  • Breiman, L., Friedman, J., Olshen, R., and Stone, C. (1984), Classification and Regression Trees, Pacific Grove, CA: Wadsworth International.
  • Bureau, A., Dupuis, J., Falls, K., Lunetta, K.L., Hayward, B., Keith, T.P., and Van Eerdewegh, P. (2005), “Identifying SNPs Predictive of Phenotype Using Random Forests,” Genetic Epidemiology, 28, 171–182.
  • Chipman, H.A., George, E.I., and McCulloch, R.E. (2010), “BART: Bayesian Additive Regression Trees,” Annals of Applied Statistics, 4, 266–298.
  • Cutler, A., and Zhao, G. (2001), “PERT – Perfect Random Tree Ensembles,” Computing Science and Statistics, 33, 490–497.
  • Díaz-Uriarte, R., and De Andres, S.A. (2006), “Gene Selection and Classification of Microarray Data Using Random Forest,” BMC Bioinformatics, 7, 3.
  • Dietterich, T. (2000), “An Experimental Comparison of Three Methods for Constructing Ensembles of Decision Trees: Bagging, Boosting and Randomization,” Machine Learning, 40, 139–157.
  • Friedman, J.H. (2001), “Greedy Function Approximation: A Gradient Boosting Machine,” Annals of Statistics, 29, 1189–1232.
  • Friedman, J.H., Hastie, T., and Tibshirani, R. (2010), “Regularization Paths for Generalized Linear Models via Coordinate Descent,” Journal of Statistical Software, 33, 1–22.
  • Geurts, P., Ernst, D., and Wehenkel, L. (2006), “Extremely Randomized Trees,” Machine Learning, 63, 3–42.
  • Ishwaran, H., Kogalur, U.B., Blackstone, E.H., and Lauer, M.S. (2008), “Random Survival Forests,” The Annals of Applied Statistics, 2, 841–860.
  • Kim, H., and Loh, W.-Y. (2001), “Classification Trees With Unbiased Multiway Splits,” Journal of the American Statistical Association, 96, 589–604.
  • Lin, Y., and Jeon, Y. (2006), “Random Forests and Adaptive Nearest Neighbors,” Journal of the American Statistical Association, 101, 578–590.
  • Lunetta, K.L., Hayward, L.B., Segal, J., and Van Eerdewegh, P. (2004), “Screening Large-Scale Association Study Data: Exploiting Interactions Using Random Forests,” BMC Genetics, 5, 32.
  • Murthy, K.V.S., and Salzberg, S.L. (1995), “On Growing Better Decision Trees From Data,” Ph.D. dissertation, Johns Hopkins University, Baltimore, MD.
  • Murthy, S.K., Kasif, S., and Salzberg, S. (1994), “A System for Induction of Oblique Decision Trees,” arXiv preprint cs/9408103.
  • Statnikov, A., Wang, L., and Aliferis, C.F. (2008), “A Comprehensive Comparison of Random Forests and Support Vector Machines for Microarray-Based Cancer Classification,” BMC Bioinformatics, 9, 319.
  • Strobl, C., Boulesteix, A.-L., Zeileis, A., and Hothorn, T. (2007), “Bias in Random Forest Variable Importance Measures: Illustrations, Sources and a Solution,” BMC Bioinformatics, 8, 25.
  • Sutton, R.S., and Barto, A.G. (1998), Reinforcement Learning: An Introduction, Cambridge, MA: MIT Press.
  • Tibshirani, R. (1996), “Regression Shrinkage and Selection via the Lasso,” Journal of the Royal Statistical Society, Series B, 58, 267–288.
  • van der Vaart, A., and Wellner, J. (1996), Weak Convergence and Empirical Processes: With Applications to Statistics, New York: Springer.
  • Zhao, P., and Yu, B. (2006), “On Model Selection Consistency of Lasso,” Journal of Machine Learning Research, 7, 2541–2563.
  • Zhu, R., and Kosorok, M.R. (2012), “Recursively Imputed Survival Trees,” Journal of the American Statistical Association, 107, 331–340.
