References
- L. Breiman, Random forests, Mach. Learn. 45 (2001), pp. 5–32. doi: 10.1023/A:1010933404324
- M. Hamza and D. Larocque, An empirical comparison of ensemble methods based on classification trees, J. Statist. Comput. Simul. 75 (2005), pp. 629–643. doi: 10.1080/00949650410001729472
- G. Biau, L. Devroye, and G. Lugosi, Consistency of random forests and other averaging classifiers, J. Mach. Learn. Res. 9 (2008), pp. 2015–2033.
- L. Rokach, Taxonomy for characterizing ensemble methods in classification tasks: A review and annotated bibliography, Comput. Stat. Data Anal. 53 (2009), pp. 4046–4072. doi: 10.1016/j.csda.2009.07.017
- D.S. Siroky, Navigating random forests and related advances in algorithmic modeling, Stat. Surveys 3 (2009), pp. 147–163. doi: 10.1214/07-SS033
- A. Verikas, A. Gelzinis, and M. Bacauskiene, Mining data with random forests: A survey and results of new tests, Pattern Recognit. 44 (2011), pp. 330–349. doi: 10.1016/j.patcog.2010.08.011
- A. Hajjem, F. Bellavance, and D. Larocque, Mixed effects regression trees for clustered data, Stat. Probab. Lett. 81 (2011), pp. 451–459. doi: 10.1016/j.spl.2010.12.003
- L. Breiman, J.H. Friedman, R.A. Olshen, and C.J. Stone, Classification and Regression Trees, Wadsworth, Belmont, CA, 1984.
- A.P. Dempster, N.M. Laird, and D.B. Rubin, Maximum likelihood from incomplete data via the EM algorithm, J. R. Statist. Soc. Ser. B 39 (1977), pp. 1–38.
- G.J. McLachlan and T. Krishnan, The EM Algorithm and Extensions, Wiley, New York, 1997.
- R.J. Sela and J.S. Simonoff, RE-EM trees: A data mining approach for longitudinal and clustered data, Mach. Learn. 86 (2012), pp. 169–207. doi: 10.1007/s10994-011-5258-3
- S.W. Raudenbush and A.S. Bryk, Hierarchical Linear Models: Applications and Data Analysis Methods, 2nd ed., Sage, Newbury Park, CA, 2002.
- H. Wu and J.T. Zhang, Nonparametric Regression Methods for Longitudinal Data Analysis: Mixed-Effects Modeling Approaches, Wiley, New York, 2006.
- R Development Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, 2007. Available at www.R-project.org.
- A. Liaw and M. Wiener, Classification and regression by randomForest, R News 2 (2002), pp. 18–22.
- T.M. Therneau and E.J. Atkinson, An introduction to recursive partitioning using the rpart routines, Tech. Rep. 61, Department of Health Sciences Research, Mayo Clinic, Rochester, 1997.
- A.S. Bryk and S.W. Raudenbush, Application of hierarchical linear models to assessing change, Psychol. Bull. 101 (1987), pp. 147–158. doi: 10.1037/0033-2909.101.1.147
- R.I. Jennrich and M.D. Schluchter, Unbalanced repeated-measures models with structured covariance matrices, Biometrics 42 (1986), pp. 805–820. doi: 10.2307/2530695
- C.A. Field and A.H. Welsh, Bootstrapping clustered data, J. R. Statist. Soc. Ser. B 69 (2007), pp. 369–390. doi: 10.1111/j.1467-9868.2007.00593.x
- Y.V. Karpievitch, E.G. Hill, A.P. Leclerc, A.R. Dabney, and J.S. Almeida, An introspective comparison of random forest-based classifiers for the analysis of cluster-correlated data by way of RF++, PLoS ONE 4(9) (2009), Article no. e7087. doi: 10.1371/journal.pone.0007087
- W. Adler, S. Potapov, and B. Lausen, Classification of repeated measurements data using tree-based ensemble methods, Comput. Stat. 26 (2011), pp. 355–369. doi: 10.1007/s00180-011-0249-1