Original Articles

Investigating the Effect of Randomly Selected Feature Subsets on Bagging and Boosting

Pages 636-646 | Received 13 Jul 2012, Accepted 15 Mar 2013, Published online: 10 Sep 2014

References

  • Asuncion, A., Newman, D. (2007). UCI Machine Learning Repository. Irvine, CA: University of California, School of Information and Computer Science. Available at: http://www.ics.uci.edu/~mlearn/MLRepository.html.
  • Breiman, L. (1996). Bagging predictors. Machine Learning 24(2):123–140.
  • Breiman, L. (1998). Arcing classifiers. Annals of Statistics 26(3):801–849.
  • Breiman, L., Friedman, J., Olshen, R., Stone, C. (1984). Classification and Regression Trees. New York: Chapman & Hall.
  • Chandra, A., Yao, X. (2006). Evolving hybrid ensembles of learning machines for better generalisation. Neurocomputing 69(7–9):686–700.
  • Dietterich, T. (2000). An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting and randomization. Machine Learning 40(2):139–157.
  • Efron, B., Tibshirani, R. (1993). An Introduction to the Bootstrap. New York: Chapman & Hall.
  • Freund, Y., Schapire, R. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 55(1):119–139.
  • Hansen, L.K., Salamon, P. (1990). Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence 12(10):993–1001.
  • Ho, T.K. (1998). The random subspace method for constructing decision forests. IEEE Transactions on Pattern Analysis and Machine Intelligence 20(8):832–844.
  • Meir, R., Rätsch, G. (2003). An introduction to boosting and leveraging. In: Advanced Lectures on Machine Learning. Lecture Notes in Computer Science 2600:118–183.
  • Opitz, D., Maclin, R. (1999). Popular ensemble methods: An empirical study. Journal of Artificial Intelligence Research 11:169–198.
  • Opitz, D.W., Shavlik, J.W. (1996). Generating accurate and diverse members of a neural-network ensemble. Advances in Neural Information Processing Systems 8:535–541.
  • Rodríguez, J.J., Kuncheva, L.I., Alonso, C.J. (2006). Rotation forest: A new classifier ensemble method. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(10):1619–1630.
  • Rokach, L. (2010). Ensemble-based classifiers. Artificial Intelligence Review 33(1–2):1–39.
  • Schapire, R.E., Freund, Y., Bartlett, P., Lee, W.S. (1998). Boosting the margin: A new explanation for the effectiveness of voting methods. Annals of Statistics 26(5):1651–1686.
  • Skurichina, M., Duin, R.P.W. (2005). Combining feature subsets in feature selection. In: Multiple Classifier Systems. Lecture Notes in Computer Science 3541:165–175.
  • Tresp, V. (2001). Committee machines. In: Hu, Y.H., Hwang, J.N., eds. Handbook of Neural Network Signal Processing. Boca Raton, FL: CRC Press.
  • Turney, P. (1995). Technical note: Bias and the quantification of stability. Machine Learning 20(1–2):23–33.
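The idea named in the title — pairing bagging with randomly selected feature subsets — can be illustrated with a short sketch that combines Breiman's (1996) bootstrap aggregation with Ho's (1998) random subspace method: each base learner is fit on a bootstrap replicate of the rows restricted to a random subset of the columns, and predictions are combined by majority vote. This is a generic illustration, not the authors' actual experimental procedure; the decision-stump base learner, toy data, and all function names are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(X, y):
    """Best single-feature threshold classifier (decision stump)."""
    best = (0, 0.0, 1, -1.0)  # (feature, threshold, sign, accuracy)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(sign * (X[:, j] - t) > 0, 1, 0)
                acc = np.mean(pred == y)
                if acc > best[3]:
                    best = (j, t, sign, acc)
    return best[:3]

def predict_stump(stump, X):
    j, t, sign = stump
    return np.where(sign * (X[:, j] - t) > 0, 1, 0)

def bag_with_subspaces(X, y, n_estimators=25, subset_frac=0.5):
    """Bagging in which each bootstrap replicate also sees only a
    random subset of the features (cf. Ho's random subspace method)."""
    n, d = X.shape
    k = max(1, int(subset_frac * d))
    ensemble = []
    for _ in range(n_estimators):
        rows = rng.integers(0, n, size=n)            # bootstrap sample of rows
        cols = rng.choice(d, size=k, replace=False)  # random feature subset
        stump = fit_stump(X[np.ix_(rows, cols)], y[rows])
        ensemble.append((cols, stump))
    return ensemble

def predict(ensemble, X):
    # Majority vote over the ensemble; each stump sees only its own columns.
    votes = np.mean([predict_stump(s, X[:, cols]) for cols, s in ensemble], axis=0)
    return (votes >= 0.5).astype(int)

# Toy data: the class is determined by the sum of the first two features,
# so only some feature subsets are informative.
X = rng.normal(size=(200, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
ens = bag_with_subspaces(X, y)
acc = np.mean(predict(ens, X) == y)
```

Because each stump can split on only one feature, individual learners are weak on this two-feature concept; the vote across bootstrap replicates and feature subsets is what recovers accuracy, which is the diversity effect the ensemble literature above studies.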
