References
- Akay, K. U., and M. Tez. 2011. Alternative modeling techniques for the quantal response data in mixture experiments. Journal of Applied Statistics 38 (11):2597–616. doi:https://doi.org/10.1080/02664763.2011.559214.
- Barboza, F., H. Kimura, and E. Altman. 2017. Machine learning models and bankruptcy prediction. Expert Systems with Applications 83:405–17. doi:https://doi.org/10.1016/j.eswa.2017.04.006.
- Bolker, B., and R Development Core Team. 2017. bbmle: Tools for general maximum likelihood estimation. R package.
- Box, G. E. P., and N. R. Draper. 2007. Response surfaces, mixtures, and ridge analyses. 2nd ed. New Jersey: John Wiley & Sons, Inc.
- Bozdogan, H. 2000. Akaike’s information criterion and recent developments in Information Complexity. Journal of Mathematical Psychology 44 (1):62–91. doi:https://doi.org/10.1006/jmps.1999.1277.
- Brown, L., A. N. Donev, and A. C. Bissett. 2015. General blending models for data from mixture experiments. Technometrics 57 (4):449–56. doi:https://doi.org/10.1080/00401706.2014.947003.
- Bühlmann, P., and T. Hothorn. 2007. Boosting algorithms: regularization, prediction and model fitting. Statistical Science 22 (4):477–505. doi:https://doi.org/10.1214/07-STS242.
- Bühlmann, P., and B. Yu. 2003. Boosting with the L2 loss: Regression and classification. Journal of the American Statistical Association 98 (462):324–39. doi:https://doi.org/10.1198/016214503000125.
- Cao, D.-S., Q.-S. Xu, Y.-Z. Liang, L.-X. Zhang, and H.-D. Li. 2010. The boosting: A new idea of building models. Chemometrics and Intelligent Laboratory Systems 100 (1):1–11. doi:https://doi.org/10.1016/j.chemolab.2009.09.002.
- Chen, J. J., L. A. Li, and C. D. Jackson. 1996. Analysis of quantal response data from mixture experiments. Environmetrics 7 (5):503–12. doi:https://doi.org/10.1002/(SICI)1099-095X(199609)7:5<503::AID-ENV227>3.3.CO;2-5.
- Cornell, J. A. 2002. Experiments with mixtures: Designs, models, and the analysis of mixture data. 3rd ed. New York: John Wiley & Sons, Inc.
- Cribari-Neto, F., and A. Zeileis. 2010. Beta Regression in R. Journal of Statistical Software 34 (2):1–24. doi:https://doi.org/10.18637/jss.v034.i02.
- Cruz-Salgado, J. 2016. Comparing the intercept mixture model with the Slack-variable mixture model. Ingeniería, Investigación y Tecnología 17 (3):383–93. doi:https://doi.org/10.1016/j.riit.2016.07.008.
- de Menezes, F. S., G. R. Liska, M. A. Cirillo, and M. J. Vivanco. 2017. Data classification with binary response through the Boosting algorithm and logistic regression. Expert Systems with Applications 69:62–73. doi:https://doi.org/10.1016/j.eswa.2016.08.014.
- Ferrari, S., and F. Cribari-Neto. 2004. Beta regression for modelling rates and proportions. Journal of Applied Statistics 31 (7):799–815. doi:https://doi.org/10.1080/0266476042000214501.
- Freund, Y., and R. E. Schapire. 1996. Experiments with a new boosting algorithm. In Proceedings of the Thirteenth International Conference on Machine Learning, 148–56. Bari, Italy: Morgan Kaufmann Publishers Inc.
- Friedman, J., T. Hastie, and R. Tibshirani. 2000. Additive logistic regression: A statistical view of boosting. The Annals of Statistics 28 (2):337–407. doi:https://doi.org/10.1214/aos/1016120463.
- Friedman, J. H. 2001. Greedy function approximation: A gradient boosting machine. The Annals of Statistics 29 (5):1189–232. doi:https://doi.org/10.1214/aos/1013203451.
- Pawlowsky-Glahn, V., J. J. Egozcue, and R. Tolosana-Delgado. 2015. Modeling and analysis of compositional data. United Kingdom: John Wiley & Sons, Inc.
- Hofner, B., A. Mayr, N. Robinzonov, and M. Schmid. 2014. Model-based boosting in R: A hands-on tutorial using the R package mboost. Computational Statistics 29 (1-2):3–35. doi:https://doi.org/10.1007/s00180-012-0382-5.
- Hosmer, D. W., Jr., S. Lemeshow, and R. X. Sturdivant. 2013. Applied logistic regression. 3rd ed. Wiley Series in Probability and Statistics. New York: Wiley.
- Kruppa, J., A. Schwarz, G. Arminger, and A. Ziegler. 2013. Consumer credit risk: Individual probability estimates using machine learning. Expert Systems with Applications 40 (13):5125–31. doi:https://doi.org/10.1016/j.eswa.2013.03.019.
- Lawson, J., and C. Willden. 2016. Mixture Experiments in R Using mixexp. Journal of Statistical Software 72:1–20 (Code Snippet 2). doi:https://doi.org/10.18637/jss.v072.c02.
- Liska, G., F. De Menezes, M. Cirillo, F. Borém, R. Cortez, and D. Ribeiro. 2015. Evaluation of sensory panels of consumers of specialty coffee beverages using the boosting method in discriminant analysis. Semina: Ciências Agrárias 36 (6):3671–80. doi:https://doi.org/10.5433/1679-0359.2015v36n6p3671.
- López, F. O. 2013. A Bayesian approach to parameter estimation in simplex regression model: A comparison with beta regression. Revista Colombiana de Estadística 36 (1):1–21.
- McCulloch, C. E., S. R. Searle, and J. M. Neuhaus. 2008. Generalized, linear, and mixed models. New York: John Wiley & Sons, Inc.
- Menard, S. 2010. Logistic regression: From introductory to advanced concepts and applications. Los Angeles: Sage Publications, Inc.
- Moral, R. A., J. Hinde, and C. G. B. Demétrio. 2017. Half-normal plots and overdispersed models in R: The hnp Package. Journal of Statistical Software 81 (10):1–23. doi:https://doi.org/10.18637/jss.v081.i10.
- Natekin, A., and A. Knoll. 2013. Gradient boosting machines, a tutorial. Frontiers in Neurorobotics 7:1–21. doi:https://doi.org/10.3389/fnbot.2013.00021.
- Obermeyer, Z., and E. J. Emanuel. 2016. Predicting the future: Big data, machine learning, and clinical medicine. New England Journal of Medicine 375 (13):1216–9. doi:https://doi.org/10.1056/NEJMp1606181.
- R Core Team. 2017. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing.
- Rizzo, M. L. 2007. Statistical computing with R. London: Chapman and Hall/CRC.
- Schmid, M., F. Wickler, K. O. Maloney, R. Mitchell, N. Fenske, and A. Mayr. 2013. Boosted Beta Regression. PLoS One 8 (4):e61623. doi:https://doi.org/10.1371/journal.pone.0061623.
- Souza, T. C., and F. Cribari-Neto. 2015. Intelligence, religiosity and homosexuality non-acceptance: Empirical evidence. Intelligence 52:63–70. doi:https://doi.org/10.1016/j.intell.2015.07.003.
- Warton, D. I., and F. K. C. Hui. 2011. The arcsine is asinine: The analysis of proportions in ecology. Ecology 92 (1):3–10. doi:https://doi.org/10.1890/10-0340.1.
- Yeh, C.-C., D.-J. Chi, and Y.-R. Lin. 2014. Going-concern prediction using hybrid random forests and rough set approach. Information Sciences 254:98–110. doi:https://doi.org/10.1016/j.ins.2013.07.011.
- Zhang, P., Z. Qiu, Z. Peng, and Q. ZenGuo. 2014. Regression analysis of proportional data using simplex distribution. Scientia Sinica Mathematica 44 (1):89–104. doi:https://doi.org/10.1360/012013-200.