References
- ABA. (2020). The good, bad and ugly of new risk-assessment tech in criminal justice. American Bar Association. Chicago. Retrieved from https://www.americanbar.org/news/abanews/aba-news-archives/2020/02/the-good–bad-and-ugly-of-new-risk-assessment-tech-in-criminal-j/
- Ægisdóttir, S., White, M. J., Spengler, P. M., Maugherman, A. S., Anderson, L. A., Cook, R. S., … Rush, J. D. (2006). The meta-analysis of clinical judgment project: Fifty-six years of accumulated research on clinical versus statistical prediction. The Counseling Psychologist, 34(3), 341–382. doi:https://doi.org/10.1177/0011000005285875
- Altheide, D. L., & Coyle, M. J. (2006). Smart on crime: The new language for prisoner release. Crime, Media, Culture: An International Journal, 2(3), 286–303. doi:https://doi.org/10.1177/1741659006069561
- Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. New York. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
- Bartolucci, A. A., Singh, K. P., & Bae, S. (2016). Introduction to statistical analysis of laboratory data (1st ed.). Wiley. doi:https://doi.org/10.1002/9781118736890
- Bechtel, K., Holsinger, A. M., Lowenkamp, C. T., & Warren, M. J. (2017). A meta-analytic review of pretrial research: Risk assessment, bond type, and interventions. American Journal of Criminal Justice, 42(2), 443–467. doi:https://doi.org/10.1007/s12103-016-9367-1
- Bechtel, K., Revicki, J., & Champney, J. (2017, April). The importance of data-driven risk assessment: Delaware’s story. ICRN Conference. Boston: Crime and Justice Institute.
- Berk, R., Heidari, H., Jabbari, S., Kearns, M., & Roth, A. (2018). Fairness in criminal justice risk assessments: The state of the art. Sociological Methods & Research, 1–42. doi:https://doi.org/10.1177/0049124118782533
- Brauneis, R., & Goodman, E. P. (2018). Algorithmic transparency for the smart city. The Yale Journal of Law and Technology, 20, 103–175. doi:https://doi.org/10.2139/ssrn.3012499
- Brigham, K. (2019). Courts and police departments are turning to AI to reduce bias, but some argue it’ll make the problem worse. CNBC. Englewood Cliffs, NJ. Retrieved from https://www.cnbc.com/2019/03/16/artificial-intelligence-algorithms-in-the-criminal-justice-system.html
- Bzdok, D., Altman, N., & Krzywinski, M. (2018). Statistics versus machine learning. Nature Methods, 15(4), 233–234. doi:https://doi.org/10.1038/nmeth.4642
- Cadigan, T. P., & Lowenkamp, C. T. (2011). Implementing risk assessment in the federal pretrial services system. Federal Probation, 75(2), 30–34.
- Coglianese, C., & Ben Dor, L. M. (2020). AI in adjudication and administration. Brooklyn Law Review (2118).
- Cohen, T. H., & Lowenkamp, C. (2018). Revalidation of the federal pretrial risk assessment instrument (PTRA): Testing the PTRA for predictive biases. SSRN Electronic Journal. doi:https://doi.org/10.2139/ssrn.3191533
- Cohen, T. H., Lowenkamp, C. T., & Hicks, W. E. (2018). Revalidating the federal pretrial risk assessment instrument (PTRA): A research summary. Federal Probation, 82(2), 23–29.
- Danner, M. J. E., VanNostrand, M., & Spruance, L. M. (2016). Race and gender neutral pretrial risk assessment, release recommendations, and supervision: VPRAI and Praxis revised. Tanglewood: Luminosity Inc.
- DeMichele, M., Baumgartner, P., Wenger, M., Barrick, K., Comfort, M., & Misra, S. (2018). The public safety assessment: A re-validation and assessment of predictive utility and differential prediction by race and gender in Kentucky. SSRN Electronic Journal. doi:https://doi.org/10.2139/ssrn.3168452
- Desmarais, S. L., Johnson, K. L., & Singh, J. P. (2016). Performance of recidivism risk assessment instruments in U.S. correctional settings. Psychological Services, 13(3), 206–222. doi:https://doi.org/10.1037/ser0000075
- Dirick, L., Claeskens, G., & Baesens, B. (2017). Time to default in credit scoring using survival analysis: A benchmark study. Journal of the Operational Research Society, 68(6), 652–665. doi:https://doi.org/10.1057/s41274-016-0128-9
- Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1), eaao5580. doi:https://doi.org/10.1126/sciadv.aao5580
- Eckhouse, L., Lum, K., Conti-Cook, C., & Ciccolini, J. (2019). Layers of bias: A unified approach for understanding problems with risk assessment. Criminal Justice and Behavior, 46(2), 185–209. doi:https://doi.org/10.1177/0093854818811379
- EPIC. (2020). Algorithms in the criminal justice system: Pre-trial risk assessment tools. Electronic Privacy Information Center. Washington, D.C. Retrieved from https://epic.org/algorithmic-transparency/crim-justice/
- Garrett, B. L., & Monahan, J. (2020). Judging risk. California Law Review, 108, 439–493.
- Goel, S., Shroff, R., Skeem, J. L., & Slobogin, C. (2018). The accuracy, equity, and jurisprudence of criminal risk assessment. SSRN Electronic Journal. doi:https://doi.org/10.2139/ssrn.3306723
- Gottfredson, S. D., & Moriarty, L. J. (2006). Statistical risk assessment: Old problems and new applications. Crime & Delinquency, 52(1), 178–200. doi:https://doi.org/10.1177/0011128705281748
- Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. New York: John Wiley & Sons, Inc.
- Hand, D. J. (2009). Measuring classifier performance: A coherent alternative to the area under the ROC curve. Machine Learning, 77(1), 103–123. doi:https://doi.org/10.1007/s10994-009-5119-5
- Hanley, J. A., & McNeil, B. J. (1982). The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology, 143(1), 29–36. doi:https://doi.org/10.1148/radiology.143.1.7063747
- Hannah-Moffat, K. (2019). Algorithmic risk governance: Big data analytics, race and information activism in criminal justice debates. Theoretical Criminology, 23(4), 453–470. doi:https://doi.org/10.1177/1362480618763582
- Hanson, R. K., & Morton-Bourgon, K. E. (2009). The accuracy of recidivism risk assessments for sexual offenders: A meta-analysis of 118 prediction studies. Psychological Assessment, 21(1), 1–21. doi:https://doi.org/10.1037/a0014421
- Harris, H. M., Goss, J., & Gumbs, A. (2019). Pretrial risk assessment in California. San Francisco: Public Policy Institute of California.
- King, G., & Zeng, L. (2001). Logistic regression in rare events data. Political Analysis, 9(2), 137–163. doi:https://doi.org/10.1093/oxfordjournals.pan.a004868
- Koepke, J. L., & Robinson, D. G. (2018). Danger ahead: Risk assessment and the future of bail reform. Washington Law Review, 93, 1725–1807.
- Lantz, B. (2015). Machine learning with R: Discover how to build machine learning algorithms, prepare data, and dig deep into data prediction techniques with R (2nd ed.). Community experience distilled. Birmingham; Mumbai: Packt Publishing.
- Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, transparent, and accountable algorithmic decision-making processes: The premise, the proposed solutions, and the open challenges. Philosophy & Technology, 31(4), 611–627. doi:https://doi.org/10.1007/s13347-017-0279-x
- Leushuis, E., van der Steeg, J. W., Steures, P., Bossuyt, P. M., Eijkemans, M. C., van der Veen, F., … Hompes, P. A. (2009). Prediction models in reproductive medicine: A critical appraisal. Human Reproduction Update, 15(5), 537–552. doi:https://doi.org/10.1093/humupd/dmp013
- Lin, Z., Jung, J., Goel, S., & Skeem, J. (2020). The limits of human predictions of recidivism. Science Advances, 6(7), eaaz0652. doi:https://doi.org/10.1126/sciadv.aaz0652
- Marqués, A. I., García, V., & Sánchez, J. S. (2012). Two-level classifier ensembles for credit risk assessment. Expert Systems with Applications, 39(12), 10916–10922. doi:https://doi.org/10.1016/j.eswa.2012.03.033
- Mayson, S. (2018). Bias in, bias out. Yale Law Journal, 128(8), 2218–2300.
- McKay, C. (2020). Predicting risk in criminal procedure: Actuarial tools, algorithms, AI and judicial decision-making. Current Issues in Criminal Justice, 32(1), 22–39. doi:https://doi.org/10.1080/10345329.2019.1658694
- Meehl, P. E. (1954). Clinical versus statistical prediction: A theoretical analysis and a review of the evidence. Minneapolis: University of Minnesota Press. doi:https://doi.org/10.1037/11281-000
- Monahan, J., & Skeem, J. L. (2016). Risk assessment in criminal sentencing. Annual Review of Clinical Psychology, 12(1), 489–513. doi:https://doi.org/10.1146/annurev-clinpsy-021815-092945
- Northpointe. (2019). Practitioner’s guide to COMPAS core. Traverse City: Northpointe Inc.
- Padhy, S., Takkar, B., Chawla, R., & Kumar, A. (2019). Artificial intelligence in diabetic retinopathy: A natural step to the future. Indian Journal of Ophthalmology, 67(7), 1004–1009. doi:https://doi.org/10.4103/ijo.IJO_1989_18
- Percival, G. (2016). Smart on crime: The struggle to build a better American penal system. Boca Raton, FL: CRC Press.
- Podkopacz, M. R. (2018). Hennepin County 2015 adult pretrial scale: Revalidation. Minneapolis: Fourth Judicial District of Minnesota-Hennepin County Research Division.
- Podkopacz, M. R., & Eckberg, D. (2006). Fourth judicial district pretrial evaluation: Scale validation study. Minneapolis: Fourth Judicial District of Minnesota-Hennepin County Research Division.
- Pretrial Justice Institute and JFA Institute. (2012, October 19). The Colorado Pretrial Assessment Tool (CPAT): Revised report. Baltimore: Pretrial Justice Institute and JFA Institute.
- Rothschild-Elyassi, G., Koehler, J., & Simon, J. (2019). Actuarial justice. In M. Deflem (Ed.), The handbook of social control (pp. 194–206). Chichester: John Wiley & Sons, Ltd. doi:https://doi.org/10.1002/9781119372394.ch14
- Saito, T., & Rehmsmeier, M. (2015). The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets. PLoS One, 10(3), 1–21. doi:https://doi.org/10.1371/journal.pone.0118432
- Silberzahn, R., Uhlmann, E. L., Martin, D. P., Anselmi, P., Aust, F., Awtrey, E., … Nosek, B. A. (2018). Many analysts, one data set: Making transparent how variations in analytic choices affect results. Advances in Methods and Practices in Psychological Science, 1(3), 337–356. doi:https://doi.org/10.1177/2515245917747646
- Singh, J. P., Desmarais, S. L., & Van Dorn, R. A. (2013). Measurement of predictive validity in violence risk assessment studies: A second-order systematic review. Behavioral Sciences & the Law, 31(1), 55–73. doi:https://doi.org/10.1002/bsl.2053
- Singh, J. P., Grann, M., & Fazel, S. (2011). A comparative study of violence risk assessment tools: A systematic review and metaregression analysis of 68 studies involving 25,980 participants. Clinical Psychology Review, 31(3), 499–513. doi:https://doi.org/10.1016/j.cpr.2010.11.009
- Stevenson, M., & Doleac, J. L. (2019). Algorithmic risk assessment in the hands of humans. SSRN Electronic Journal. doi:https://doi.org/10.2139/ssrn.3489440
- Steyerberg, E. W., Vickers, A. J., Cook, N. R., Gerds, T., Gonen, M., Obuchowski, N., … Kattan, M. W. (2010). Assessing the performance of prediction models: A framework for traditional and novel measures. Epidemiology, 21(1), 128–138. doi:https://doi.org/10.1097/EDE.0b013e3181c30fb2
- Terranova, V. A., & Ward, K. (2020). Colorado Pretrial Assessment Tool validation study: Final report. Denver: Department of Criminology and Criminal Justice, University of Northern Colorado.
- Terranova, V. A., Ward, K., Slepicka, J., & Azari, A. M. (2020). Perceptions of pretrial risk assessment: An examination across role in the initial pretrial release decision. Criminal Justice and Behavior, 47(8), 927–942. doi:https://doi.org/10.1177/0093854820932204
- Turner, J. (2019). Robot rules: Regulating artificial intelligence. Cham: Springer International Publishing. doi:https://doi.org/10.1007/978-3-319-96235-1
- van der Voort, H. G., Klievink, A. J., Arnaboldi, M., & Meijer, A. J. (2019). Rationality and politics of algorithms: Will the promise of big data survive the dynamics of public decision making? Government Information Quarterly, 36(1), 27–38. doi:https://doi.org/10.1016/j.giq.2018.10.011
- VanNostrand, M., & Keebler, G. (2009). Pretrial risk assessment in the federal court. Washington, DC: US Department of Justice Office of Federal Detention Trustee.
- Veale, M., Van Kleek, M., & Binns, R. (2018). Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ’18, Montreal, QC, Canada (pp. 1–14). ACM Press. doi:https://doi.org/10.1145/3173574.3174014
- Wasserstein, R. L., & Lazar, N. A. (2016). The ASA statement on p-values: Context, process, and purpose. The American Statistician, 70(2), 129–133. doi:https://doi.org/10.1080/00031305.2016.1154108
- Watt, J., Borhani, R., & Katsaggelos, A. K. (2020). Machine learning refined: Foundations, algorithms, and applications (2nd ed.). New York: Cambridge University Press.
- Završnik, A. (2019). Algorithmic justice: Algorithms and big data in criminal justice settings. European Journal of Criminology. doi:https://doi.org/10.1177/1477370819876762
- Zerilli, J., Knott, A., Maclaurin, J., & Gavaghan, C. (2019). Algorithmic decision-making and the control problem. Minds and Machines, 29(4), 555–578. doi:https://doi.org/10.1007/s11023-019-09513-7
- Zweig, K. A., Wenzelburger, G., & Krafft, T. D. (2018). On chances and risks of security related algorithmic decision making systems. European Journal for Security Research, 3(2), 181–203. doi:https://doi.org/10.1007/s41125-018-0031-2