REFERENCES
- Akhtar, A. 2019. New York is investigating UnitedHealth’s use of a medical algorithm that steered black patients away from getting higher-quality care. https://www.businessinsider.com/an-algorithm-treatment-to-white-patients-over-sicker-black-ones-2019-10 (accessed September 1, 2020).
- Austen-Smith, D., and J. S. Banks. 1996. Information aggregation, rationality, and the Condorcet jury theorem. American Political Science Review 90 (1):34–45.
- Beam, A. L., and I. S. Kohane. 2018. Big data and machine learning in health care. JAMA 319 (13):1317–1318.
- Brown, N., and T. Sandholm. 2019. Superhuman AI for multiplayer poker. Science 365 (6456):885–890.
- Corbett-Davies, S., E. Pierson, A. Feller, and S. Goel. 2016. A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear. http://www.washingtonpost.com/news/monkey-cage/wp/2016/10/17/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas/ (accessed September 1, 2020).
- Dastin, J. 2018. Amazon scraps secret AI recruiting tool that showed bias against women. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G (accessed September 1, 2020).
- Gulshan, V., L. Peng, M. Coram, et al. 2016. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 316 (22):2402–2410.
- Kumar, A., R. C. Aikens, J. Hom, et al. 2020. OrderRex clinical user testing: A randomized trial of recommender system decision support on simulated cases. Journal of the American Medical Informatics Association 27 (12):1850–1859.
- Lauret, J. 2019. Amazon’s sexist AI recruiting tool: How did it go so wrong? https://becominghuman.ai/amazons-sexist-ai-recruiting-tool-how-did-it-go-so-wrong-e3d14816d98e (accessed September 1, 2020).
- Obermeyer, Z., B. Powers, C. Vogeli, and S. Mullainathan. 2019. Dissecting racial bias in an algorithm used to manage the health of populations. Science 366 (6464):447–453.
- Schwartz, O. 2019. In 2016, Microsoft’s racist chatbot revealed the dangers of online conversation. https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/in-2016-microsofts-racist-chatbot-revealed-the-dangers-of-online-conversation (accessed September 1, 2020).
- Silver, D., T. Hubert, J. Schrittwieser, et al. 2018. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362 (6419):1140–1144.
- Stuart-Ulin, C. R. 2018. Microsoft’s Zo chatbot is a politically correct version of her sister Tay—except she’s much, much worse. https://qz.com/1340990/microsofts-politically-correct-chat-bot-is-even-worse-than-its-racist-one/ (accessed September 1, 2020).
- Wang, J. K., J. Hom, S. Balasubramanian, et al. 2018. An evaluation of clinical order patterns machine-learned from clinician cohorts stratified by patient mortality outcomes. Journal of Biomedical Informatics 86:109–119.