Literature Reviews

Algorithmic bias: review, synthesis, and future research directions

Pages 388-409 | Received 25 Apr 2020, Accepted 29 Apr 2021, Published online: 06 Jun 2021

References

  • Ajunwa, I. (2020). The paradox of automation as anti-bias intervention. Cardozo Law Review, 41(5), 1671. https://cardozolawreview.com/the-paradox-of-automation-as-anti-bias-intervention/
  • Almada, M. (2019). Human intervention in automated decision-making: Toward the construction of contestable systems. Proceedings of the seventeenth international conference on artificial intelligence and law, Montreal, QC, Canada.
  • Amershi, S., Cakmak, M., Knox, W. B., & Kulesza, T. (2014). Power to the people: The role of humans in interactive machine learning. AI Magazine, 35(4), 105–120. https://doi.org/10.1609/aimag.v35i4.2513
  • Arnott, D., & Gao, S. (2019). Behavioral economics for decision support systems researchers. Decision Support Systems, 122, 113063. https://doi.org/10.1016/j.dss.2019.05.003
  • Arrighetti, A., Bachmann, R., & Deakin, S. (1997). Contract law, social norms and inter-firm cooperation. Cambridge Journal of Economics, 21(2), 171–195. https://doi.org/10.1093/oxfordjournals.cje.a013665
  • Babuta, A., & Oswald, M. (2019). Data analytics and algorithmic bias in policing. Royal United Services Institute for Defence and Security Studies.
  • Barlas, P., Kleanthous, S., Kyriakou, K., & Otterbacher, J. (2019). What makes an image tagger fair? Proceedings of the 27th ACM conference on user modeling, adaptation and personalization, Larnaca, Cyprus.
  • Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671–732. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2477899
  • Bauer, N. M. (2015). Emotional, sensitive, and unfit for office? Gender stereotype activation and support female candidates. Political Psychology, 36(6), 691–708. https://doi.org/10.1111/pops.12186
  • Binns, R. (2018). What can political philosophy teach us about algorithmic fairness? IEEE Security & Privacy, 16(3), 73–80. https://doi.org/10.1109/MSP.2018.2701147
  • Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. (2018). ‘It’s reducing a human being to a percentage’: Perceptions of justice in algorithmic decisions. Proceedings of the 2018 CHI conference on human factors in computing systems, Montreal, QC, Canada.
  • Blasi, A., Kurtines, W., & Gewirtz, J. (1994). Moral identity: Its role in moral functioning. Fundamental Research in Moral Development, 2, 168–179.
  • Brkan, M. (2019). Do algorithms rule the world? Algorithmic decision-making and data protection in the framework of the GDPR and beyond. International Journal of Law and Information Technology, 27(2), 91–121. https://doi.org/10.1093/ijlit/eay017
  • Brown, A., Chouldechova, A., Putnam-Hornstein, E., Tobin, A., & Vaithianathan, R. (2019). Toward algorithmic accountability in public services: A qualitative study of affected community perspectives on algorithmic decision-making in child welfare services. Proceedings of the 2019 CHI conference on human factors in computing systems, Glasgow, UK.
  • Bulutlar, F., & Öz, E. Ü. (2009). The effects of ethical climates on bullying behaviour in the workplace. Journal of Business Ethics, 86(3), 273–295. https://doi.org/10.1007/s10551-008-9847-4
  • Chen, H., Chiang, R. H., & Storey, V. C. (2012). Business intelligence and analytics: From big data to big impact. MIS Quarterly, 36(4), 1165–1188. https://doi.org/10.2307/41703503
  • Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047
  • Coates, D., & Martin, A. (2019). An instrument to evaluate the maturity of bias governance capability in artificial intelligence projects. IBM Journal of Research and Development, 63(4/5), 7:1–7:15. https://doi.org/10.1147/JRD.2019.2915062
  • Colquitt, J. A. (2001). On the dimensionality of organizational justice: A construct validation of a measure. Journal of Applied Psychology, 86(3), 386. https://doi.org/10.1037/0021-9010.86.3.386
  • Colquitt, J. A., & Rodell, J. B. (2015). Measuring justice and fairness. In The Oxford handbook of justice in the workplace (Vol. 1, pp. 187–202). Oxford University Press.
  • Corley, K. G., & Gioia, D. A. (2011). Building theory about theory building: What constitutes a theoretical contribution? Academy of Management Review, 36(1), 12–32. https://doi.org/10.5465/amr.2009.0486
  • Danks, D., & London, A. J. (2017). Algorithmic bias in autonomous systems. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI), Melbourne, Australia.
  • Dickinger, A., Arami, M., & Meyer, D. (2008). The role of perceived enjoyment and social norm in the adoption of technology with network externalities. European Journal of Information Systems, 17(1), 4–11. https://doi.org/10.1057/palgrave.ejis.3000726
  • Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64(3), 1155–1170. https://doi.org/10.1287/mnsc.2016.2643
  • Dodge, J., Liao, Q. V., Zhang, Y., Bellamy, R. K., & Dugan, C. (2019). Explaining models: An empirical study of how explanations impact fairness judgment. Proceedings of the 24th international conference on intelligent user interfaces, Marina del Rey, CA, USA.
  • Dohi, I., & Fooladi, M. M. (2008). Individualism as a solution for gender equality in Japanese society in contrast to the social structure in the United States. Forum on Public Policy.
  • Domanski, R. (2019). The AI pandorica: Linking ethically-challenged technical outputs to prospective policy approaches. Proceedings of the 20th annual international conference on digital government research, Dubai, UAE.
  • Draude, C., Klumbyte, G., Lücking, P., & Treusch, P. (2019). Situated algorithms: A sociotechnical systemic approach to bias. Online Information Review, 44(2), 325–342. https://doi.org/10.1108/OIR-10-2018-0332
  • Ebrahimi, S., & Hassanein, K. (2019). Empowering users to detect data analytics discriminatory recommendations. Proceedings of the 40th International Conference on Information Systems, Munich, Germany.
  • Edwards, L., & Veale, M. (2017). Slave to the algorithm? Why a ‘right to an explanation’ is probably not the remedy you are looking for. Duke Law & Technology Review, 16(1), 18–84. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2972855
  • Ensign, D., Friedler, S. A., Neville, S., Scheidegger, C., & Venkatasubramanian, S. (2018). Runaway feedback loops in predictive policing. Conference on fairness, accountability and transparency, New York City, NY, USA.
  • Favaretto, M., De Clercq, E., & Elger, B. S. (2019). Big Data and discrimination: Perils, promises and solutions. A systematic review. Journal of Big Data, 6(1), 12. https://doi.org/10.1186/s40537-019-0177-4
  • Fiske, S. T. (1998). Stereotyping, prejudice, and discrimination. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), The handbook of social psychology (4th ed., Vol. 2, pp. 357–411). McGraw-Hill. https://psycnet.apa.org/record/1998-07091-025
  • Fiske, S. T., Cuddy, A. J., Glick, P., & Xu, J. (2002). A model of (often mixed) stereotype content: Competence and warmth respectively follow from perceived status and competition. Journal of Personality and Social Psychology, 82(6), 878. https://doi.org/10.1037/0022-3514.82.6.878
  • Fuster, A., Goldsmith-Pinkham, P., Ramadorai, T., & Walther, A. (2018). Predictably unequal? The effects of machine learning on credit markets. Social Science Research Network, 94. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3072038
  • Gal, U., Jensen, T. B., & Stein, M.-K. (2017). People analytics in the age of big data: An agenda for IS research. Proceedings of the 38th International Conference on Information Systems, Seoul, South Korea.
  • Garcia, M. (2016). Racist in the machine: The disturbing implications of algorithmic bias. World Policy Journal, 33(4), 111–117. https://doi.org/10.1215/07402775-3813015
  • Ghasemaghaei, M., & Hassanein, K. (2019). Dynamic model of online information quality perceptions and impacts: A literature review. Behaviour & Information Technology, 38(3), 302–317. https://doi.org/10.1080/0144929X.2018.1531928
  • Glick, P. (2006). Ambivalent sexism, power distance, and gender inequality across cultures. In S. Guimond (Ed.), Social comparison and social psychology: Understanding cognition, intergroup relations, and culture (p. 283). Cambridge University Press. https://doi.org/10.1017/CBO9780511584329.015
  • Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660. https://doi.org/10.5465/annals.2018.0057
  • Greenberg, J. (1990). Organizational justice: Yesterday, today, and tomorrow. Journal of Management, 16(2), 399–432. https://doi.org/10.1177/014920639001600208
  • Grgić-Hlača, N., Engel, C., & Gummadi, K. P. (2019). Human decision making with machine assistance: An experiment on bailing and jailing. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–25. https://doi.org/10.1145/3359280
  • Grgic-Hlaca, N., Redmiles, E. M., Gummadi, K. P., & Weller, A. (2018). Human perceptions of fairness in algorithmic decision making: A case study of criminal risk prediction. Proceedings of the 2018 World Wide Web conference, Lyon, France.
  • Grgić-Hlača, N., Zafar, M. B., Gummadi, K. P., & Weller, A. (2018). Beyond distributive fairness in algorithmic decision making: Feature selection for procedurally fair learning. Thirty-Second AAAI conference on artificial intelligence, New Orleans, LA, USA.
  • Haas, C. (2019). The price of fairness: A framework to explore trade-offs in algorithmic fairness. Proceedings of the 40th International Conference on Information Systems, Munich, Germany.
  • Hacker, P. (2018). Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law. Common Market Law Review, 55(4), 1143–1185. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3164973
  • Hamilton, I. A. (2018). Why it’s totally unsurprising that Amazon’s recruitment AI was biased against women. Business Insider. Retrieved November 11 from https://www.businessinsider.com/amazon-ai-biased-against-women-no-surprise-sandra-wachter-2018-10
  • Harper, F. M., Xu, F., Kaur, H., Condiff, K., Chang, S., & Terveen, L. (2015). Putting users in control of their recommendations. Proceedings of the 9th ACM conference on recommender systems, Vienna, Austria.
  • Heaven, W. D. (2020, August 5). The UK is dropping an immigration algorithm that critics say is racist. MIT Technology Review. https://www.technologyreview.com/2020/08/05/1006034/the-uk-is-dropping-an-immigration-algorithm-that-critics-say-is-racist/
  • Holstein, K., Wortman Vaughan, J., Daumé III, H., Dudik, M., & Wallach, H. (2019). Improving fairness in machine learning systems: What do industry practitioners need? Proceedings of the 2019 CHI conference on human factors in computing systems, Glasgow, UK.
  • Huq, A. Z. (2018). Racial equity in algorithmic criminal justice. Duke Law Journal, 68(6), 1043. https://scholarship.law.duke.edu/dlj/vol68/iss6/1/
  • Jussupow, E., Benbasat, I., & Heinzl, A. (2020). Why are we averse towards algorithms? A comprehensive literature review on algorithm aversion. Proceedings of the 28th European Conference on Information Systems, virtual.
  • Kahneman, D. (2011). Thinking, fast and slow. Macmillan.
  • Katyal, S. K. (2019). Private accountability in the age of artificial intelligence. UCLA Law Review, 66, 54. https://www.uclalawreview.org/private-accountability-age-algorithm/
  • Khalil, A., Ahmed, S. G., Khattak, A. M., & Al-Qirim, N. (2020). Investigating bias in facial analysis systems: A systematic review. IEEE Access, 8, 130751–130761. https://doi.org/10.1109/ACCESS.2020.3006051
  • Kim, P. T. (2017). Auditing algorithms for discrimination. University of Pennsylvania Law Review Online, 166, 189. https://www.pennlawreview.com/2017/12/12/auditing-algorithms-for-discrimination/
  • Kinzig, A. P., Ehrlich, P. R., Alston, L. J., Arrow, K., Barrett, S., Buchman, T. G., Daily, G. C., Levin, B., Levin, S., & Oppenheimer, M. (2013). Social norms and global environmental challenges: The complex interaction of behaviors, values, and policy. BioScience, 63(3), 164–175. https://doi.org/10.1525/bio.2013.63.3.5
  • Koene, A. (2017). Algorithmic bias: Addressing growing concerns [leading edge]. IEEE Technology and Society Magazine, 36(2), 31–32. https://doi.org/10.1109/MTS.2017.2697080
  • Koene, A., Dowthwaite, L., & Seth, S. (2018). IEEE P7003™ standard for algorithmic bias considerations: Work in progress paper. Proceedings of the international workshop on software fairness, Gothenburg, Sweden.
  • Kroll, J. A., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2016). Accountable algorithms. University of Pennsylvania Law Review, 165(3), 633. https://scholarship.law.upenn.edu/penn_law_review/vol165/iss3/3/
  • Lambrecht, A., & Tucker, C. (2019). Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads. Management Science, 65(7), 2966–2981. https://doi.org/10.1287/mnsc.2018.3093
  • Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–174. https://doi.org/10.2307/2529310
  • Lee, M. (2018a). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1), 2053951718756684. https://doi.org/10.1177/2053951718756684
  • Lee, M., Jain, A., Cha, H., Ojha, S., & Kusbit, D. (2019). Procedural justice in algorithmic fairness: Leveraging transparency and outcome control for fair algorithmic mediation. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–26.
  • Lee, M., Kim, J., & Lizarondo, L. (2017). A human-centered approach to algorithmic services: Considerations for fair and motivating smart community service management that allocates donations to non-profit organizations. Proceedings of the 2017 CHI conference on human factors in computing systems, Denver, CO, USA.
  • Lee, M., Kusbit, D., Kahng, A., Kim, J., Yuan, X., Chan, A., See, D., Noothigattu, R., Lee, S., & Psomas, A. (2019). WeBuildAI: Participatory framework for algorithmic governance. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), Article 181, 1–35. https://doi.org/10.1145/3359283
  • Lee, N. (2018b). Detecting racial bias in algorithms and machine learning. Journal of Information, Communication and Ethics in Society, 16(3), 252–260. https://doi.org/10.1108/JICES-06-2018-0056
  • Lin, K., Sonboli, N., Mobasher, B., & Burke, R. (2019). Crank up the volume: Preference bias amplification in collaborative recommendation. RMSE workshop (in conjunction with the 13th ACM conference on Recommender Systems (RecSys)), Copenhagen, Denmark.
  • Lind, E. A., Lissak, R. I., & Conlon, D. E. (1983). Decision control and process control effects on procedural fairness judgments 1. Journal of Applied Social Psychology, 13(4), 338–350. https://doi.org/10.1111/j.1559-1816.1983.tb01744.x
  • Loi, M., Heitz, C., Ferrario, A., Schmid, A., & Christen, M. (2019). Towards an ethical code for data-based business. 2019 6th Swiss Conference on Data Science (SDS), Bern, Switzerland.
  • Lysaght, T., Lim, H. Y., Xafis, V., & Ngiam, K. Y. (2019). AI-assisted decision-making in healthcare. Asian Bioethics Review, 11(3), 299–314. https://doi.org/10.1007/s41649-019-00096-0
  • Martin, K. (2019). Designing ethical algorithms. MIS Quarterly Executive, 18(2), Article 5. https://aisel.aisnet.org/misqe/vol18/iss2/5/
  • McFarlin, D. B., & Sweeney, P. D. (1992). Distributive and procedural justice as predictors of satisfaction with personal and organizational outcomes. Academy of Management Journal, 35(3), 626–637. https://doi.org/10.2307/256489
  • McKnight, D. H., Carter, M., Thatcher, J. B., & Clay, P. F. (2011). Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems, 2(2), Article 12. https://doi.org/10.1145/1985347.1985353
  • Mehrabian, A., & Russell, J. A. (1974). An approach to environmental psychology. MIT Press.
  • Mikalef, P., Pappas, I. O., Krogstie, J., & Giannakos, M. (2018). Big data analytics capabilities: A systematic literature review and research agenda. Information Systems and e-Business Management, 16(3), 547–578. https://doi.org/10.1007/s10257-017-0362-y
  • Mulligan, D. K., Kroll, J. A., Kohli, N., & Wong, R. Y. (2019). This thing called fairness: Disciplinary confusion realizing a value in technology. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–36. https://doi.org/10.1145/3359221
  • O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.
  • Pereira, C., Vala, J., & Costa‐Lopes, R. (2010). From prejudice to discrimination: The legitimizing role of perceived threat in discrimination against immigrants. European Journal of Social Psychology, 40(7), 1231–1250. https://doi.org/10.1002/ejsp.718
  • Perez Vallejos, E., Koene, A., Portillo, V., Dowthwaite, L., & Cano, M. (2017). Young people’s policy recommendations on algorithm fairness. Proceedings of the 2017 ACM on Web Science Conference, New York City, NY, USA.
  • Petter, S., DeLone, W., & McLean, E. R. (2013). Information systems success: The quest for the independent variables. Journal of Management Information Systems, 29(4), 7–62. https://doi.org/10.2753/MIS0742-1222290401
  • Picoto, W. N., Bélanger, F., & Palma-dos-Reis, A. (2014). An organizational perspective on m-business: Usage factors and value determination. European Journal of Information Systems, 23(5), 571–592. https://doi.org/10.1057/ejis.2014.15
  • PYMNTS. (2018). In the age of algorithms, will banks ever graduate to true AI? Retrieved April 5 from https://www.pymnts.com/news/artificial-intelligence/2018/bank-technology-true-ai-machine-learning/
  • Rader, E., Cotter, K., & Cho, J. (2018). Explanations as mechanisms for supporting algorithmic transparency. Proceedings of the 2018 CHI conference on human factors in computing systems, Montreal, QC, Canada.
  • Ransbotham, S., Fichman, R. G., Gopal, R., & Gupta, A. (2016). Special section introduction—Ubiquitous IT and digital vulnerabilities. Information Systems Research, 27(4), 834–847. https://doi.org/10.1287/isre.2016.0683
  • Rantavuo, H. (2019). Designing for intelligence: User-centred design in the age of algorithms. Proceedings of the 5th International ACM in-cooperation HCI and UX conference, Jakarta, Surabaya, and Bali, Indonesia.
  • Rawls, J. (2001). Justice as fairness: A restatement. Harvard University Press.
  • Rhue, L. (2019). Beauty’s in the AI of the beholder: How AI anchors subjective and objective predictions. Proceedings of the 40th International Conference on Information Systems, Munich, Germany.
  • Robert, L., Pierce, C., Morris, L., Kim, S., & Alahmad, R. (2020). Designing fair AI for managing employees in organizations: A review. Human-Computer Interaction, 35(5–6), 545–575. https://doi.org/10.1080/07370024.2020.1735391
  • Rossi, F. (2019). Building trust in artificial intelligence. Journal of International Affairs, 72(1), 127–134. https://www.jstor.org/stable/26588348
  • Rudman, L. A., & Kilianski, S. E. (2000). Implicit and explicit attitudes toward female authority. Personality & Social Psychology Bulletin, 26(11), 1315–1328. https://doi.org/10.1177/0146167200263001
  • Saltz, J. S., & Dewar, N. (2019). Data science ethical considerations: A systematic literature review and proposed project framework. Ethics and Information Technology, 21(3), 197–208. https://doi.org/10.1007/s10676-019-09502-5
  • Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2014). Auditing algorithms: Research methods for detecting discrimination on internet platforms. Presented to Data and Discrimination: Converting Critical Concerns into Productive Inquiry, A Preconference at the 64th Annual Meeting of the International Communication Association, Seattle, WA, USA. https://www.semanticscholar.org/paper/Auditing-Algorithms-%3A-Research-Methods-for-on-Sandvig-Hamilton/b7227cbd34766655dea10d0437ab10df3a127396
  • Saxena, N.-A., Huang, K., DeFilippis, E., Radanovic, G., Parkes, D. C., & Liu, Y. (2019). How do fairness definitions fare? Examining public attitudes towards algorithmic definitions of fairness. Proceedings of the 2019 AAAI/ACM conference on AI, ethics, and society, Honolulu, HI, USA.
  • Schwartz, S. H. (1999). A theory of cultural values and some implications for work. Applied Psychology: An International Review, 48(1), 23–47. https://doi.org/10.1111/j.1464-0597.1999.tb00047.x
  • Shaw, J., Rudzicz, F., Jamieson, T., & Goldfarb, A. (2019). Artificial intelligence and the implementation challenge. Journal of Medical Internet Research, 21(7), e13659. https://doi.org/10.2196/13659
  • Shen, K. N., & Khalifa, M. (2012). System design effects on online impulse buying. Internet Research, 22(4), 396–425. https://doi.org/10.1108/10662241211250962
  • Shrestha, Y. R., & Yang, Y. (2019). Fairness in algorithmic decision-making: Applications in multi-winner voting, machine learning, and recommender systems. Algorithms, 12(9), 199. https://doi.org/10.3390/a12090199
  • Silva, S., & Kenney, M. (2019). Algorithms, platforms, and ethnic bias. Communications of the ACM, 62(11), 37–39. https://doi.org/10.1145/3318157
  • Skarlicki, D. P., & Folger, R. (1997). Retaliation in the workplace: The roles of distributive, procedural, and interactional justice. Journal of Applied Psychology, 82(3), 434. https://doi.org/10.1037/0021-9010.82.3.434
  • Someh, I., Davern, M., Breidbach, C. F., & Shanks, G. (2019). Ethical issues in big data analytics: A stakeholder perspective. Communications of the Association for Information Systems, 44(1), 34. https://doi.org/10.17705/1CAIS.04434
  • Springer, A., & Whittaker, S. (2019). Making transparency clear: The dual importance of explainability and auditability. IUI Workshops.
  • Swim, J. K., Aikin, K. J., Hall, W. S., & Hunter, B. A. (1995). Sexism and racism: Old-fashioned and modern prejudices. Journal of Personality and Social Psychology, 68(2), 199. https://doi.org/10.1037/0022-3514.68.2.199
  • Thorbecke, C. (2019). New York probing Apple Card for alleged gender discrimination after viral tweet. ABC News. Retrieved February 22, 2020, from https://abcnews.go.com/US/york-probing-apple-card-alleged-gender-discrimination-viral/story?id=66910300
  • Veale, M., Van Kleek, M., & Binns, R. (2018). Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. Proceedings of the 2018 CHI conference on human factors in computing systems, Montreal, QC, Canada.
  • Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–178. https://doi.org/10.2307/41410412
  • Verma, S., & Rubin, J. (2018). Fairness definitions explained. 2018 IEEE/ACM international workshop on software fairness (FairWare), Gothenburg, Sweden.
  • Webb, H., Koene, A., Patel, M., & Vallejos, E. P. (2018). Multi-stakeholder dialogue for policy recommendations on algorithmic fairness. Proceedings of the 9th international conference on social media and society.
  • Webster, J., & Watson, R. T. (2002). Analyzing the past to prepare for the future: Writing a literature review. MIS Quarterly, 26(2), xiii–xxiii. https://www.jstor.org/stable/4132319
  • Wells, D., & Spinoni, E. (2019). Western Europe Big Data and analytics software forecast, 2018–2023. International Data Corporation. Retrieved April 5 from https://www.idc.com/getdoc.jsp?containerId=EUR145601519
  • Wieringa, M. (2020). What to account for when accounting for algorithms: A systematic literature review on algorithmic accountability. Proceedings of the 2020 conference on fairness, accountability, and transparency, Barcelona, Spain.
  • Williams, B. A., Brooks, C. F., & Shmargad, Y. (2018). How algorithms discriminate based on data they lack: Challenges, solutions, and policy implications. Journal of Information Policy, 8, 78–115. https://doi.org/10.5325/jinfopoli.8.2018.0078
  • Wong, P.-H. (2019). Democratizing algorithmic fairness. Philosophy & Technology, 33, 225–244. https://doi.org/10.1007/s13347-019-00355-w
  • Woodruff, A., Fox, S. E., Rousso-Schindler, S., & Warshaw, J. (2018). A qualitative exploration of perceptions of algorithmic fairness. Proceedings of the 2018 CHI conference on human factors in computing systems, Montreal, QC, Canada.
  • Yapo, A., & Weiss, J. (2018). Ethical implications of bias in machine learning. Proceedings of the 51st Hawaii International Conference on System Sciences, Waikoloa, HI, USA.
