Towards Responsible AI: A Design Space Exploration of Human-Centered Artificial Intelligence User Interfaces to Investigate Fairness

Pages 1762-1788 | Received 05 Oct 2021, Accepted 11 Apr 2022, Published online: 04 May 2022
