References
- Ananny, M. (2016). Toward an ethics of algorithms: Convening, observation, probability, and timeliness. Science, Technology, & Human Values, 41(1), 93–117. doi:10.1177/0162243915606523.
- Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. doi:10.1177/1461444816676645.
- Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
- Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., … Rahwan, I. (2018). The Moral Machine experiment. Nature, 563(7729), 59–64. doi:10.1038/s41586-018-0637-6.
- Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.
- Bruun Jensen, C. (2010). Asymmetries of knowledge: Mediated ethnography and ICT for development. Methodological Innovations Online, 5(1), 72–85. doi:10.4256/mio.2010.0011. Retrieved from https://journals.sagepub.com/doi/abs/10.4256/mio.2010.0011
- Buolamwini, J. A. (2017). Gender shades: Intersectional phenotypic and demographic evaluation of face datasets and gender classifiers (Master's thesis). Massachusetts Institute of Technology.
- Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 1–12. doi:10.1177/2053951715622512.
- Callon, M., & Law, J. (1997). After the individual in society: Lessons on collectivity from science, technology and society. The Canadian Journal of Sociology / Cahiers canadiens de sociologie, 22(2), 165–182. doi:10.2307/3341747. Retrieved from http://www.jstor.org/stable/3341747
- Charmaz, K. (2014). Constructing grounded theory. Thousand Oaks, CA: Sage.
- Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. doi:10.1089/big.2016.0047. Retrieved from https://www.liebertpub.com/doi/pdf/10.1089/big.2016.0047
- Christin, A. (2017). Algorithms in practice: Comparing web journalism and criminal justice. Big Data & Society, 4(2), 2053951717718855.
- Crawford, K. (2016). Artificial intelligence's white guy problem. The New York Times.
- Crawford, K., & Joler, V. (2018). Anatomy of an AI system: The Amazon Echo as an anatomical map of human labor, data and planetary resources. AI Now Institute and Share Lab.
- Davis, J. L., & Chouinard, J. B. (2016). Theorizing affordances: From request to refuse. Bulletin of Science, Technology & Society, 36(4), 241–248. doi:10.1177/0270467617714944. Retrieved from http://journals.sagepub.com/doi/abs/10.1177/0270467617714944
- Dwork, C., Hardt, M., Pitassi, T., Reingold, O., & Zemel, R. (2012). Fairness through awareness. Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (ITCS '12), 214–226.
- Eubanks, V. (2017). Automating inequality: How high-tech tools profile, police, and punish the poor (1st ed.). New York, NY: St. Martin's Press.
- Evans, S. K., Pearce, K. E., Vitak, J., & Treem, J. W. (2017). Explicating affordances: A conceptual framework for understanding affordances in communication research. Journal of Computer-Mediated Communication, 22(1), 35–52. doi: 10.1111/jcc4.12180
- Feldman, M., Friedler, S. A., Moeller, J., Scheidegger, C., & Venkatasubramanian, S. (2015). Certifying and removing disparate impact. Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '15), 259–268.
- Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379. doi:10.1023/B:MIND.0000035461.63578.9d.
- Funk, C., & Parker, K. (2018). Women and men in STEM often at odds over workplace equity. Retrieved from https://www.pewsocialtrends.org/2018/01/09/diversity-in-the-stem-workforce-varies-widely-across-jobs/
- Gibson, J. (2014). The ecological approach to visual perception: Classic edition. New York: Psychology Press.
- Greene, D., Hoffmann, A. L., & Stark, L. (2019). Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. Proceedings of the 52nd Hawaii International Conference on System Sciences.
- Harding, S. G. (2004). The feminist standpoint theory reader: Intellectual and political controversies. New York: Psychology Press.
- Harding, S. G., Grewal, I., Kaplan, C., & Wiegman, R. (2008). Sciences from below: Feminisms, postcolonialities, and modernities. Durham, NC: Duke University Press.
- Holstein, K., Vaughan, J. W., Daumé, H., III, Dudík, M., & Wallach, H. (2019). Improving fairness in machine learning systems: What do industry practitioners need? Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). doi:10.1145/3290605.3300830.
- Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14–29. doi:10.1080/1369118X.2016.1154087.
- Krefting, L. (1991). Rigor in qualitative research: The assessment of trustworthiness. American Journal of Occupational Therapy, 45(3), 214–222.
- Lambrecht, A., & Tucker, C. (2019). Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads. Management Science, 65(7), 2966–2981. doi:10.1287/mnsc.2018.3093.
- Latour, B. (1988). The pasteurization of France. Cambridge, Mass.: Harvard University Press.
- Latour, B. (2005). Reassembling the social: An introduction to actor-network theory. New York: Oxford University Press.
- Latour, B. (1993). We have never been modern (C. Porter, Trans.). Cambridge, MA: Harvard University Press.
- Letter to Google C.E.O. (2018). Retrieved from https://static01.nyt.com/files/2018/technology/googleletter.pdf
- Mackenzie, A. (2018). From API to AI: Platforms and their opacities. Information, Communication & Society, 22(13), 1–18.
- Moseley, A. (2011). Just war theory. The Encyclopedia of Peace Psychology.
- Murphy, R. R., & Woods, D. D. (2009). Beyond Asimov: The three laws of responsible robotics. IEEE Intelligent Systems, 24(4), 14–20. doi:10.1109/MIS.2009.69.
- Nagy, P., & Neff, G. (2015). Imagined affordance: Reconstructing a keyword for communication theory. Social Media + Society, 1(2), 2056305115603385. doi:10.1177/2056305115603385. Retrieved from http://journals.sagepub.com/doi/abs/10.1177/2056305115603385
- Neyland, D. (2016). Bearing account-able witness to the ethical algorithmic system. Science, Technology, & Human Values, 41(1), 50–76. doi:10.1177/0162243915598056.
- Norman, D. A. (1988). The psychology of everyday things. New York: Basic Books.
- Office of the Chief Scientist, Australia. (2016). Australia's STEM workforce: Science, technology, engineering and mathematics.
- O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Crown Publishing.
- Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Cambridge, MA: Harvard University Press.
- Reddy, E., Cakici, B., & Ballestero, A. (2019). Beyond mystery: Putting algorithmic accountability in context. Big Data & Society, 6(2), 1. doi:10.1177/2053951719863500.
- Russell, S. J., & Norvig, P. (2003). Artificial intelligence: A modern approach (2nd ed.). Upper Saddle River, N.J: Prentice Hall.
- Sayes, E. (2014). Actor–network theory and methodology: Just what does it mean to say that nonhumans have agency? Social Studies of Science, 44(1), 134–149. doi:10.1177/0306312713511867. Retrieved from http://journals.sagepub.com/doi/abs/10.1177/0306312713511867
- Schraube, E. (2009). Technology as materialized action and its ambivalences. Theory & Psychology, 19(2), 296–312. doi:10.1177/0959354309103543.
- Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2), 2053951717738104. doi:10.1177/2053951717738104.
- Shank, D. B., DeSanti, A., & Maninger, T. (2019). When are artificial intelligence versus human agents faulted for wrongdoing? Moral attributions after individual and joint decisions. Information, Communication & Society, 22(5), 648–663. doi:10.1080/1369118X.2019.1568515.
- Tavory, I., & Timmermans, S. (2014). Abductive analysis: Theorizing qualitative research. Chicago: University of Chicago Press.
- Veale, M., Van Kleek, M., & Binns, R. (2018). Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). doi:10.1145/3173574.3174014.
- West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating systems: Gender, race and power in AI. New York: AI Now Institute.