
International Human Rights, Artificial Intelligence, and the Challenge for the Pondering State: Time to Regulate?


ABSTRACT

This article examines the risks and advantages of early regulation of artificial intelligence (AI) from an international human rights (IHR) perspective. By exploring arguments from scholarly and policy papers from various jurisdictions on possible approaches to regulating AI, the authors identify a current trend among states to wait rather than regulate proactively. The authors challenge the idea that there is a reasonable or legitimate case to ‘wait and see’. The article presents three examples of AI systems, viewed through a women's rights lens, to outline the IHR implications of AI that states will have to grapple with in the years to come as AI usage grows. It illustrates that the absence of adequate regulation in the AI domain may itself violate IHR norms, reflecting a state of play in which governments have relinquished their obligations to protect, fulfil, and remedy. However, given that regulatory action is unlikely in the short term, the IHR monitoring framework should in the interim require states to systematically assess and report on their readiness to deal with the human rights risks of AI systems, both to limit those risks and to identify the longer-term needs of regulation.

Disclosure Statement

No potential conflict of interest was reported by the authors.

Notes

1 Glion Human Rights Dialogue, ‘Human Rights in the Digital Age: Making Digital Technology Work for Human Rights’ (Universal Rights Group 2020) 21 <www.universal-rights.org/urg-policy-reports/human-rights-in-the-digital-age-making-digital-technology-work-for-human-rights/>. All online source URLs were last accessed on 20 April 2022.

2 Ibid.

3 European Union Agency for Fundamental Rights, ‘#BigData: Discrimination in Data-Supported Decision Making’ (2018) <https://fra.europa.eu/en/publication/2018/bigdata-discrimination-data-supported-decision-making>; Filippo A Raso and others, ‘Artificial Intelligence & Human Rights: Opportunities & Risks’ (Social Science Research Network 2018) SSRN Scholarly Paper ID 3259344 <https://papers.ssrn.com/abstract=3259344>; Eileen Donahoe and Megan MacDuffee Metzger, ‘Artificial Intelligence and Human Rights’ (2019) 30 Journal of Democracy 115; Steven Livingston and Mathias Risse, ‘The Future Impact of Artificial Intelligence on Humans and Human Rights’ (2019) 33 Ethics & International Affairs 141; Mathias Risse, ‘Human Rights and Artificial Intelligence: An Urgently Needed Agenda’ (2019) 41 Human Rights Quarterly 1; Rowena Rodrigues, Konrad Siemaszko and Zuzanna Warso, ‘SIENNA D4.2: Analysis of the Legal and Human Rights Requirements for AI and Robotics in and Outside the EU’ (Zenodo 2019) <https://zenodo.org/record/4066812>; A Renda, ‘Europe: Toward a Policy Framework for Trustworthy AI’ in The Oxford Handbook of Ethics of AI (OUP 2020); Karen Yeung, Andrew Howes and Ganna Pogrebna, ‘AI Governance by Human Rights–Centered Design, Deliberation, and Oversight’ in The Oxford Handbook of Ethics of AI (Oxford University Press 2020); OHCHR, ‘A Human-Rights-Based Approach to Data’ <www.ohchr.org/Documents/Issues/HRIndicators/GuidanceNoteonApproachtoData.pdf>; OHCHR, ‘Artificial Intelligence Ensuring Human Rights at the Heart of the Sustainable Development Goals’ (10 March 2021) <www.ohchr.org/EN/NewsEvents/Pages/ArtificialIntelligence-SDGs.aspx>, to mention just a few.

4 Interestingly, we could not find any examples regarding this debate in official documents in China. Note their absence, for example in State Council of China, ‘China’s New Generation of Artificial Intelligence Development Plan (Non-Official Translation)’ (30 July 2017) <https://flia.org/notice-state-council-issuing-new-generation-artificial-intelligence-development-plan/>; Jinghan Zeng, ‘Artificial Intelligence and China’s Authoritarian Governance’ (2020) 96 International Affairs 1441.

5 Donahoe and Metzger (n 3) 115.

6 For all, see Michael Haenlein and Andreas Kaplan, ‘A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence’ (2019) 61 California Management Review 5.

7 A notable exception is Risse (n 3).

8 Donahoe and Metzger (n 3) 114.

9 Raso and others (n 3) 10; Philip Jansen and others, ‘SIENNA D4.1: State-of-the-Art Review: Artificial Intelligence and Robotics’ (Zenodo 2019) 16 <https://zenodo.org/record/4066571>.

10 Livingston and Risse (n 3) 142.

11 In a report from the relevant EU agency they prefer the term ‘Big Data’; see European Union Agency for Fundamental Rights (n 3).

12 European Commission, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts 2021, article 3(1). COM/2021/206 final (21 April 2021).

13 A similar idea is offered by the Expert Group on Artificial Intelligence at the OECD, ‘Scoping the OECD AI Principles: Deliberations of the Expert Group on Artificial Intelligence at the OECD’ (OECD Digital Economy Papers No 291, OECD 2019) 7 <https://read.oecd-ilibrary.org/science-and-technology/scoping-the-oecd-ai-principles_d62f618a-en>.

14 Tobias D Krafft, Katharina A Zweig and Pascal D König, ‘How to Regulate Algorithmic Decision-Making: A Framework of Regulatory Requirements for Different Applications’ (2020) Regulation & Governance 1 <http://onlinelibrary.wiley.com/doi/abs/10.1111/rego.12369>.

15 Christian Tomuschat, Human Rights: Between Idealism and Realism (OUP 2014) 146–47.

17 For a detailed account of the problem see Krafft, Zweig and König (n 14) s 3; for an analysis of the problems this creates to regulate it in general see Simon Chesterman, We, the Robots? Regulating Artificial Intelligence and the Limits of the Law (Cambridge University Press 2021) ch 2.

18 See a discussion of the Turing test in Chesterman (n 17) 114–15.

19 Jansen and others (n 9) 12.

20 Based on the definition in Raso and others (n 3) 11.

21 Ari Ezra Waldman, ‘Power, Process, and Automated Decision-Making Symposium: Rise of the Machines: Artificial Intelligence, Robotics, and the Reprogramming of Law’ (2019) 88 Fordham Law Review 613, 613.

22 To use the general terminology coined for management in the 1970s. See Bin Fang, ‘Decision Support System (DSS)-Form, Development and Future’, 2009 First International Workshop on Education Technology and Computer Science (7-8 March 2009), Wuhan, Hubei, China.

23 Frank Pasquale, New Laws of Robotics. Defending Human Expertise in the Age of AI (Belknap Press 2020).

24 José Luis González Álvarez, Juan José López Ossorio and Marina Muñoz Rivas, La valoración policial del riesgo de violencia contra la mujer pareja en España – Sistema VioGén (Ministerio del Interior Gobierno de España 2018) 56.

25 Leaving aside the issue of their limited reliability. See American Psychological Association, ‘The Truth About Lie Detectors (Aka Polygraph Tests)’ (www.apa.org, 5 August 2004) <www.apa.org/research/action/polygraph> accessed 20 April 2022.

26 The person needs to have a ‘well-founded fear’, according to article 1(A)(2) Convention relating to the Status of Refugees 1951.

27 On a related note, training and testing such a system to reliably recognise fear would also have serious human rights implications.

28 Mark Latonero, ‘Governing Artificial Intelligence: Upholding Human Rights & Dignity’ (2018) <https://apo.org.au/sites/default/files/resource-files/2018-10/apo-nid196716.pdf>.

29 Yeung, Howes and Pogrebna (n 3).

30 Ibid. 89–90.

31 José-Miguel Bello y Villarino and Henry Fraser, ‘Acceptable “Residual Risks” for Fundamental Rights? Understanding the Keystone of Risk-Based AI Regulations’ (AI: The New Frontier of Business and Human Rights, online conference, September 2021).

32 See a similar argument in Cass R Sunstein, ‘Maximin’ (2020) 37 Yale Journal on Regulation 940.

33 Risse (n 3).

34 Zeyi Miao, ‘Investigation on Human Rights Ethics in Artificial Intelligence Researches with Library Literature Analysis Method’ (2019) 37 The Electronic Library 914.

35 Livingston and Risse (n 3) 151.

36 Steven M Wise, ‘A Great Shout: Legal Rights for Great Apes’ in Animal Rights (Routledge 2008).

37 Peter Singer, ‘Morality, Reason, and the Rights of Animals’ in Primates and Philosophers: How Morality Evolved (Princeton University Press 2009) <www.degruyter.com/document/doi/10.1515/9781400830336-010/html>.

38 Raso and others (n 3); Australian Human Rights Commission, ‘Human Rights and Technology Final Report’ (2021) <https://tech.humanrights.gov.au/sites/default/files/2021-05/AHRC_RightsTech_2021_Final_Report.pdf>.

39 Hin-Yan Liu and Karolina Zawieska, ‘A New Human Rights Regime to Address Robotics and Artificial Intelligence’ [2016] JusLetter IT s 6.

40 An overview can be found in Sheshadri Chatterjee, NS Sreenivasulu and Zahid Hussain, ‘Evolution of Artificial Intelligence and Its Impact on Human Rights: From Sociolegal Perspective’ (2021) International Journal of Law and Management (ahead-of-print) <https://doi.org/10.1108/IJLMA-06-2021-0156>.

41 ‘AI for Good’ (AI for Good) <https://aiforgood.itu.int/>.

42 Australian Human Rights Commission (n 38) 9.

43 Aleš Završnik, ‘Criminal Justice, Artificial Intelligence Systems, and Human Rights’ (2020) 20 ERA Forum 567.

44 IAP Wogu and others, ‘“Human Rights” Issues and Media/Communication Theories in the Wake of Artificial Intelligence Technologies: The Fate of Electorates in Twenty-First-Century American Politics’ in Thangaprakash Sengodan, M Murugappan and Sanjay Misra (eds), Advances in Electrical and Computer Technologies (Springer 2020).

45 Emilie C Schwarz, ‘Human vs Machine: A Framework of Responsibilities and Duties of Transnational Corporations for Respecting Human Rights in the Use of Artificial Intelligence Notes’ (2019) 58 Columbia Journal of Transnational Law 232.

46 Monika Zalnieriute, Lyria Bennett Moses and George Williams, ‘The Rule of Law and Automation of Government Decision-Making’ (2019) 82 The Modern Law Review 425.

47 Tanmay Randhavane and others, ‘Identifying Emotions from Walking Using Affective and Deep Features’ (2020) <http://arxiv.org/abs/1906.11884>.

48 Not least at the risk of diluting the great gains that have been achieved in terms of human rights for the communities of people living with disability.

49 Adopted and opened for signature, ratification and accession by General Assembly resolution 2200A (XXI) of 16 December 1966; entered into force 23 March 1976, in accordance with Article 49.

50 Tomuschat (n 15) 146–47.

51 See General Comment 31, Nature of the General Legal Obligation Imposed on States Parties to the Covenant (UN doc CCPR/C/21/Rev 1/Add 13, 26 May 2004).

52 It is perhaps surprising how little attention – scholarly or popular – Nadia has drawn, given the assistant is voiced by actor Cate Blanchett. Blanchett’s inclusion in the project may have been designed to bring Nadia and Australia notable visibility, yet the choice of a ‘celebrity voice’ does raise questions about Nadia’s evidence-based design, the voice itself being a key component of the AI technology.

53 Christopher Knaus, ‘NDIA Denies Cate Blanchett-Voiced “Nadia” Virtual Assistant Is in Doubt’ The Guardian (21 September 2017) <www.theguardian.com/australia-news/2017/sep/22/ndia-denies-cate-blanchett-voiced-nadia-virtual-assistant-is-in-doubt>.

54 Andrew Probyn, ‘Government’s Blanchett-Voiced AI Venture for NDIS Stalls’ ABC News (21 September 2017) <www.abc.net.au/news/2017-09-21/government-stalls-ndis-virtual-assistant-voiced-by-cate-blanchet/8968074>.

55 Roger Smith, ‘Nadia Falters: Teetering Technology in the Service of Access to Justice’ (Law, Technology and Access to Justice, 6 November 2017) <https://law-tech-a2j.org/advice/nadia-falters-teetering-technology-in-the-service-of-access-to-justice/>.

56 Knaus (n 53).

57 Convention on the Rights of Persons with Disabilities 2006, art 2.

58 Ibid. art 21. Stephen Easton, ‘Nadia: The Curious Case of the Digital Missing Person’ [2019] The Mandarin <www.themandarin.com.au/106473-nadia-the-curious-case-of-the-digital-missing-person/>.

59 Jonathon Penney and others, ‘Advancing Human Rights-by-Design in the Dual-Use Technology Industry’ (2018) 20 Columbia Journal of International Affairs <https://digitalcommons.schulichlaw.dal.ca/scholarly_works/250>.

60 Evgeni Aizenberg and Jeroen van den Hoven, ‘Designing for Human Rights in AI’ (2020) 7 Big Data & Society 2053951720949566.

61 Office of the High Commissioner for Human Rights, ‘Gender Stereotyping as a Human Rights Violation’ (14 October 2013), Women's Rights and Gender Section, Office of the High Commissioner for Human Rights, Geneva, Switzerland.

62 Nora Ni Loideain, Rachel Adams and Damian Clifford, ‘Gender as Emotive AI and the Case of “Nadia”: Regulatory and Ethical Implications’ (Social Science Research Network 2021) SSRN Scholarly Paper ID 3858431 6 <https://papers.ssrn.com/abstract=3858431>.

63 Ibid.

64 Rachel Adams and Nóra Ní Loideáin, ‘Addressing Indirect Discrimination and Gender Stereotypes in AI Virtual Personal Assistants: The Role of International Human Rights Law’ (2019) 8 Cambridge International Law Journal 241.

65 UNESCO, ‘Recommendation on the Ethics of Artificial Intelligence’ para 15 <https://unesdoc.unesco.org/ark:/48223/pf0000380455>.

66 Ibid. para 19.

67 Julie Van Hoey and others, ‘Profile Changes in Male Partner Abuser After an Intervention Program in Gender-Based Violence’ (2021) 65 International Journal of Offender Therapy and Comparative Criminology 1411.

68 Shann Hulme, Anthony Morgan and Hayley Boxall, ‘Domestic Violence Offenders, Prior Offending and Reoffending in Australia’ (2019) (580) Trends and Issues in Crime and Criminal Justice 22.

69 P Randall Kropp and Stephen D Hart, ‘The Spousal Assault Risk Assessment (SARA) Guide: Reliability and Validity in Adult Male Offenders’ (2000) 24 Law and Human Behavior 101, 102.

70 Seventy-two victims in 2004 against an average of 50 victims in the 2016–2020 period: ‘Mujeres – Delegación Del Gobierno Contra La Violencia de Género’ (2021) <https://violenciagenero.igualdad.gob.es/violenciaEnCifras/victimasMortales/fichaMujeres/home.htm>.

71 According to data provided to the press, there has been a 25% decrease in the rates of recidivism since the risk assessment system began operation.

72 The system was created by Ley Orgánica 1/2004, de 28 de diciembre, de Medidas de Protección Integral contra la Violencia de Género (Organic Law 1/2004 on Integral Protective Measures against Gender-Based Violence) (BOE-A-2004-21760).

73 In 2017, the Committee on the Elimination of Discrimination against Women (CEDAW Committee) issued a general recommendation on gender-based violence: General Recommendation No 35 on gender-based violence against women, updating General Recommendation No 19 (2017) UN Doc CEDAW/C/GC/35, the third and most recent of the Committee’s now 38 recommendations to focus on GBV: Ramona Vijeyarasa, ‘CEDAW’s General Recommendation No 35: A Quarter of a Century of Evolutionary Approaches to Violence against Women’ (2020) 19 Journal of Human Rights 153. In this update to its much earlier 1992 recommendation, technology has a place in the Committee’s analysis. GBV ‘… manifests in a continuum of multiple, interrelated and recurring forms, in a range of settings, from private to public, including technology-mediated settings and in the contemporary globalized world it transcends national boundaries’: CEDAW Committee, General Recommendation No 35, para 6.

74 González Álvarez, López Ossorio and Muñoz Rivas (n 24) 89.

75 Marta Pinedo, ‘Matemáticas e inteligencia artificial contra el maltrato machista’ EL PAÍS (2 September 2021) <https://elpais.com/sociedad/2021-09-02/matematicas-e-inteligencia-artificial-contra-el-maltrato-machista.html>.

76 José Manuel Muñoz Vicente and Juan José López-Ossorio, ‘Valoración psicológica del riesgo de violencia: alcance y limitaciones para su uso en el contexto forense’ (2016) 26 Anuario de Psicologia Juridica 130.

77 Ángel González-Prieto and others, ‘Machine Learning for Risk Assessment in Gender-Based Crime’ [2021] arXiv:2106.11847 [cs, stat] <http://arxiv.org/abs/2106.11847>.

78 Pinedo (n 75).

79 Andrés Boix Palop, ‘Los algoritmos son reglamentos: La necesidad de extender las garantías propias de las normas reglamentarias a los programas empleados por la administración para la adopción de decisiones’ 264 <https://repositorio.uam.es/handle/10486/692210>.

80 Karen Hao, ‘A Horrifying New AI App Swaps Women into Porn Videos with a Click’ (MIT Technology Review, 13 September 2021) <www.technologyreview.com/2021/09/13/1035449/ai-deepfake-app-face-swaps-women-into-porn/>.

81 Ibid.

82 A brief explanation of the technology and its regulatory relevance can be found in Tyrone Kirchengast, ‘Deepfakes and Image Manipulation: Criminalisation and Control’ (2020) 29 Information & Communications Technology Law 308.

83 Andrew Ray, ‘Disinformation, Deepfakes and Democracies: The Need for Legislative Reform’ (2021) 44 UNSW Law Journal 983.

84 Kari Paul, ‘California Makes “Deepfake” Videos Illegal, but Law May Be Hard to Enforce’ The Guardian (7 October 2019) <www.theguardian.com/us-news/2019/oct/07/california-makes-deepfake-videos-illegal-but-law-may-be-hard-to-enforce>.

85 Henry Ajder and others, ‘The State of Deepfakes: Landscape, Threats, and Impact’ (Deeptrace 2019) 1 <https://regmedia.co.uk/2019/10/08/deepfake_report.pdf>.

86 Ibid. 2; Travis L Wagner and Ashley Blewer, ‘“The Word Real Is No Longer Real”: Deepfakes, Gender, and the Challenges of AI-Altered Video’ (2019) 3 Open Information Science 32.

87 See a similar reasoning in Amrita Khalid, ‘Deepfake Videos Are a Far, Far Bigger Problem for Women’ (Quartz, 9 October 2019) <https://qz.com/1723476/deepfake-videos-feature-mostly-porn-according-to-new-study-from-deeptrace-labs/>.

88 Crimes Act 1900 (NSW) s 91N.

89 Noelle Martin, Online Predators Spread Fake Porn of Me. Here’s How I Fought Back (Ted Talk, 2017) <www.ted.com/talks/noelle_martin_online_predators_spread_fake_porn_of_me_here_s_how_i_fought_back>.

90 Kirchengast (n 82). As illustrated by wider Australian law which may still be behind the times: John Davidson, ‘Australian Law behind the Times on Deepfake Videos’ Australian Financial Review (8 July 2019) <www.afr.com/technology/australian-law-behind-the-times-on-deepfake-videos-20190703-p523pi>.

91 CEDAW Committee, General Recommendation No 19, Violence against women (Eleventh Session, 1992) para 12.

92 Tyrone Kirchengast and Thomas Crofts, ‘The Legal and Policy Contexts of “Revenge Porn” Criminalisation: The Need for Multiple Approaches’ (2019) 19 Oxford University Commonwealth Law Journal 1, 3–4.

93 Ramona Vijeyarasa, ‘In Pursuit of Gender-Responsive Legislation: Transforming Women’s Lives through the Law’ in Ramona Vijeyarasa (ed), International Women’s Rights Law and Gender Equality: Making the Law Work for Women (Routledge/Taylor and Francis 2021) 9.

94 Patricia J Williams, The Alchemy of Race and Rights (Harvard University Press 1991) 146–48.

95 Ni Loideain, Adams and Clifford (n 62).

96 Raphaële Xenidis and Linda Senden, ‘EU Non-Discrimination Law in the Era of Artificial Intelligence: Mapping the Challenges of Algorithmic Discrimination’ (Social Science Research Network 2019) SSRN Scholarly Paper ID 3529524 <https://papers.ssrn.com/abstract=3529524>.

97 José Luis González and María José Garrido, ‘Satisfacción de las víctimas de violencia de género con la actuación policial en España. Validación del Sistema VioGen’ (2015) 25 Anuario de Psicología Jurídica 29, 30.

98 See Tony Krone, ‘International Police Operations against Online Child Pornography’ (2005) 296 Trends and Issues in Crime and Criminal Justice/Australian Institute of Criminology <https://search.informit.org/doi/abs/10.3316/agispt.20053519>; or political, for example OSCE Parliamentary Assembly, ‘Brussels Declaration – Resolution on Combating Trafficking and Exploitation of Children in Pornography’ (2006) <www.legislationline.org/documents/id/8534>.

99 Council of Europe, ‘The CAHAI Held Its 6th and Final Plenary Meeting’ (Artificial Intelligence, December 2021) <www.coe.int/en/web/artificial-intelligence/newsroom/-/asset_publisher/csARLoSVrbAH/content/outcome-of-cahai-s-6th-plenary-meeting>.

100 David Leslie and others, ‘Human Rights, Democracy, and the Rule of Law Assurance Framework for AI Systems: A Proposal’ [2022] arXiv:2202.02776 [cs] <http://arxiv.org/abs/2202.02776>.

101 Joseph L Bower and Clayton M Christensen, ‘Disruptive Technologies: Catching the Wave’ (1995) 73 Harvard Business Review 43.

102 MJ Copps, ‘Disruptive Technology … Disruptive Regulation’ (2005) 2005 Michigan State Law Review 309.

103 Dmitrii Trubnikov, ‘Analysing the Impact of Regulation on Disruptive Innovations: The Case of Wireless Technology’ (2017) 17 Journal of Industry, Competition and Trade 399.

104 Daniel Gervais, ‘The Regulation of Inchoate Technologies’ (2010) 47 Houston Law Review 665, 682–84.

105 Gonenc Gurkaynak, Ilay Yilmaz and Gunes Haksever, ‘Stifling Artificial Intelligence: Human Perils’ (2016) 32 Computer Law & Security Review 749.

106 Ramona Vijeyarasa and Mark Liu, ‘Fast Fashion for 2030: Using the Pattern of the Sustainable Development Goals (SDGs) to Cut a More Gender-Just Fashion Sector’ [2022] Business and Human Rights Journal 45. <https://doi.org/10.1017/bhj.2021.29>.

107 Geordan Graetz and Daniel M Franks, ‘Incorporating Human Rights into the Corporate Domain: Due Diligence, Impact Assessment and Integrated Risk Management’ (2013) 31 Impact Assessment and Project Appraisal 97.

108 Maddalena Neglia, ‘The UNGPs – Five Years on: From Consensus to Divergence in Public Regulation on Business and Human Rights’ (2016) 34 Netherlands Quarterly of Human Rights 289.

109 Claire Methven O’Brien, ‘Transcending the Binary: Linking Hard and Soft Law Through a UNGPS-Based Framework Convention’ (2020) 114 American Journal of International Law 186.

110 For the limitations of this approach see Omri Rachum-Twaig, ‘Whose Robot Is It Anyway?: Liability for Artificial-Intelligence-Based Robots’ [2020] U Ill L Rev 1141.

111 Jon Henley and Robert Booth, ‘Welfare Surveillance System Violates Human Rights, Dutch Court Rules’ The Guardian (5 February 2020) <www.theguardian.com/technology/2020/feb/05/welfare-surveillance-system-violates-human-rights-dutch-court-rules>; Luke Henriques-Gomes, ‘Robodebt: Court Approves $1.8bn Settlement for Victims of Government’s “Shameful” Failure’ The Guardian (11 June 2021) <www.theguardian.com/australia-news/2021/jun/11/robodebt-court-approves-18bn-settlement-for-victims-of-governments-shameful-failure>.

112 George P Fletcher, Rethinking Criminal Law (OUP 2000) xix.

113 Glion Human Rights Dialogue (n 1) 21.

114 ‘The Toronto Declaration: Protecting the Right to Equality and Non-Discrimination in Machine Learning Systems’ <www.torontodeclaration.org/declaration-text/english/>.

115 Rob Toews, ‘Here Is How The United States Should Regulate Artificial Intelligence’ (Forbes, 28 June 2020) <www.forbes.com/sites/robtoews/2020/06/28/here-is-how-the-united-states-should-regulate-artificial-intelligence/>.

116 Olivia J Erdélyi and Judy Goldsmith, ‘Regulating Artificial Intelligence: Proposal for a Global Solution’, Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (ACM 2018) <https://dl.acm.org/doi/10.1145/3278721.3278731>.

117 European Commission, Proposal for a Regulation (n 12).

118 Chesterman (n 17) 3–4.

119 Toews (n 115).

120 Norms which should also be gender-responsive: Vijeyarasa (n 93).

121 See the examples in Sandra Fredman, Human Rights Transformed: Positive Rights and Positive Duties (OUP 2008) and Grégoire Webber, Paul Yowell and Richard Ekins, Legislated Rights: Securing Human Rights through Legislation (Cambridge University Press 2018).

122 European Commission, Proposal for a Regulation (n 12).

123 Eduardo Bismarck, Bill, ‘Estabelece princípios, direitos e deveres para o uso de inteligência artificial no Brasil, e dá outras providências’ (Establishes principles, rights and duties for the use of artificial intelligence in Brazil, and makes other provisions) 2020 [PL 21/2020].

124 Rodrigues, Siemaszko and Warso (n 3) 65.

125 Paul Nemitz, ‘Foreword: Power in Times of Artificial Intelligence’ (2020) 2 Delphi: Interdisciplinary Review of Emerging Technologies 158, 159, 8.

126 Rodrigues, Siemaszko and Warso (n 3) 63–64.

127 Christiaan Van Veen and Corinne Cath, ‘Artificial Intelligence: What’s Human Rights Got To Do With It?’ (Medium, 18 May 2018) <https://points.datasociety.net/artificial-intelligence-whats-human-rights-got-to-do-with-it-4622ec1566d5>.

Additional information

Funding

This work was partially supported by the Australian Research Council under Grant CE200100005 and a Fulbright-Schuman fellowship (Bello y Villarino) and the Women's Leadership Institute Australia Research Fellowship under Grant PRO-20-10998 (Vijeyarasa). The authors express their gratitude to all funders who enabled this research and to the anonymous reviewers.
