
The Development of AI Ethics in Japan: Ethics-washing Society 5.0?

Received 10 Feb 2023, Accepted 23 Oct 2023, Published online: 21 Nov 2023

Abstract

This paper examines how AI ethics has been developed at the national level in Japan, and what this process reveals about broader Japanese state imaginaries of how advanced technology should be developed and used, and what a future with these technologies should look like. Key developments in the Japanese government’s approach to AI ethics and governance between 2014 and 2023 are laid out, based on an analysis of official reports and policy documents supplemented by data collected via semi-structured interviews with three expert members of the committees that formulated several key sets of ethical principles. The paper considers Japan’s positioning in the global race to develop AI ethics principles over this period, as well as the imaginary of AI within the wider historical context of imaginaries about the knowledge society in Japan. I suggest three ways in which AI ethics has been understood and instrumentalized in the Japanese context, and argue that the main methodology used to date—ELSI—complements the government’s utopian and techno-determinist imaginaries of the future while concealing a deeply conservative approach that serves to reproduce structural inequalities and discrimination despite the apparent internationalism and progressive values that are repeatedly expressed in state-promoted ethical principles.

1 Introduction

In recent years, criticism has been building about the proliferation of sets of ethical principles for artificial intelligence (AI) among technology corporations, national governments and international organizations, even as ethically problematic deployments of AI have also multiplied. Wagner (2018) has critiqued practices of “ethics-washing” or selective “ethics shopping” by corporations and governments in an effort to delay regulation of the sector, while Mittelstadt (2019) suggests that even seemingly universally accepted ethical principles “hide deep political and normative disagreement.” Munn goes further, arguing that AI ethical principles are “useless” because

these are meaningless principles which are contested or incoherent, making them difficult to apply; they are isolated principles situated in an industry and education system which largely ignores ethics; and they are toothless principles which lack consequences and adhere to corporate agendas. (Munn 2023; original emphases)

Such criticisms appeared further validated when a succession of large tech companies, despite adopting such principles, fired or sidelined their ethics teams (Grant and Weise 2023). However, while sets of AI ethics principles may have met with limited success so far in realizing the ethical design, development and deployment of AI systems, a focus on how such principles are developed can shed light on the processes by which ethical values are understood, interpreted, and codified, by whom and according to what logics; how ethical values relating to advanced technologies are intertwined with imaginaries about those technologies and technological futures; and the political and economic ends towards which AI ethics principles are aimed.

This paper explores how an approach to AI ethics has been developed at the national level in Japan, and what this process reveals about broader Japanese state imaginaries of how advanced technology should be developed and used, and what a future with these technologies should look like. I first lay out in chronological order significant developments in the Japanese government’s approach to AI ethics and governance between 2014 and 2023, based on an analysis of official reports and policy documents. This analysis is supplemented by data collected via semi-structured interviews conducted in English in 2021, each around an hour long, with three expert members of the committees that formulated several key sets of ethical principles. The paper considers Japan’s positioning in the global race to develop AI ethics principles over this period, as well as the socio-technical imaginary (Jasanoff and Kim 2009) of AI within the wider historical context of state imaginaries about the knowledge society and Society 5.0, as the latest technological agent of radical yet elusive societal transformation. In the conclusion, I suggest three ways in which AI ethics has been understood and instrumentalized in Japan. I argue that the particular configuration of these approaches produces, despite its apparent internationalism and progressive values, a deeply conservative and narrow view of AI ethics: one that lacks mechanisms for practical implementation and serves to perpetuate rather than address deep and persistent social inequalities and discrimination that seem likely to be further entrenched by AI technologies, exemplifying ethics-washing on a national scale.

2 Developments in AI Ethics in Japan 2014–2023

Official efforts to develop and codify a set of ethical principles in the field of AI in Japan began relatively early by international standards, in 2014. The impetus for this did not primarily arise from concern about how AI was being or might be used in the future, but instead was the direct result of a sexism debacle at the Japanese Society for Artificial Intelligence (JSAI). Robertson has described how the editorial board of the JSAI’s journal held a competition for members to design the front cover, and selected as the winning entry an illustration of a humanoid maid in the form of a young woman holding a broom in one hand and a book in the other. The decision to use this picture was widely criticized as blatantly sexist, as it “equates women with housework and implies that book-learning distracts working women from their tasks at hand” (Robertson 2018: 92), and exposed an apparent lack of concern or awareness among the members of the JSAI judging the entries about why the use of such imagery was problematic and how it would be interpreted within and beyond the community of AI researchers and professionals. Reactions on social media critiqued the decision as indicative of a lack of gender diversity, inclusiveness and ethical self-awareness within the society and its leadership, both of which were overwhelmingly dominated by men (Ibid.).

Amid the public backlash, Ema Arisa, a member of JSAI and researcher in science and technology studies (STS), successfully lobbied the organization to set up an ethics committee (Matsuo et al. 2015; Ema 2018).Footnote1 This committee included representatives from JSAI as well as the director of a private consulting company, a journalist, and a science fiction novelist. The committee’s approach was rooted in ELSI (ethical, legal, and social implications of technology),Footnote2 and the committee eventually released a statement of ethical guidelines in 2017, to serve as what it called “a moral foundation for JSAI members to become better aware of their social responsibilities and encourage effective communications with society” (JSAI 2017a). While these guidelines had initially seemed intended to signal concrete action that would prevent a repeat of the scandal that had led to their development, the final document made no mention of sexism or gender discrimination. In a blog post announcing the publication of the ethical guidelines, the chair of the committee expressed its aim as “exploring the relationship between artificial intelligence research/technology and society, and striving to effectively communicate it to the society” (JSAI 2017b). He noted that the ethical guidelines produced were “expressing what is obvious, that the JSAI conducts research activities for the benefit of society,” and quoted another member of the committee who stated that the guidelines were intended to reassure the public “that researchers are aiming to create a better society, and that they are not mad scientists” (Ibid.).
These comments appeared to frame the guidelines as a response to an unfortunate public misunderstanding of the work of AI developers—a problem of communication rather than of legitimate concerns about the views of members of Japan’s leading professional association of AI practitioners, who had appeared unaware of any problem with foregrounding sexist imagery to represent their work. The blog post went on to reassure the members of JSAI that the guidelines would have no impact on their research and development activities:

the Ethical Guidelines are not intended to come into practice immediately … [T]here is no pressing plan to proceed with the creation of a system to check whether a submitted manuscript aligns with the proposed Ethical Guidelines, or whether a specific artificial intelligence research conforms to these guidelines. By issuing the Ethical Guidelines, we hope to reach a consensus and promote deeper discussions of these ethical guidelines in the research and development of artificially intelligent technologies, with the general public and among the JSAI members. If the majority of the public and the JSAI members wish to see some sort of a practical process put in place, the Ethics Committee intends to further review and iterate these Ethical Guidelines through extensive dialogue. (Ibid.)

The values listed in the guidelines included contribution to humanity, abidance by laws and regulations, respect for privacy, fairness, security, acting with integrity, accountability and social responsibility, and communication with society and self-development. A unique aspect of this code compared with those created subsequently by other organizations in Japan and elsewhere was the final value, which stated that AI itself should abide by the guidelines “in order to become a member or a quasi-member of society” (JSAI 2017a). As Gal notes, this value followed the spirit of Isaac Asimov’s laws of robotics and, in turn, the laws of robotics from Tezuka Osamu’s hugely popular manga series Astro Boy (Tetsuwan Atomu), which Asimov’s work had influenced (Gal 2020: 619)—indeed, both Asimov and Astro Boy were explicitly namechecked in the public statement on the release of the guidelines (JSAI 2017b). It also illustrates how science-fiction imaginaries of AI and robots are often employed in Japan as a way to engage the public while emphasizing the positive benefits rather than the risks of such technologies (Ito 2007); it is notable that one of the nine members of the committee was a science fiction author, Fujii Taiyo.Footnote3 At the same time, this value implied that any distinction between humans and AI systems as ethical subjects was porous and would be overcome in the future; this idea of porosity, expressing an underlying Japanese perspective of technoanimism, is often emphasized by AI researchers as a key cultural difference between Japan and “the West” in perceptions of AI and robots (see, for example, Gal 2020; Sakura 2022; contra, see Frumer 2018; Gygi 2018).

In April 2016, then Prime Minister Abe Shinzo established the AI Technology Strategy Council, and the next month the Advisory Board on AI and Human Society was set up to examine potential social and ethical issues related to future AI systems, also using an ELSI approach and releasing a report the following year (Advisory Board on AI and Human Society 2017). In October 2016, the Ministry of Internal Affairs and Communications (MIC) convened the “Conference toward AI Network Society” with a similar remit; this group also drew on the JSAI’s ethics committee to help set the agenda and involved several of the same experts.Footnote4 The Conference group published its “Draft AI Research and Development Guidelines for international discussions” in 2017 (hereafter “AI R&D Guidelines”), and its “Draft AI Utilization Principles” in 2018 (subsequently revised and published as the “AI Utilization Guidelines” in 2019).

The AI R&D Guidelines document argues that: “With proactive development and utilization of AI, Japan can solve various problems arising from challenges that it is confronted with (such as a declining birthrate and aging population),” while making “significant contributions to the international community” (Conference toward AI Network Society 2017: 2). As in the framing of the JSAI guidelines, the introduction to the AI R&D Guidelines strongly emphasizes the document’s provisional nature, stating that given the ongoing development of AI technologies, “it is NOT appropriate to treat international AI R&D Principles and AI R&D Guidelines … as aimed for the introduction of regulations. Rather, this draft is drawn up as a proposal of guidelines that will be internationally shared as non-regulatory and non-binding soft law” (Conference toward AI Network Society 2017: 2; original capitalization). It further argues that sector-specific discussions should be held separately to decide guidelines appropriate for each field, a point repeated in the AI Utilization Guidelines.

The AI R&D Guidelines document lays out five overarching “basic philosophies”: to achieve a human-centered society where all humans can live their lives “in harmony with AI networks, while human dignity and individual autonomy are respected”; to share guidelines and best practices internationally among stakeholders; to ensure an appropriate balance between the benefits and risks of AI networks; to ensure AI research and development is not hindered so as to safeguard “technological neutrality” and avoid imposing excessive burdens on developers; and to review the guidelines regularly and revise them flexibly as necessary. Nine specific “principles” of AI research and development are then outlined: collaboration (including interconnectivity and interoperability); transparency (including explainability of AI decision-making); controllability; safety; security; privacy; ethics (including respect for human dignity and individual autonomy, the avoidance of unfair discrimination, and ensuring AI systems “do not unduly infringe the value of humanity, based on the International Human Rights Law and the International Humanitarian Law”); user acceptance (including providing ease of use, accessibility, and opportunities for choice); and accountability.

The AI Utilization Guidelines are aimed at developers, users (including, somewhat confusingly, AI service providers, as well as their business and retail customers), and data providers. As these guidelines cover developers as well as end users, there is considerable overlap with the AI R&D Guidelines. This time, seven basic philosophies are laid out: achieving a human-centered society where human dignity and individual autonomy are respected; respecting and including the diversity of AI users; achieving a sustainable society through AI; balancing benefits and risks of AI; appropriately assigning roles among stakeholders based on their abilities and knowledge; sharing the guidelines as non-binding soft law and best practices; and regularly reviewing and flexibly revising the guidelines. Ten principles are listed, together with voluntary measures that could be taken to realize each principle: proper utilization (including the possibility of human intervention where appropriate); data quality; collaboration; safety; security; privacy; human dignity and individual autonomy; fairness; transparency; and accountability. Unlike the AI R&D Guidelines, this document does not specifically mention human rights.

These two sets of principles feature some overlap with the JSAI’s principles (including privacy, security, fairness, and accountability) but dispense with the idea that AI itself should follow the principles, and add new principles of transparency, collaboration, and safety, among others. These documents were—somewhat unusually—published in English, reflecting the government’s aim of driving discussion at international fora, and indeed, Japanese delegates presented the guidelines at venues including meetings of the OECD, G7, and G20.

An interviewee who was a member of the Conference toward AI Network Society explained that these principles were developed by combining and integrating existing principles from international sources including the Institute of Electrical and Electronics Engineers (IEEE) and the Future of Life Institute, both based in the United States. Another interviewee who helped draft the principles said that it was challenging to find consensus between computer scientists in the group who argued that humans would not be able to control AI in the future, and legal scholars who said they must: “So we tried to take an abstract and soft law approach [so that] stakeholders and experts from various fields could agree … We refrained from writing principles in a manner like legal code or hard law.”

By this time, the key role that AI would play in the government’s plans for the future of Japan’s economy and society was becoming increasingly evident, as it was in many other countries. The Cabinet Office’s 2016 Fifth Science and Technology Basic Plan, which set out the government’s agenda for science, technology, and innovation (STI) for the period 2016–2021, introduced and was premised on the concept of “Society 5.0,” which was presented as “an ideal form of our future society” (Government of Japan 2016: 11), “[a] human-centered society that balances economic advancement with the resolution of social problems by a system that highly integrates cyberspace and physical space.”Footnote5 Society 5.0 supposedly represented a new paradigm following four previous “versions” of society (hunter-gatherer, agricultural, industrial, and information societies), and was further described in the Fifth Basic Plan as:

a world-leading super smart society … a society that is capable of providing the necessary goods and services to the people who need them at the required time and in just the right amount; a society that is able to respond precisely to a wide variety of social needs; a society in which all kinds of people can readily obtain high-quality services, overcome differences of age, gender, region, and language, and live vigorous and comfortable lives. (Government of Japan 2016: 13)

Society 5.0 has been adopted as the main organizing concept and imaginary for Japan’s future in almost all subsequent major STI policy documents, including the Sixth Science and Technology Basic Plan (2021–2026).

The Fifth Basic Plan presented AI as an essential pillar of the development of Society 5.0, together with the internet of things, robotics, and other digital technologies and data infrastructure as well as related educational programs. It also explicitly stated the need to undertake research on ELSI in relation to these technologies—a reflection of ELSI’s ongoing dominance in policy discourse and research practice in Japan as the main paradigm and framework for doing ethics as an adjunct to technoscientific research and development projects (Mikami et al. 2021). Continuing the emphasis from the JSAI ethical guidelines on an additive post-hoc ethics that should react to the development of technologies, the Plan stated that: “As and when necessary in response to advances in leading-edge research, the government and academic societies should also work on formulating ethical guidelines” (Government of Japan 2016: 62; my emphasis). The Fifth Basic Plan is further notable for its theme of the need for Japan to extend its global engagement and international leadership, actively contributing to the shaping of the ecosystem within which the future global economy, including transnational technologies such as AI, would operate:

As nations become more and more interdependent, Japan must actively contribute to a global framework for handling such issues … Amidst the great ongoing changes in the international environment, Japan needs to raise its international profile by utilizing its ability in STI and by demonstrating leadership in the pursuit of common interests shared by Japan and the world. Given all this, it is important in the promotion of STI policy for Japan to always take the global perspective into consideration and act strategically in its international collaboration. (Government of Japan 2016: 4–9)

Reflecting AI’s growing importance to national and international STI strategy, in 2018, discussions about AI ethics moved from ministry-level initiatives to the highest bureaucratic tier, as the Cabinet Office set up the “Council for Social Principles of Human-centric AI.” This body was closely related to the Conference toward AI Network Society, and again featured an overlapping membership. In March 2019, following a public consultation, the Council published its “Social Principles of Human-Centric AI,” which essentially consolidated the AI R&D Guidelines and AI Utilization Guidelines into a single document. Yet the tone of the document is strikingly utopian, reaffirming the goal of a Society 5.0 in which “national borders, industries, academia, governments, race, gender, nationality, age, political convictions and religion” are transcended, in order to achieve “complete internationalization” (tetteitekina kokusaika). It also describes a technology-led approach in which societal development is explicitly subordinated to technological innovation. In order to achieve Society 5.0, the document notes the wide-ranging social changes required first to realize an “AI-Ready Society,” described as one that “as a whole has undergone the necessary changes to maximize the benefits of AI, enjoys the benefits of AI, or has introduced AI immediately when needed and is in a state of being able to receive the benefits. It means ‘a society adapted to the use of AI’” (Council for Social Principles of Human-Centric AI 2019: 3). These changes were to encompass human potential (including education and training), social systems (which must flexibly respond to the development of AI), industrial structures (which must similarly be flexible vis-à-vis labor and business practices in response to AI developments), innovation systems, and governance.
The document calls for international consensus on these issues, with Japan taking “a leadership role in international discussions with the goal of establishing an AI-Ready Society worldwide,” in other words helping bring into existence a new international community organized around the future rule-based development of AI.

The document lays out three basic philosophies: the values of dignity, diversity and inclusion, and sustainability. The social principles of human-centric AI themselves are then listed: human-centricity, including protection of fundamental human rights guaranteed by the Japanese Constitution and “international standards”; education and literacy; privacy protection; security; fair competition without domination by specific countries or companies or leading to concentrations of wealth or power biased towards certain stakeholders; fairness, accountability, and transparency in AI decision making; and innovation.

An interviewee who served as a member of the Council for Social Principles of Human-Centric AI explained that, again, these principles were created through a process of discussing, comparing and synthesizing existing international sets of principles and guidelines. Indeed, minutes of the Council’s meetings indicate close comparison of the wording of the proposed principles against that of counterparts such as the European Commission’s “Ethics Guidelines for Trustworthy AI,” as well as discussions about how to navigate geopolitically sensitive topics.Footnote6 The interviewee framed the creation of the Social Principles as a balancing act in order to maintain their acceptability in particular to both the US and China. When I asked whether the Social Principles reflected any concerns specific to Japan, the interviewee noted that there was debate about whether to include a “uniquely Japanese” principle in line with Article Nine of the Japanese Constitution, which renounces war as a sovereign right of the nation as well as the threat or use of force to solve international disputes. The proposed principle would have explicitly prohibited the military development or use of AI, for example in autonomous weapon systems. According to the interviewee’s account:

the other members [of the Council for Social Principles of Human-Centric AI], they mostly kind of hesitated—you know, it’s kind of a political issue, I totally understand that it’s really political and it’s really scary to write that kind of thing in an official Japanese document … so the Japanese position is kind of like a balancing act between this political or geopolitical dimension.

This proposed principle was dropped in order to ensure that it would be possible to reach consensus on the set of principles at the international level. The interviewee concluded that the principles ultimately published did not reflect any specifically “Japanese” values.Footnote7 A second interviewee who was a member of the Conference toward AI Network Society argued that the very idea of an “AI network society” that envisaged various kinds of AI systems and humans interconnected via a network was itself inherently based on a Japanese mode of thought that lacked a “Western” differentiation between subject and object, and saw humans and AI as interdependent and co-related. However, the third interviewee argued that the idea of an “AI network society” had less to do with any philosophical or cultural grounding than with the “political struggle between three ministries” over who should “own” AI, with MIC staking its claim to this emerging field by stressing the network element of AI, rather than its status as an industry (which would put it under the remit of the Ministry of the Economy, Trade and Industry, or METI) or as a technology (which would put it under the Ministry of Education, Culture, Sports, Science and Technology). They explained that this was why MIC introduced the term “AI network society” in the first place instead of simply using the term “AI.”

In June 2019, the Japanese government published its national AI Strategy, subtitled “AI For Everyone: People, Industries, Regions and Governments.” The introduction stated that competition between the US and China had “left Japan in a position of having fallen behind” in AI and therefore in need of the significant further measures presented in the document to catch up. The AI Strategy restated the ambition to realize Society 5.0 and, among other provisions, adopted the Social Principles of Human-Centric AI. It also listed four strategic objectives: developing human resources to lead “the world in being aligned with the needs of the AI era”; becoming a frontrunner in the real-world application of AI to industry; realizing a “sustainable society that incorporates diversity,” particularly highlighting women, older people, and foreigners; and assuming a leadership role in building research, education, and social infrastructure networks related to AI. The strategy further focused on the need for a unified effort between public and private sectors, with the government ensuring “immediate removal of institutional and policy obstacles” and the private sector complying with the Social Principles of Human-Centric AI—a bargain aimed at encouraging the development of the sector with responsible self-regulation.

The document set out a breakdown of the four strategic objectives and the concrete measures to be taken by different ministries to achieve them. Among these was a section on AI ethics, which incorporated the Social Principles and the key notion of human-centricity, and called for ongoing discussions among international groups including the European Union, OECD, G7, and UNESCO. The priority for Japan to assert an international leadership role was emphasized throughout the document, as “it has now become difficult for Japan to conduct a domestic-only agenda for various AI related technology R&D” (Government of Japan 2019: 24). Anzai Yuichiro, chair of the government’s Advisory Council for the AI Strategy Implementation Council, has framed ELSI as a key initiative for achieving the four strategic objectives (Anzai 2021), and a 2022 update to the Strategy in response to the COVID-19 pandemic explicitly labels the section on AI ethics as “ELSI” (Government of Japan 2022).

All three interviewees argued that the AI R&D Guidelines, AI Utilization Guidelines, and Social Principles documents had helped set the international agenda, a view reflected in the final version of the AI Utilization Guidelines: “Japan has led international discussions on the principles of AI. As a result, international consensus has been built on the concept of the principles” (Conference toward AI Network Society 2019: 3). They stated that these sets of guidelines had in particular influenced the OECD Principles on AI, which were adopted in May 2019 and taken as the model for the AI principles endorsed by the G20 in June 2019. The OECD Principles on AI were adopted by the multilateral Global Partnership on AI (GPAI) in June 2020, while the G20 AI Principles have been agreed by more than 50 national governments including both the US and China. In 2021, Japan adopted UNESCO’s human-rights-based Recommendation on the Ethics of AI, which includes further values and principles broadly aligned with the earlier OECD Principles. Japan has also sent a delegation to the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI), where it has advocated for a non-binding soft law approach with sectoral regulation where necessary in order to avoid stifling innovation (Iida 2021).

Individual government ministries have also produced their own guidelines relating to AI as part of the government’s directive to prepare sector-specific governance and regulatory approaches. In 2018, METI released its “Contract Guidelines for the Use of AI and Data,” and has since published a string of reports in English on AI governance and solicited international public comments on these documents. A 2019 report entitled “Governance Innovation: Redesigning Law and Architecture for Society 5.0” argued that governance rules should not be set in advance via rule-based regulations, but that instead “goal-based regulation” should be pursued with incentives and enhanced accountability to ensure respect for human rights, fairness, and safety. In 2021, METI published two follow-ups to the 2019 report: an interim report entitled “AI Governance in Japan Ver. 1.0” and “Governance Innovation Ver. 2: A Guide to Designing and Implementing Agile Governance,” both of which focused on reimagining governance for the cyber-physical systems expected to be part of Society 5.0. “Governance Innovation Ver. 2” recommended agile governance without predefined fixed rules, which could react to rapid technological and societal changes and which aimed at achieving goals such as privacy, freedom of movement, and fair competition, rather than being process-led and regulated procedurally.

The advisory board of the Ministry of Health, Labour and Welfare (MHLW) for promoting AI utilization in the healthcare field produced a 2017 report stating that AI was a vital technology for meeting the challenges of an aging population and fulfilling the ministry’s Health Care 2035 proposal, and making a brief mention of the need to advance discussions on ELSI to promote the use of AI while preventing its potential negative impacts (MHLW 2017: 30). MHLW subsequently funded a research group on ELSI in relation to medical AI led by Inoue Yusuke, a professor of medical sciences at the University of Tokyo, which published several reports between 2019 and 2021 (Inoue et al. 2019, 2020, 2021). The Japan Medical Association’s Council for the Promotion of Science produced a 2018 report (JMA 2018) that similarly took an ELSI approach to the challenges of AI in medicine.

Amid renewed hype surrounding the release of generative AI tools such as OpenAI’s GPT series of large language models, in 2023 Japan hosted the G7 Summit, which led to the announcement of the “Hiroshima AI process” aimed at coordinating discussions about generative AI with the OECD and GPAI to cover topics such as its governance and responsible utilization (G7 Citation2023). In the same year, Japan’s governing Liberal Democratic Party released a white paper focusing on generative AI and calling for a new national AI strategy that, among other measures, would “reconsider the regulatory gap” fast developing between Japan and the US, China, and the EU. The report advocated for a regulatory approach closely aligned with those developed by these countries, while accelerating deregulation across individual sectors to keep pace with progress in AI technology and enhance Japan’s competitiveness (LDP Citation2023).

3 Discussion

As can be seen in the preceding section, the approach taken by Japan’s national government between 2014 and 2023 to developing an AI ethics and governance ecosystem and contributing to international discussions and consensus-building in this field has combined universalistic principles, offering little concrete detail, few definitions of key terms, and no enforcement mechanisms, with the promise of more detailed sectoral guidelines in the future. Kozuka (Citation2019) notes that this approach of “state-induced self-regulation” reflects Japan’s approach to various other areas of science and technology governance, although this may change in the coming years if and when the government decides to address the “regulatory gap” mentioned above by seeking, for example, some form of adequacy arrangement with the EU’s upcoming AI Act, as it previously did with the General Data Protection Regulation (GDPR) (Wang Citation2020). In this final section I aim to draw out three different strands of thinking around AI ethics in Japan, based on the approach developed to date and within the context of previous imaginaries of science and technology in Japan.

Firstly, we can clearly see the continuing dominance of ELSI in state approaches to AI ethics, which has implications for how the role of AI ethics is understood. As Mikami et al. (Citation2021) note, ELSI has become firmly embedded in Japan’s policy framework following the first ELSI program in the Human Genome Project in the United States from 1990, and particularly since the 1999 UNESCO Budapest Declaration, with the first Japanese government policy document mentioning ELSI in 2004. They argue that “ELSI became framed in the country’s policy framework as ‘problems’ that ought to be resolved if society is to benefit from advancements in science and technology” (Citation2021, 89), with sections devoted to ELSI in the Third and Fourth Science and Technology Basic Plans, and a similar call in the Fifth Plan for interdisciplinary research to address ELSI in the social implementation of AI. This approach is still widely used by the government despite the decline of ELSI in Europe and North America and the growing popularity of alternative “post-ELSI” approaches (Balmer et al. 
Citation2015), notably responsible innovation (RI) or responsible research and innovation (RRI) in the EU, the United Kingdom and beyond since the adoption of RRI by the European Commission research agenda in 2010 (Ryan and Blok Citation2023).Footnote8 While interpretations and practices of ELSI differ, Mikami and colleagues note the drawbacks of this approach as deployed in the context of Japan, particularly the implication that consideration of ethical, legal and social implications should happen downstream of, additional to, and separate from the work of technoscientific research and development, accompanied by built-in assumptions about the distribution of roles, responsibilities, and power among those involved in such projects—in other words, the subordination of ethics and ethicists to technology and technologists in innovation projects.Footnote9 They also note that despite the recent inclusion of the social sciences and humanities in Japan’s STI policy framework in 2021, there remains a predominant view that their function is, to quote the Council for Science, Technology and Innovation, “to smoothly promote practical use of R&D results in society” including by establishing a research structure that enables them to “respond to ethics, legal systems, and social issues” (Mikami et al. Citation2021: 90; quoting CSTI Citation2020: 3). Mikami and colleagues build on previous critiques to argue that one danger of an ELSI approach is that STS and the social sciences more widely are effectively being co-opted to serve the government’s STI agenda, based on the government’s deficit model of public understanding of science, rather than contributing to it through critical engagement.
We can certainly see this deficit model in the JSAI’s framing of its ethical guidelines as serving a communicative function to overcome the public’s potential misunderstanding of AI professionals, rather than addressing the substantive problems of gender discrimination and lack of inclusiveness and diversity in the sector.

Secondly, and closely related, is the idea of AI ethics serving as a (geo)political tool for enhancing reputation, influence, and international competitiveness. The Japanese government has made a significant political investment in the development of ethical principles to steer future applications of the technology as part of its broader AI strategy. This effort seems motivated by a desire to lead and to be seen to lead on the world stage, especially by a domestic audience, and particularly ahead of regional competitors such as China and South Korea. In this sense, the actual content of the principles has seemed to matter less than the early contributory role that Japan played within an emerging alignment of states committed to an international rules-based order for AI by developing sets of ethical principles drawn from Euro-American sources that would be widely internationally acceptable. This is an approach that appears to envisage a tight relationship between leadership in setting the ethical principles and governance rules for a technology on the international stage; leadership in setting international technological standards informed by those principles; and leadership in the development and actual commercialization of that technology (Wright Citation2023: 53)—although this is a set of connections that remains somewhat elusive in the case of AI, particularly when it comes to commercialization, where Japan has lagged behind international competitors (Government of Japan Citation2019). Despite this approach to ethics being in one sense outwardly focused and aimed at international consensus-building, with key documents published in English and principles drawn almost exclusively from Euro-American ethical discourses, the intended results in terms of perceived success in global leadership (rather than substantive changes in industry practices) are perhaps primarily targeted towards a domestic audience. 
Underlying this drive seems to be the fear or anxiety among government bureaucrats of being left behind or outmanoeuvred by the US, China, or regional rivals, particularly in terms of international standards and trade (White Citation2022; Wright Citation2023; see, for example, Kishi Citation2011). Similarly, in-fighting between ministries over the “ownership” of AI may also help explain why the presentation of successive sets of ethical principles at international fora would be touted as a triumph in itself—again, not so much as a concrete path to reshaping industry practices as a way to claim territory in domestic politics (cf. Morris-Suzuki Citation1994: 217).Footnote10

Thirdly, there is an aspect of Japan’s approach to AI ethics that is closely aligned with science-fiction tropes and imaginaries. We find hints of this in the future-oriented JSAI ethical guidelines, with their allusions to Asimov and friendly robot characters such as Astro Boy and Doraemon, and the suggestion that future super-intelligent, autonomous AI agents should themselves adhere to the guidelines. We can also observe this in the government’s vision of Society 5.0 (and, previously, Innovation 25; see Robertson Citation2018): a society that does not yet exist but is imagined as a “super-smart” global utopia in which all barriers between people will have been removed thanks to futuristic technologies including AI. This approach has the benefit of not impeding research and development practices today because guidelines are only expected to be required for more advanced technologies in the future; in focusing on this sci-fi imaginary, present-day harms such as gender discrimination in the AI sector and beyond can be overlooked while engineers and developers are given carte blanche to build a brilliant future.Footnote11 As the JSAI guidelines were envisaged, this approach to ethics, consonant with the government’s view of ELSI, can be seen primarily as a method of communication to reassure, counter criticism, and smooth the way for public acceptance of AI technologies, rather than as a fulcrum for change.

The vision of Society 5.0 is central to these different ways of conceptualizing AI ethics. In many ways, the discourse about Society 5.0 is a direct continuation—even a rebranding—of techno-utopian discourses surrounding the information society (jōhō shakai; in other words, Society 4.0) since the end of the 1960s. Both are based on a technological determinism that assumes that society evolves in response to new digital technologies, as evidenced, for example, in the Social Principles document that claims that society must be adapted in order to be “ready” for AI rather than vice versa. In her 1988 book Beyond Computopia, Morris-Suzuki examined and deconstructed Japanese government and industry claims about the information society: that the implementation of information technologies and automation would lead to less manual work and shorter working hours, a transformation of human values and “blossoming of intellectual creativity,” and the solution of social problems, leading to a harmonious, cohesive and classless society “based on new ethical standards” (Citation1988: 15–16, quoting JACUDI Citation1972).Footnote12 She argued instead that automation was more likely to result in further economic precaritisation, deskilling, corporate and government surveillance and control, and social atomisation—predictions that appear remarkably prescient 35 years on (see, for example, Allison Citation2013; Zuboff Citation2019; Casilli Citation2023; McQuillan Citation2023; Wright Citation2023).Footnote13 Many of the promises of Society 5.0 are identical to or extend those of the information society in their scope and internationalism: imagining overcoming all barriers between people worldwide and creating a society in which “humans live in harmony with AI networks, and data/information/knowledge are freely and safely created, distributed, and linked to form a wisdom network” (Conference toward AI Network Society Citation2017: 3–4; original emphasis). 
The rhetoric surrounding Society 5.0 represents both a continuation and further inflation of a bubble economy of techno-hype even as the fulfillment of those futuristic visions recedes further into the distance.

This is not to suggest that Japan is unique in its approach to AI ethics, which reflects not only (by design) “Western” ethics discourses but also global trends including the commercialization of science (Mirowski Citation2011), strategic framings of AI by governments in terms of disruption and concern over national competitiveness (Bareis and Katzenbach Citation2022), and a reluctance by governments to regulate the technology for fear of stifling innovation, combined with widespread ethics washing (Wagner Citation2018). Yet particularly notable in the case of Japan are its retention of the ELSI approach and, relatedly, the yawning gap between the increasingly techno-utopian promises of “imagineers” (Robertson Citation2018) in government and industry and the reality of research and development practices and their results. This gap is evident in initiatives such as the government’s flagship ¥100bn (US$700m) Moonshot Research and Development Program, which was launched in 2019. First among the many futuristic goals of the program is Moonshot Goal 1: the “realization of a society in which human beings can be free from limitations of body, brain, space, and time by 2050”Footnote14 via the development of “cybernetic avatars,” echoing the description of Society 5.0 as one in which “national borders, industries, academia, governments, race, gender, nationality, age, political convictions and religion” will be overcome through technology. Yet all nine of the Moonshot program managers are Japanese men, belying any suggestion of gender equality, inclusiveness, or indeed the supposed internationalism of research and development that has increasingly been touted as a key government aim for science—values that have all been enshrined in the many sets of AI ethics principles described above and formally adopted by the Japanese government. 
Likewise, as Ema notes, “only 13.8 percent of the members that created the ‘Social Principles of Human-Centric AI’ in Japan were women” (Ema Citation2019),Footnote15 despite the Social Principles themselves calling for an AI society where barriers of gender would be entirely transcended: a stark illustration of ethics-washing that approaches a kind of gaslighting on a national scale.

Bringing these threads together, the Japanese government’s increasingly fantastical future visions of technology and society, and its ELSI and principles-based approach to AI and STI more broadly, can be viewed as mutually complementary. A techno-determinist imaginary of Society 5.0 in turn requires a reactive ELSI approach to mitigate or smooth the way for what are seen as the inevitable outcomes of advanced AI. At the same time, assuming that no harms are being perpetrated at present ignores and risks entrenching and indeed exacerbating (McQuillan Citation2023) the structural inequalities of current societal configurations and vulnerabilities to AI harms in Japan, continuing the administrative blindness to the very problems that the sets of ethical principles described above are nominally intended to address. Ethical risks are instead projected into the future as the results of AI technologies and applications that have not yet been developed—and so too are the realization of the progressive ethical principles that have been developed and promoted by the government, ironically serving to distance and forestall any need for critically and politically informed transformative change to research and development aims, practices, institutions, or regulations.

Acknowledgement

I would like to express my gratitude to the three expert interviewees quoted in this paper, who very kindly shared their experiences and views with me. Many thanks also to the anonymous reviewers for their helpful and constructive comments. The research on which this paper is based was conducted during my time on the PATH-AI project (www.path-ai.org), and I would like to thank the other members of the PATH-AI team with whom I had many fruitful discussions: Hiroshi Nakagawa, Takehiro Ohya, Satoshi Narihara, Osamu Sakura, David Leslie, Charles Raab, Fumi Kitagawa, Florian Ostmann, and Morgan Briggs.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the Economic and Social Research Council under Grant ES/T007354/1 (Principal Investigator: Dr David Leslie) and the Japan Science and Technology Agency.

Notes on contributors

James Wright

Dr James Wright is a Visiting Lecturer at Queen Mary University of London. He received his Ph.D. in anthropology and science and technology studies from the University of Hong Kong in 2018. His work focuses on the ethics and governance of AI, and his research interests also include the development and use of robots, AI, and other digital technologies for eldercare. His first book, entitled Robots Won't Save Japan: An Ethnography of Eldercare Automation, was published in 2023 by Cornell University Press.

Notes

1 In the same year that the JSAI ethics board was founded, the Acceptable Intelligence with Responsibility Study Group (AIR) was established by co-founders Ema Arisa and fellow JSAI ethics committee member Hattori Hiromitsu, also in response to the JSAI sexism controversy (AIR Citation2018; Kim Citation2020). AIR released a report entitled “Perspectives on Artificial Intelligence/Robotics and Work/Employment” in 2018 as part of a project providing the National Diet (the Japanese legislature) with research and information services.

2 This emphasis is reflected in the web address of JSAI’s ethics committee (www.ai-elsi.org). ELSI is discussed in more detail below.

3 Fujii Taiyo went on to become one of seven members of the Visionary Council that formulated the futuristic goals of the government’s flagship Moonshot Research and Development Program, discussed below.

4 The fact that many of the same Japanese experts were involved across multiple AI ethics initiatives is suggestive of the relatively small pool of people working on these issues.

6 For the minutes of these meetings, see: https://www.cas.go.jp/jp/seisaku/jinkouchinou (accessed 21 July 2023).

7 For alternative views of values rooted in Japanese philosophical traditions that could help drive international AI ethics discussions, see for example (Berberich, Nishida, and Suzuki Citation2020; McStay Citation2021; Suzuki Citation2021).

8 It is worth noting that the use of ELSA (the equivalent acronym to ELSI that is commonly used in Europe, standing for “ethical, legal and social aspects”) has recently re-emerged in the European context, including in relation to AI particularly in the Netherlands (Ryan and Blok Citation2023).

9 The decline in the use of ELSI/ELSA in North America and Europe was largely due to a recognition of these limitations (see, for example, Williams Citation2006; Balmer et al. Citation2015; Adsit-Morris et al. Citation2023).

10 Many thanks to one of the anonymous reviewers for this observation.

11 Aitken (Citation2023) and others have made the similar point that recent rhetoric about the existential threat of future artificial general intelligence serves to distract from the already existing real-world harms of current applications of AI.

12 Strikingly similar wording is used in the 2017 Advisory Board on AI and Human Society report, which states that societal changes brought about by AI will result in “a new sense of ethics” (Cabinet Office Citation2017: 17).

13 Japan’s Fifth Generation Computer Systems project that spanned the 1980s can be seen as the exception that proved the rule in this bait-and-switch of techno-utopian rhetoric and commercialised reality. Garvey describes how the project embodied the utopian vision of the information society at a brief moment when AI development in Japan was not dominated by military or commercial interests (Garvey Citation2019: 622). While the utopian rhetoric around this project and around Society 5.0 are extremely similar, the pivotal difference is that by the time the project ended in 1993, national science and technology research and development regimes and educational infrastructure had shifted decisively toward a neoliberal paradigm of privatisation (Morris-Suzuki Citation1994: 218–219; Mirowski Citation2011: 134). As Garvey notes, the result was that despite succeeding relative to its initial goals of state-sponsored but internationalist open science for the public good, the project was interpreted as a failure within this new research and development paradigm because it did not generate new commercially lucrative technologies, and the dream of the information society gave way to the reality of a neoliberal information economy.

14 See: https://www8.cao.go.jp/cstp/english/moonshot/sub1_en.html (accessed 21 July 2023). Other goals of the program include developing technology to regrow lost limbs, enable direct brain-brain communication, control the weather, and make possible human hibernation for interplanetary space flight.

15 The composition of the committee involved in the “Conference toward AI Network Society,” mentioned above, appears to have been even more lopsided, with 95 percent men and 5 percent women. The list of participants is available here: https://www.soumu.go.jp/main_content/000499679.pdf (accessed 21 July 2023).

References

  • Acceptable Intelligence with Responsibility Study Group (AIR). 2018. “Perspectives on Artificial Intelligence/Robotics and Work/Employment.” Accessed July 21, 2023. http://sig-air.org/wp/wp-content/uploads/2018/07/PerspectivesOnAI_2018.pdf.
  • Adsit-Morris, Chessa, Rayheann NaDejda Collins, Sara Goering, et al. 2023. “Unbounding ELSI: The Ongoing Work of Centering Equity and Justice.” The American Journal of Bioethics 23 (7): 103–105. doi:10.1080/15265161.2023.2214055.
  • Advisory Board on AI and Human Society. 2017. “Report on Artificial Intelligence and Human Society.” Accessed July 21, 2023. https://www8.cao.go.jp/cstp/tyousakai/ai/summary/aisociety_en.pdf.
  • Aitken, Mhairi. 2023. “The Real Threat from AI.” New Scientist 258 (3445): 21. doi:10.1016/S0262-4079(23)01186-7.
  • Allison, Anne. 2013. Precarious Japan. Durham: Duke University Press.
  • Anzai, Yuichiro. 2021. Comments at Second French-German-Japanese Symposium on Human-Centric AI, January 2021.
  • Balmer, Andrew S., Jane Calvert, Claire Marris, Susan Molyneux-Hodgson, et al. 2015. “Taking Roles in Interdisciplinary Collaborations: Reflections on Working in Post-ELSI Spaces in the UK Synthetic Biology Community.” Science & Technology Studies 28 (3): 3–25. doi:10.23987/sts.55340.
  • Bareis, Jascha, and Christian Katzenbach. 2022. “Talking AI Into Being: The Narratives and Imaginaries of National AI Strategies and Their Performative Politics.” Science, Technology, & Human Values 47 (5): 855–881. doi:10.1177/01622439211030007.
  • Berberich, Nicolas, Toyoaki Nishida, and Shoko Suzuki. 2020. “Harmonizing Artificial Intelligence for Social Good.” Philosophy & Technology 33: 613–638. doi:10.1007/s13347-020-00421-8.
  • Cabinet Office, Government of Japan. 2017. “Report on Artificial Intelligence and Human Society (Unofficial Translation)”. Advisory Board on Artificial Intelligence and Human Society. Accessed July 21, 2023. https://www8.cao.go.jp/cstp/tyousakai/ai/summary/aisociety_en.pdf.
  • Casilli, Antonio. 2023. “L’intelligenza artificiale renderà sempre più precario il mondo del lavoro [Artificial intelligence doesn’t destroy jobs, it precaritizes them].” Domani, 24 March 2023. Accessed July 21, 2023. https://www.casilli.fr/2023/03/24/artificial-intelligence-doesnt-destroy-jobs-it-precarizes-them-op-ed-domani-march-24-2023/.
  • Conference Toward AI Network Society. 2017. “Draft AI R&D Guidelines for International Discussion.” Accessed July 21, 2023. https://www.soumu.go.jp/main_content/000507517.pdf.
  • Conference Toward AI Network Society. 2019. “AI Utilization Guidelines (Tentative translation).” Accessed July 21, 2023. https://www.soumu.go.jp/main_content/000658284.pdf.
  • Council for Science, Technology and Innovation (CSTI). 2020. “The Basic Approach for the Moonshot Research and Development Program.” Accessed July 21, 2023. https://www.jst.go.jp/moonshot/en/application/202002/pdf/f-policy_en20200227.pdf.
  • Council for Social Principles of Human-Centric AI. 2019. “Social Principles of Human-Centric AI.” Accessed July 21, 2023. https://www8.cao.go.jp/cstp/stmain/aisocialprinciples.pdf.
  • Ema, Arisa. 2018. “What is the Role of an STS Researcher?” Paper Presented at the 4S Conference, Sydney, April 18.
  • Ema, Arisa. 2019. “Oxford Conference Challenge: Realizing Diversity, Inclusion, and Sustainability.” Accessed July 21, 2023. https://arisaema0.wordpress.com/2019/12/04/oxford-conference-challenge-realizing-diversity-inclusion-and-sustainability.
  • Frumer, Yulia. 2018. “Cognition and Emotions in Japanese Humanoid Robotics.” History and Technology 34 (2): 157–183. doi:10.1080/07341512.2018.1544344.
  • Gal, Danit. 2020. “Perspectives and Approaches in AI Ethics: East Asia.” In The Oxford Handbook of Ethics of AI, edited by Markus D. Dubber, Frank Pasquale, and Sunit Das, 607–624. Oxford: Oxford University Press.
  • Garvey, Colin. 2019. “Artificial Intelligence and Japan’s Fifth Generation: The Information Society, Neoliberalism, and Alternative Modernities.” Pacific Historical Review 88 (4): 619–658. doi:10.1525/phr.2019.88.4.619.
  • Government of Japan. 2016. “The 5th Science and Technology Basic Plan.” Accessed July 21, 2023. https://www8.cao.go.jp/cstp/kihonkeikaku/5basicplan_en.pdf.
  • Government of Japan. 2019. “National AI Strategy.”
  • Government of Japan. 2022. “AI Strategy 2022.” Accessed July 21, 2023. https://www8.cao.go.jp/cstp/ai/aistratagy2022en_ov.pdf.
  • Grant, Nico, and Karen Weise. 2023. “In A.I. Race, Microsoft and Google Choose Speed Over Caution.” The New York Times, 7 April 2023. Accessed July 21, 2023. https://www.nytimes.com/2023/04/07/technology/ai-chatbots-google-microsoft.html.
  • Group of Seven (G7). 2023. “G7 Hiroshima Leaders’ Communiqué.” Accessed July 21, 2023. https://www.g7hiroshima.go.jp/documents/pdf/Leaders_Communique_01_en.pdf.
  • Gygi, Fabio. 2018. “Robot Companions: The Animation of Technology and the Technology of Animation in Japan.” In Rethinking Relations and Animism: Personhood and Materiality, edited by Miguel Astor-Aguilera and Graham Harvey, 94–111. London: Routledge.
  • Iida, Yoichi. 2021. Comments at Meeting of the Council of Europe Ad hoc Committee on Artificial Intelligence (CAHAI), January 2021.
  • Inoue, Yusuke, Tsunakuni Ikka, Seiya Imoto, Yuichiro Sato, et al. 2019. Iryō ni okeru AI kanren gijutsu no rikatsuyō ni tomonau rinriteki hōteki shakaiteki kadai no kenkyū [Research on Ethical, Legal, and Social Issues Associated with the Use of AI-Related Technologies in Medicine]. Accessed July 21, 2023. https://mhlw-grants.niph.go.jp/project/27001.
  • Inoue, Yusuke, Tsunakuni Ikka, Seiya Imoto, Yuichiro Sato, et al. 2020. Iryō ni okeru AI kanren gijutsu no rikatsuyō ni tomonau rinriteki hōteki shakaiteki kadai no kenkyū [Research on Ethical, Legal, and Social Issues Associated with the Use of AI-Related Technologies in Medicine]. Accessed July 21, 2023. https://mhlw-grants.niph.go.jp/project/27612.
  • Inoue, Yusuke, Tsunakuni Ikka, Seiya Imoto, Yuichiro Sato, et al. 2021. Iryō AI no kenkyū kaihatsu jissen ni tomonau rinriteki hōteki shakaiteki kadai ni kansuru kenkyū [Research on Ethical, Legal, and Social Issues Associated with the Research, Development, and Practice of Medical AI]. Accessed July 21, 2023. https://mhlw-grants.niph.go.jp/project/145598.
  • Ito, Kenji. 2007. “Astroboy’s Birthday: Robotics and Culture in Contemporary Japanese Society.” Paper Presented at the East Asian Science, Technology and Society Conference, Taipei, August 7.
  • Japan Computer Usage Development Institute (JACUDI), Computerisation Committee. 1972. The Plan for an Information Society: A National Goal Towards Year 2000 (English Translation). Change, July/Aug. 1972, 31.
  • Japan Medical Association (JMA). 2018. “Dai IX ji gakujutsu suishin kaigi hōkokusho jinkōchinō (AI) to iryō [9th Council for the Promotion of Science Report: AI and Medicine].” Accessed July 21, 2023. https://www.med.or.jp/dl-med/teireikaiken/20180620_3.pdf.
  • Japanese Society for Artificial Intelligence (JSAI). 2017a. “Ethical Guidelines.” Accessed July 21, 2023. http://ai-elsi.org/wp-content/uploads/2017/05/JSAI-Ethical-Guidelines-1.pdf.
  • Japanese Society for Artificial Intelligence (JSAI). 2017b. “About the Japanese Society for Artificial Intelligence Ethical Guidelines.” May 3, 2017. Accessed July 21, 2023. http://ai-elsi.org/archives/514.
  • Jasanoff, Sheila, and Sang-Hyun Kim. 2009. “Containing the Atom: Sociotechnical Imaginaries and Nuclear Power in the United States and South Korea.” Minerva 47 (2): 119–146. doi:10.1007/s11024-009-9124-4.
  • Kim, Dongwoo. 2020. “Advancing AI Ethics in Japan: A Q&A with Dr. Arisa Ema, Professor at University of Tokyo”. Asia Pacific Foundation of Canada. Accessed July 21, 2023. https://www.asiapacific.ca/publication/advancing-ai-ethics-japan-qa-dr-arisa-ema-professor.
  • Kishi, Nobuhito. 2011. Robotto ga nihon wo sukuu [Robots Will Save Japan]. Tokyo: Bungeishunjū.
  • Kozuka, Souichirou. 2019. “A Governance Framework for the Development and use of Artificial Intelligence: Lessons from the Comparison of Japanese and European Initiatives.” Uniform Law Review 24 (2): 315–329. doi:10.1093/ulr/unz014.
  • Liberal Democratic Party (LDP). 2023. “The AI White Paper: Japan’s National Strategy in the New Era of AI (English Translation)”. LDP Headquarters for the Promotion of Digital Society Project Team on the Evolution and Implementation of AIs. Accessed July 21, 2023. https://www.taira-m.jp/japan%27s%20ai%20whitepaper_summary_etrans.pdf/.
  • Matsuo, Yutaka, Toyoaki Nishida, Koichi Hori, Hideaki Takeda, Satoshi Hase, Makoto Shiono, and Hiromitsu Hattori. 2015. “Jinkōchinōgakkai rinri iinkaino torikumi [Introduction to the JSAI Ethics Committee].” Jinkōchinō [Artificial Intelligence] 30 (3): 358–364.
  • McQuillan, Dan. 2023. Resisting AI: An Anti-Fascist Approach to Artificial Intelligence. Bristol: Bristol University Press.
  • McStay, Andrew. 2021. “Emotional AI, Ethics, and Japanese Spice: Contributing Community, Wholeness, Sincerity, and Heart.” Philosophy & Technology 34 (4): 1781–1802. doi:10.1007/s13347-021-00487-y.
  • Mikami, Koichi, Arisa Ema, Jusaku Minari, and Go Yoshizawa. 2021. “ELSI is Our Next Battlefield.” East Asian Science, Technology and Society: An International Journal 15 (1): 86–96. doi:10.1080/18752160.2021.1881279.
  • Ministry of Health, Labour, and Welfare (MHLW). 2017. “Hokeniryōbunya ni okeru AI katsuyōsuishin kondankai hōkokusho [Report from the Advisory Board for Promoting AI Utilization in the Healthcare Field].” Accessed July 21, 2023. https://www.mhlw.go.jp/file/05-Shingikai-10601000-Daijinkanboukouseikagakuka-Kouseikagakuka/0000169230.pdf.
  • Mirowski, Philip. 2011. Science-Mart: Privatizing American Science. Cambridge, MA: Harvard University Press.
  • Mittelstadt, Brent. 2019. “Principles Alone Cannot Guarantee Ethical AI.” Nature Machine Intelligence 1: 501–507. doi:10.1038/s42256-019-0114-4.
  • Morris-Suzuki, Tessa. 1988. Beyond Computopia. New York: Routledge.
  • Morris-Suzuki, Tessa. 1994. The Technological Transformation of Japan: From the Seventeenth to the Twenty-First Century. Cambridge: Cambridge University Press.
  • Munn, Luke. 2023. “The Uselessness of AI Ethics.” AI and Ethics 3: 869–877. doi:10.1007/s43681-022-00209-w.
  • Robertson, Jennifer. 2018. Robo Sapiens Japanicus: Robots, Gender, Family, and the Japanese Nation. Oakland: University of California Press.
  • Ryan, Mark, and Vincent Blok. 2023. “Stop Re-Inventing the Wheel: Or How ELSA and RRI Can Align.” Journal of Responsible Innovation 10 (1). doi:10.1080/23299460.2023.2196151.
  • Sakura, Osamu. 2022. “Robot and Ukiyo-e: Implications to Cultural Varieties in Human-Robot Relationships.” AI & Society 37: 1563–1573. doi:10.1007/s00146-021-01243-8.
  • Suzuki, Shoko. 2021. “East Asian/Japanese Perspective.” Presentation at UNESCO AI and Cultural Diversity Roundtable, 26 March, 2021.
  • Wagner, Ben. 2018. “Ethics as an Escape from Regulation: From ‘Ethics-Washing’ to Ethics-Shopping?” In Being Profiled: Cogitas Ergo Sum: 10 Years of Profiling the European Citizen, edited by Emre Bayamlioglu, Irina Baraliuc, Liisa Albertha Wilhelmina Janssens, and Mireille Hildebrandt, 84–89. Amsterdam: Amsterdam University Press.
  • Wang, Flora Y. 2020. “Cooperative Data Privacy: The Japanese Model of Data Privacy and the EU-Japan GDPR Adequacy Agreement.” Harvard Journal of Law & Technology 33 (2): 661–692.
  • White, Daniel. 2022. Administering Affect: Pop-Culture Japan and the Politics of Anxiety. Stanford: Stanford University Press.
  • Williams, Robin. 2006. “Compressed Foresight and Narrative Bias: Pitfalls in Assessing High Technology Futures.” Science as Culture 15 (4): 327–348. doi:10.1080/09505430601022668.
  • Wright, James. 2023. Robots Won’t Save Japan: An Ethnography of Eldercare Automation. Ithaca: Cornell University Press.
  • Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism. London: Profile Books.