
A Tough Balancing Act – The Evolving AI Governance in Korea

Pages 135-154 | Received 22 Feb 2023, Accepted 21 Apr 2024, Published online: 11 Jul 2024

Abstract

AI governance began to emerge as a focal point of discourse in Korea from the mid-2010s. Since then, multiple government and public bodies have released guidelines for AI governance, and various tech companies have announced their own AI ethics and governance principles. Beyond its borders, Korea has also actively participated in efforts to establish international norms and guidelines on AI governance. These developments were in part propelled by events that increased public awareness of the importance of trustworthy AI. This has also led to numerous legislative proposals that purport to increase technical and legal checks on such systems. At the same time, Korea has maintained its emphasis on the promotion of new technologies as a fundamental principle of legislative policy. Pursuing the dual goals of spurring innovation in the field of AI while at the same time ensuring its safety has led to confusion and, in some cases, incongruity within the AI governance framework in Korea. The Korean experience sheds light on factors that can influence the trajectory of a jurisdiction's AI governance, including public awareness of AI's potential for both benefit and harm, and the existing legislative and policy framework for governing technology.

1 Introduction

As artificial intelligence (AI) technology begins to permeate all aspects of our lives, pressing questions are being raised about the governance of AI systems. These include how to ensure the explainability of AI systems involved in making decisions that affect people’s lives, how to prevent AI systems from extending or perpetuating societal prejudices and biases, and whom to hold accountable when things go wrong (Coeckelbergh 2020; Crawford 2021; O’Neil 2017). In response, there has been a groundswell of interest in ethics principles and governance frameworks that purport to deal with such challenges, and there is now a pivot toward more concrete legislation that would directly address these concerns. A notable example of the latter is the Artificial Intelligence Act (AI Act) adopted by the European Union (EU), which seeks to regulate a broad spectrum of AI systems in alignment with EU values and its fundamental rights-based approach to regulating new technologies (Council of the European Union 2023).

The Republic of Korea (Korea) has also allocated significant policy resources to ensure and bolster the trustworthiness of AI systems. As we describe in more detail later, a national AI strategy was announced in 2019, followed by the government’s introduction of ethical standards in 2020, which set forth the essential components of trustworthy AI and key strategies to achieve them.[1]

Against this backdrop, we take a closer look at how AI governance has evolved in Korea. And in chronicling the country’s endeavor to ensure AI systems bring tangible benefits that society seeks through trustworthy processes and outcomes, we attempt to shed light on factors that can influence the trajectory of a jurisdiction’s AI governance. We also identify several critical events that significantly raised public awareness of such issues, which shifted the discourse and provided the impetus to tackle related challenges.

Before delving further into the topic, we should clarify what we mean by AI governance—a term used throughout this article. Some have used the term “governance” to denote a decentralized, bottom-up, and participatory problem-solving method in contrast to the traditional approach of government (Rhodes 1996). In the context of AI, however, the term has been more widely used as a concept that covers an array of means to address the harms or risks posed by AI, including regulatory interventions and legislation, rather than a narrowly defined democratic problem-solving method (Taeihagh 2021). We will also use the term in this broader sense to mean guardrails, both governmental and non-governmental, that are deployed to ensure the trustworthy and responsible development and use of AI.

2 The Evolving Discourse on AI Governance in Korea

Korea has been an active participant in AI governance discussions at the international level, including the Organisation for Economic Co-operation and Development (OECD), the United Nations Educational, Scientific and Cultural Organization (UNESCO), and the AI Safety Summit. These efforts are in part an extension of domestic policy discussions led by the Korean government since the mid-2010s. While the government took the initiative in laying the groundwork for developing AI policy, academia, civil society, and industry have all made important contributions to AI governance discourse in Korea. In the following section, we provide an overview of the evolving discourse in more detail, covering both the public and private sectors. We show that: (i) Korea has often placed greater emphasis on the development and promotion of AI over the prevention of harm, at least initially, and (ii) public authorities have played an influential role in shaping the discourse.

2.1 Progress in the Public Sector

2.1.1 Contributions at the Domestic Level

2.1.1.1 Early Discussions (2016–2018)

Around 2016, the Korean government commenced an active review of policies concerning “intelligent information technologies.” The term encompassed a host of emerging technologies including AI. In December that year, a group of government ministries led by the Korean Ministry of Science and ICT (MSIT) set forth an interdepartmental master plan for transitioning to an intelligent information society (Relevant Ministries of Korea 2016). The plan pointed to the importance of intelligent information technologies,[2] including AI, for Korea’s digital transformation. And more pertinently, it identified setting up a proper ethical and legislative framework for such technologies as one of the main tasks to be undertaken. In the following year, the Presidential Committee on the Fourth Industrial Revolution (PCFIR) was established to implement the plan.

Later, in 2018, as part of an effort to implement the plan, the MSIT, the Information Culture Forum, and the National Information Society Agency jointly released ethics guidelines and a charter for an intelligent information society (Information Culture Forum, National Information Society Agency 2018). The guidelines established a set of common principles for intelligent information technologies, dubbed the PACT principles (Publicness, Accountability, Controllability, and Transparency), and identified developers, providers, and users as those responsible for implementing them. The PACT principles, which stemmed from a government-led public-private collaboration, were Korea’s first official promulgation of AI-related ethics principles in the field.[3]

However, up to that point, the governance of AI itself was less of a focal point of the discourse. The early stages of AI policy in Korea tended to focus more on the development and growth of the technology. This focus continued with the Korean government’s emphasis on “D·N·A”—data, network, and AI—as the core of its industrial policy for the fourth industrial revolution, with AI viewed as one of the primary drivers (Relevant Ministries of Korea 2020b).[4] Accordingly, trustworthy AI as a policy objective tended to receive less attention.[5] At the time, policymakers were more concerned that interventionist and pre-emptive regulation would stifle innovation in the field.[6]

2.1.1.2 National Strategy for Artificial Intelligence (2019)

The National Strategy for Artificial Intelligence (Relevant Ministries of Korea 2019b), announced in December 2019, was the first government initiative to specifically address the societal impact of AI.[7] Under the slogan “Beyond IT Power towards AI Power,” the National Strategy aimed to both enhance economic vitality and address social problems by setting strategies and tasks in the following three areas: (i) building an AI ecosystem; (ii) strengthening AI utilization; and (iii) realizing human-centered AI. To realize these objectives, the role of the PCFIR was redefined as an AI-centered national committee, and detailed action plans were established jointly with related ministries. Notably, one of the suggested strategies emphasized that Korea should establish standards for ensuring trustworthy AI in line with global norms. Thereafter, discussions on AI governance began to emerge as a stand-alone national agenda. When suggesting the need for trustworthy AI standards, however, the National Strategy did not lay out specific elements or methods of AI governance. Such details would begin to take shape with the release of the country’s own ethical standards in the following year.

2.1.1.3 Human-Centered Artificial Intelligence Ethical Standards (2020)

After the release of the National Strategy, an expert group was organized to establish trustworthy AI standards. The group analyzed major trustworthy AI principles both within and outside the country and published a draft document suggesting how to implement the principles. The group then collected opinions from various stakeholders for a year. On 23 December 2020, at the plenary meeting of the PCFIR, an interagency guideline named the Human-centered Artificial Intelligence Ethical Standards (HcAIES) was jointly set forth by all ministries to propose directions for developing human-centered AI (Relevant Ministries of Korea 2020a). The HcAIES consisted of three basic principles and ten key requirements that should be applied to the entire process of developing and utilizing AI. While not legally binding, the standards were also designed to serve as a forum for various stakeholders to discuss emerging governance issues in the lifecycle of AI.[8]

The principles and requirements set forth in the HcAIES were relatively conventional in that they essentially tracked guidance promulgated by other public and private entities around the world. What set the HcAIES apart from similar guidance was its clear focus on fostering the technology and related industry, together with an explicit endorsement of a self-regulatory approach for that purpose. The HcAIES stated its aim as “creating a self-regulatory environment in the industrial and economic sectors, avoiding constraints on both AI R&D and industrial growth, while refraining from imposing unfair burdens on companies pursuing legitimate profits.” The fact that the government explicitly stipulated the development of AI technology and industry as a primary goal of its trustworthy AI standards indicated a promote-first, regulate-later approach, which has continued as an undercurrent of AI governance in Korea.

2.1.2 Contributions at the International Level

As noted above, Korea has actively participated in international deliberations on AI governance. One of Korea’s most noteworthy contributions to global AI governance is its engagement at the OECD regarding its AI Principles. In May 2019, the OECD put forth its Recommendation of the Council on Artificial Intelligence, outlining five principles for AI. This stands as one of the earliest international guidelines crafted to enhance the robustness, safety, fairness, and trustworthiness of AI (OECD 2019). Korea played an active role in negotiating and drafting the principles. It chaired the OECD Expert Group on AI (AIGO), which brought together over fifty experts from different disciplines and sectors, including government, industry, civil society, trade unions, the technical community, and academia, to draft the principles (OECD, AI Policy Observatory 2019).

In June 2020, an international multi-stakeholder initiative called the Global Partnership on AI (GPAI) was launched to implement the OECD AI Principles. Korea was one of the fifteen founding members, along with Australia, Canada, France, Germany, India, Italy, Japan, Mexico, New Zealand, Singapore, Slovenia, the EU, the United Kingdom (UK), and the United States (US) (MSIT 2020).[9]

Further, in 2021, Korea participated in the OECD Ministerial Council Meeting as vice-chair of the Council, leading discussions on international cooperation to establish global policies for the fair and inclusive distribution of the benefits of digital technologies, including AI. Prior to the meeting, the MSIT also hosted a joint seminar with the OECD titled “The OECD Principles on Artificial Intelligence, Progress over the Past Two years and Future Directions,” emphasizing the importance of trustworthy and human-centric AI (OECD WebTV 2021).

In May 2022, Korea became one of the eleven vice-chairs for the OECD’s Working Party on AI Governance (WPAIGO), which was established to assist the OECD Committee on Digital Economy Policy in facilitating and supporting its work on AI policy (OECD 2022). Korea participated in the Committee and all five working parties under it, and is expected to continue its participation in the global discussions on AI governance and other relevant areas such as network infrastructure and data governance (MSIT 2022a).

Another notable contribution by Korea involved UNESCO. The UNESCO Recommendation on the Ethics of Artificial Intelligence, which was adopted by 193 Member States at UNESCO’s General Conference in November 2021, is regarded as one of the most comprehensive frameworks for global AI policy, addressing various emerging issues (UNESCO 2021). As in the case of the OECD AI Principles, Korea played an active role in drafting and developing the Recommendation. For example, in July 2020, Korea organized the Virtual Regional Consultation for the Asia and Pacific Region on the first draft of the Recommendation on the Ethics of Artificial Intelligence jointly with UNESCO, where participants engaged in discussions on values, principles, and policy tasks regarding the Recommendation (UNESCO 2020). And in December 2022, Korea participated in the UNESCO Global Forum on the Ethics of Artificial Intelligence, the first international meeting to take place after the adoption of the Recommendation. At the forum, Korea engaged in discussions on the challenges and future directions of AI regulations and frameworks, introduced its 2020 HcAIES, and emphasized the importance of and need for closer global cooperation to build international standards for trustworthy AI (MSIT 2022b).

Currently, a particular area of focus for AI governance involves the emergence of generative AI and large-scale foundation models, exemplified by ChatGPT. The rapid advancement in the capabilities of large-scale models and the purported strides made toward achieving artificial general intelligence have heightened concerns over the associated risks to society. In response, representatives from 28 countries including Korea convened in the UK in November 2023 for the inaugural AI Safety Summit, which culminated in the issuance of the Bletchley Declaration. And it was agreed that Korea would co-host the next global summit in 2024 (Office of the President, Korea 2023).

One might question whether Korea’s active participation in global AI governance discussions created tensions with its early emphasis on the promotion of AI technology over overt regulation. However, much of the discourse at the international level so far has yielded recommendations that primarily rely on voluntary compliance, without an effective enforcement mechanism to sanction deviant behavior. This has put less pressure on Korea to fundamentally change its stance on AI policy. In fact, Korea’s active participation provided it with the opportunity to voice its doubts about the propriety of preemptively placing stringent obligations on the AI industry to prevent risks that some viewed as ill-defined. This does not mean, however, that Korea has been uninterested in ensuring the safe development and use of AI through normative guidance. As we discuss later in this article, multiple legislative proposals to strengthen AI governance and protect users from AI-related harms have come under consideration since 2021.

2.2 Progress in the Private Sector

While there has been criticism of the direction and efficacy of the Korean government’s initiatives on AI policy, particularly from civil society, the government has played an important role in facilitating and shaping discourse on AI governance in the private sector.[10] In the following sections, we explore how AI governance discussions evolved in the private sector and how such discourse was influenced by government policy.

2.2.1 Contributions by Academia and Civil Society

From the mid-2010s, multiple initiatives have been launched to drive and encourage academic research on AI governance and raise public awareness of related issues. And civil society has been an active participant in these discussions. For example, in November 2020, the Consumers Union of Korea published its Consumer Bill of Rights in the Age of AI, emphasizing the importance of human-centered development of AI technologies. Developed by experts from various fields over a year of rigorous discussions, it detailed eight principles for consumer rights protection in the age of AI: (i) inclusiveness; (ii) fairness; (iii) non-discrimination; (iv) safety and trustworthiness; (v) transparency; (vi) personal data protection; (vii) accountability; and (viii) ensuring relief for damage (Consumers Union of Korea 2020). Further, in May 2021, 120 NGOs across the country issued a joint declaration demanding that the government and the National Assembly devise and implement AI policies and legislation that would guarantee human rights, safety, and democracy (120 Civic Groups 2021). Such actions from academia and civil society were in part prompted by critical views of government policy on AI governance and were an effort to suggest better alternatives to those proposed by the government.

2.2.2 Contributions by Industry

Ensuring the trustworthiness of AI systems is now emerging as a focal point in discussions surrounding corporate governance and social responsibility globally (Felländer 2022). For companies that develop and/or rely on AI systems for business-critical purposes, the proper governance of AI has become an operational issue with real world ramifications and costs. Accordingly, more and more companies in Korea are investing time and resources to avoid AI-driven harmful outcomes that erode consumer trust in their services or products, putting in place the necessary internal structures and processes to do so. As with the global discourse surrounding AI governance, these efforts initially took the form of technology industry leaders establishing ethics principles for the development and use of AI. The focus has since shifted towards implementing the espoused principles in day-to-day operations, with smaller companies and start-ups also beginning to realize the benefits of proactively addressing such challenges rather than falling behind.[11] Notable examples of AI ethics principles or governance frameworks announced by the tech industry in Korea include the following:

  • Kakao, one of Korea’s largest digital platforms and the operator of the country’s leading social communication and messaging app, announced the Kakao Algorithm Ethics Charter in January 2018, the industry’s first formal policy for the ethical development of AI services and products. More recently, Kakao established an internal body named the Technology Ethics Committee, tasked with ensuring that Kakao’s services comply with AI norms, systematically evaluating potential risks, and strengthening the trustworthiness of its algorithmic systems (Artificial Intelligence Times 2022b).

  • Samsung Electronics announced its own set of AI Ethics Principles, focusing on fairness, transparency, and accountability, in 2019. Samsung has also established detailed internal guidelines to put the principles into practice. The company has further announced plans to provide training programs for employees to raise company-wide awareness of AI ethics (Samsung Electronics 2019).

  • NAVER, another leading digital platform that operates Korea’s most popular portal and search engine, published its AI Ethics Principles in February 2021 in collaboration with the Seoul National University Artificial Intelligence Policy Initiative (SAPI) (NAVER 2021a). Later that year, the company issued its inaugural AI report with SAPI, detailing the principles and publicly sharing NAVER’s experiences and the challenges it faced in formulating them (NAVER 2021b). In December 2022, a follow-up report was released, explaining specific case studies of how the principles have been implemented in various services and products by NAVER (NAVER 2022).

  • SK Telecom, Korea’s leading mobile carrier, announced in May 2021 a set of seven values that the company will pursue in developing human-centered AI technologies (SK Telecom 2021b). The company has further developed a checklist to apply these seven values in every product lifecycle, and has updated its risk management framework to enable an immediate response to any pertinent ethical issues (SK Telecom 2021a). Furthermore, in January 2024, it announced the launch of an AI governance system to establish standards, define roles and dedicated organizations, and implement processes for effective governance of AI (Artificial Intelligence Times 2024).

  • LG has also actively engaged with the AI governance discourse through its research branch, LG AI Research. In August 2022, LG announced its AI Ethics Principles consisting of five key pillars (LG AI Research 2023), and has since launched a new AI ethics group within the company. For this purpose, LG AI Research set up an AI ethics inspection task force to educate its employees on AI ethics and to identify in advance potential ethical issues that may arise during AI research and development. The task force, in collaboration with the AI Ethics Working Group newly established by LG AI Research and 10 LG affiliates, plans to formulate detailed AI ethics guidelines for each affiliate. Recently, LG AI Research signed a letter of intent with UNESCO to implement and disseminate AI ethics (UNESCO 2023).

While businesses have taken their own initiative in developing internal guardrails and processes for trustworthy AI, these efforts have been aided by a government policy that has encouraged self-regulation. The Korean government has further supported such efforts by beginning to issue sector-specific trustworthy AI development guidebooks for use by the relevant industries (MSIT, Telecommunications Technology Association 2023). Similarities between the internal guidelines and practices adopted by Korean companies and those set forth in government guidance underscore the influence government policy has had on AI governance within the industry.

As described above, AI governance in Korea has seen meaningful contributions from both the public and private sectors. However, the account we have given also shows the outsized role that the government has played so far.[12] Even when encouraging self-regulation, the Korean government has often proactively nudged the industry towards its policy goals through a mix of incentives, administrative guidance, and direct dialogue with relevant stakeholders. But for government policy to be truly effective in such a hortatory role, support and buy-in from the general public are also critical. As discussed in more detail below, Korea experienced a series of events that shifted public opinion and had a significant impact on the direction of AI governance discourse and policy for the country.

3 The Turning Points in Korea's AI Governance Discourse

Among the numerous milestones that shaped AI governance in Korea, there were three significant events that served as turning points in the relevant discourse over the past two decades: (i) the Robot Ethics Charter of 2007, (ii) the AlphaGo match in 2016, and (iii) the Lee Luda scandal in 2020. It was the last two events that jolted public awareness about the hopes and risks of AI’s transformative nature for Korean society.

3.1 The Forgotten Charter on Roboethics

The roots of Korea’s trustworthy AI discourse can be traced back to 2007 (Shim 2007), when the government drafted the Robot Ethics Charter pursuant to Article 18 of the Intelligent Robots Development and Distribution Promotion Act (IRDDPA) that would soon come into effect. At the time of drafting, “roboethics,” a term coined by Gianmarco Veruggio in 2002, had become a topic of tech policy discussions. The World Robot Declaration had been announced at the 2004 World Robot Fair in Fukuoka, Japan (International Robot Fair 2004), followed by the European Robotics Research Network’s release of its Roboethics Roadmap in 2006 (Veruggio 2006).[13] The Korean government hoped that the charter’s enactment would help Korea’s robot industry by establishing an ethical foundation for future growth (Electronic Times 2007). Accordingly, the drafting of the charter aligned with the promote-first characteristic of Korean AI governance mentioned above.

The Robot Ethics Charter was arguably one of the first examples of AI-related normative guidance in the form of a charter worldwide. However, during the decade that followed, there were limited, if any, serious discussions on the draft itself. Despite the widespread installation and use of robots in Korea,[14] as well as a concerted effort by successive governments to increase the use of intelligent robots in industry (Korea Institute for Robot Industry Advancement 2019), robot governance failed to garner sufficient attention to propel further policy and legislative action. An important reason may have been that industrial robots, rather than those that directly face consumers such as care robots, have accounted for the vast majority of all robots used in Korea.[15] The limited opportunities for the public to interact with robots in their daily lives may have somewhat impeded the development of robot governance. Additionally, as AI came to the forefront of public imagination, topics that could have been discussed in the context of robot governance ended up being subsumed under the discourse of AI governance.

The IRDDPA, which provided the legal mandate for the charter, had a sunset clause that expired in 2018, ten years after its enactment. Sensing that time was running out, some scholars tried to harness the heightened social interest in the fourth industrial revolution to generate support for adopting the draft charter, even releasing a revised version of the draft in 2018 (Choi et al. 2019). The IRDDPA ended up being extended for another ten-year term, until 30 June 2028. However, to this day the charter still remains in draft form with no formal plan for adoption.

3.2 The Shocking Triumph of AlphaGo

Public interest in emerging technologies such as big data analytics, the Internet of Things, and AI received a boost in Korea when the idea of an impending fourth industrial revolution was popularized at the 2016 Davos Forum. It was a game of go in March 2016, however, pitting the reigning Korean champion Lee Sedol against DeepMind’s AlphaGo, that jolted the public’s perception of AI’s power to transform society.

Go is a popular board game in Korea, and Lee Sedol had been a dominant player on the professional stage both in Korea and internationally for over a decade. Although IBM’s Deep Blue had mastered chess in 1997, the general view at the time was that AI was not yet ready to beat one of the world’s best human professional go players because of the game’s complexity, given the far greater number of possible moves compared to chess. When AlphaGo trounced Lee, a go world champion, with a score of 4:1, the shock felt by the Korean public was palpable and led to fascination with the technology’s potential.

It would not be an exaggeration to say that the AlphaGo match was one of the most significant triggers for full-fledged discussions on AI governance in Korea. While there was some trepidation, the overall impact on the public’s perception of AI was positive, and this was also reflected in policy documents following the match (Policy Briefing 2016). The public would only come face-to-face with the technology’s harmful potential through the Lee Luda incident five years later.

3.3 The Scandalous Chats by Lee Luda

Lee Luda was a BERT-based conversational agent designed to have a twenty-year-old female college student’s persona. Its developer, ScatterLab, trained a natural language understanding (NLU) model on real human conversations, which retrieved responses from a session database consisting of sentences of dialogue from women in their twenties. After attracting hundreds of thousands of users in less than a month following its launch in December 2020, Lee Luda came under criticism for its generation of toxic comments involving, among other things, gender discrimination and bias, and unconsented disclosure of personal data. ScatterLab first suspended the service, and then eventually decided to scrap the model amid growing controversy over Lee Luda’s problematic comments.
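To make the retrieval architecture concrete, the following is a minimal sketch of how such a system selects a response. The sentence encoder, the toy session database, and the cosine-similarity matching are our illustrative assumptions, not ScatterLab's actual (non-public) implementation:

```python
# Sketch of retrieval-based response selection in the style described above.
# Assumption: a generic sentence encoder stands in for the BERT-based NLU model.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Hypothetical session database: whole sentences harvested from past dialogues.
session_db = [
    "I love rainy days, they make me want to stay in bed.",
    "That sounds rough, do you want to talk about it?",
    "Haha, same here! What did you have for lunch?",
]
db_embeddings = encoder.encode(session_db, normalize_embeddings=True)

def reply(user_message: str) -> str:
    """Return the stored sentence most similar to the user's message."""
    query = encoder.encode([user_message], normalize_embeddings=True)
    scores = db_embeddings @ query.T  # cosine similarity, as vectors are unit-norm
    return session_db[int(np.argmax(scores.ravel()))]

print(reply("It's been pouring all day."))
```

The property that matters for the privacy discussion below is that every reply is a verbatim sentence from the training corpus, so any personal data left in that corpus can resurface in users' chats.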

In January 2021, the Personal Information Protection Commission (PIPC), the country’s privacy authority, launched a probe into ScatterLab’s potential violation of the Personal Information Protection Act (PIPA). Following the investigation, in April 2021 the PIPC imposed a host of corrective measures and administrative fines on ScatterLab. The company was found to have collected personal information, such as user IDs and/or messaging app conversations, through the company’s other services without due consent from the data subjects. The company had then used the information as training data for the NLU model and session database in developing Lee Luda in violation of PIPA.

Under a 2020 amendment to PIPA, ScatterLab was permitted to process pseudonymized data without the consent of data subjects for scientific research purposes. However, the PIPC determined that neither the training of the NLU model nor the retrieval of responses from the session database qualified as legitimate processing of pseudonymized data. This was both because ScatterLab had not made sufficient efforts to pseudonymize personal data contained in the dialogues, and because its retrieval of entire sentences of dialogue from the session database and subsequent broadcasting of such data through a chatbot did not constitute a scientific research purpose (Personal Information Protection Commission of Korea 2021).
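To illustrate why the pseudonymization finding mattered, here is a hypothetical sketch of pattern-based redaction of chat logs; the patterns and placeholder tokens are our assumptions, not ScatterLab's pipeline. The PIPC's point was that measures of this kind had not been sufficiently applied, and a retrieval model that repeats whole sentences verbatim can still expose identifiers that such patterns miss:

```python
import re

# Illustrative redaction patterns only; a real pipeline would need far broader
# coverage (names, addresses, account handles, and so on).
PATTERNS = {
    "PHONE": re.compile(r"01[016789]-?\d{3,4}-?\d{4}"),  # Korean mobile numbers
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "RRN":   re.compile(r"\d{6}-\d{7}"),                 # resident registration numbers
}

def pseudonymize(utterance: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        utterance = pattern.sub(f"<{label}>", utterance)
    return utterance

print(pseudonymize("Call me at 010-1234-5678 or write to kim@example.com"))
# -> "Call me at <PHONE> or write to <EMAIL>"
```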

Following the PIPC's decision, ScatterLab developed a newly upgraded model to resume its service. In early 2022, the company announced new ethics principles, privacy policies, and abusive response policies for its AI chatbot service. During a series of beta tests, it introduced an abuse detection model and a penalty system for safe conversations, with the beta model reportedly attaining a 99.78% level of non-toxicity on average (ScatterLab 2022). In October 2022, ScatterLab launched Lee Luda 2.0, which is based on a generative model rather than a retrieval model (in other words, it assembles words to generate responses rather than retrieving whole sentences) and is now more likely to qualify for the scientific research purpose requirement under PIPA.
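The abuse-handling mechanism ScatterLab described can be pictured as a gate around the conversation loop. The thresholds, strike limit, and stand-in models below are hypothetical, since the company has not published its implementation:

```python
from dataclasses import dataclass
from typing import Callable

ABUSE_THRESHOLD = 0.5      # hypothetical cut-off for flagging a user message
TOXICITY_THRESHOLD = 0.5   # hypothetical cut-off for withholding a bot reply
PENALTY_LIMIT = 3          # hypothetical strikes before suspending a user

@dataclass
class UserSession:
    user_id: str
    penalties: int = 0
    suspended: bool = False

def handle_turn(session: UserSession, user_msg: str,
                generate: Callable[[str], str],
                toxicity: Callable[[str], float]) -> str:
    """One chat turn: abuse detection plus penalties on the user's side,
    and a toxicity filter on the bot's own reply."""
    if session.suspended:
        return "[service suspended]"
    # Abuse detection on the user's message feeds the penalty system.
    if toxicity(user_msg) >= ABUSE_THRESHOLD:
        session.penalties += 1
        if session.penalties >= PENALTY_LIMIT:
            session.suspended = True
        return "[warning: abusive message detected]"
    # Safety filter keeps the bot's own replies non-toxic.
    reply = generate(user_msg)
    return reply if toxicity(reply) < TOXICITY_THRESHOLD else "[response withheld]"

# Toy usage with stand-in models:
toy_toxicity = lambda text: 1.0 if "badword" in text else 0.0
toy_generate = lambda msg: "nice to meet you!"
session = UserSession("user-1")
print(handle_turn(session, "hello there", toy_generate, toy_toxicity))  # normal reply
print(handle_turn(session, "badword", toy_generate, toy_toxicity))      # flagged, strike one
```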

Lee Luda was the first case in which the Korean authorities took action to correct a failure of AI governance as a violation of current law. The incident received extensive media coverage, which raised public awareness of the harms and risks associated with the technology. But more than anything else, it served as a clarion call to the Korean tech industry, prompting companies large and small to think about AI governance and re-evaluate their business practices to meet public demands for responsibility and accountability in the development and use of AI. So far, the response from the industry has tended to take the form of the individual company initiatives described earlier rather than uniform sector- or industry-wide self-regulation, which is perhaps unsurprising given the market upheaval triggered by the spread of AI technology. The scandal and the resulting public outcry also prompted the Korean government to put more emphasis on the trustworthy development and use of AI. However, this did not imply a fundamental shift away from a policy that preferred a flexible and tailored self-regulatory approach over direct and pre-emptive regulation. As we discuss further below, the government’s efforts to pursue the competing objectives of spurring innovation in AI while at the same time ensuring its safety have led to dissonance among policy priorities and, in some cases, incongruity within governance proposals.

4 Charting a Future Course for AI Governance in Korea

In general, Korea has taken a cautious approach when it comes to regulating AI, being concerned with impeding the technology’s development or inadvertently picking winners through government regulation. Proposals and actions for AI governance so far have primarily relied on voluntary commitments to self-determined principles and guidelines, rather than direct legislation or regulatory action. However, doubts about the efficacy of relying solely on tech companies to self-regulate (Floridi 2021), fuelled further by the Lee Luda experience, have led to legislative proposals that would entail more scrutiny and stricter regulation of AI. The increased interest in regulating AI has yet to culminate in actual legislation, however, in part reflecting uncertainty and disagreement over the warranted scope, content, and timing of AI regulation.

4.1 Competing Legislative Objectives for AI Governance

Following the AlphaGo match with Lee Sedol in 2016, several legislative bills were put forth in the Korean National Assembly. Some of the bills, however, were vague on what they set out to achieve or stated multiple objectives that conflicted with one another.[16] Others appeared to lack the proper legislative design and content to achieve their stated goals. There were also concerns that the bills conflicted with existing laws. As the discourse surrounding AI governance matured, new legislative bills, listed in Table 1, emerged with a sharper focus on AI. But, as seen from their titles, the bills tended to put more emphasis on the fostering and development of AI technology than on its responsible governance, following in the steps of the government’s prior ICT policy. None of the bills passed muster in the legislature, and they all lapsed with the conclusion of the National Assembly’s 20th session in May 2020.

Table 1 Legislative bills proposed during the 20th National Assembly

During the first half of the following 21st National Assembly session, multiple legislative bills that dealt with AI were again set forth, as listed in Table 2, outnumbering those proposed in the entire previous session. And in September 2022, the Korean government, which under the Korean Constitution can independently propose legislative bills, announced plans to propose a framework act on AI as part of its Korea Digital Strategy to create a safe and inclusive digital society built on the initiative and innovation of the private sector (Relevant Ministries of Korea 2022). Legislators have also taken notice of the EU's AI Act, and its governance design and content are receiving close scrutiny in Korea.

Table 2 Legislative bills proposed during the 21st National Assembly

Among the legislative bills listed in Table 2, the two most recent—the Act on Artificial Intelligence and the Act on Algorithms and Artificial Intelligence—are notable in that they pursue the dual goals of (i) promoting and attaining AI’s potential for economic growth and benefits on one hand, while at the same time (ii) establishing the necessary guardrails to avoid or mitigate AI’s potential to generate or amplify societal harms and risks.[17] At first glance, setting these two imperative goals as legislative objectives seems straightforward and uncontroversial. Despite sharing similar titles, however, the two bills exhibit fundamental differences in how they would approach these objectives. Such differences reflect the diverging views on AI governance following the Lee Luda incident in late 2020.

The former bill (National Assembly of Korea 2021b) clearly prioritizes the first objective. Chapters 2 and 3 of the bill, which account for nearly three-quarters of its entirety, focus on the development and use of AI. Chapter 4 of the bill purports to address issues related to trustworthy AI. But the few provisions that deal with AI governance are somewhat abstract, and it is unclear whether they would establish concrete and actionable legal obligations for the bill’s subjects. The bill only requires that AI developers “endeavor” to achieve trustworthiness in AI and stops short of stipulating sanctions for failing to comply with the provisions related to AI governance. The bill thus tracks the cautious approach to regulating AI and is more in line with the prior ICT policies.

In comparison, the latter bill (National Assembly of Korea 2021a) is strikingly different in its approach. Article 5 states inclusiveness and non-discrimination, trustworthiness and transparency, and the fair guarantee of rights and relief for damages as the basic principles of AI development. Article 6 stipulates specific legal obligations on the part of AI developers and business operators. And, comparable to the EU’s AI Act, Chapters 4 and 6 of the bill set forth special requirements for “high-risk AI,” such as establishing a deliberative committee, imposing duties on developers and business operators, and granting rights to users. The bill defines “high-risk AI” as AI that has a significant impact on lives, safety, and fundamental rights, as categorized in Table 3.

Table 3 High-risk AI categories in the proposed Act on Algorithms and Artificial Intelligence

Subsequent legislative proposals have continued to reflect these competing objectives, sometimes creating incongruity within the bill itself.

In 2022, the bill on Fostering and Securing Trustworthiness of the Artificial Intelligence Industry (National Assembly of Korea 2022) was proposed, which retained the concept of high-risk AI. However, the bill took a softer approach than the proposed Act on Algorithms and Artificial Intelligence by limiting the obligations of high-risk AI operators to notification requirements, and placing on the government the responsibility for establishing infrastructure to enhance trustworthiness. More significantly, the bill explicitly stated “preferential permission and ex-post regulation” as a fundamental tenet of AI governance, which signaled a willingness to tolerate and take on acceptable risks. The bill’s wording that stipulated a preference for permitting the use of AI technology and dealing with adverse effects later over pre-emptively blocking its deployment came directly from Article 5-2 of the Framework Act on Administrative Regulation of 2018. This policy position was carried over into a new legislative bill that was introduced in 2023,[18] which purported to consolidate the various pending bills including the three discussed above. The consolidated bill, however, came under heavy criticism from both industry, which objected to the bill’s incorporation of stringent high-risk AI requirements, and civil society groups, which strongly opposed the bill’s permissive stance and preference for ex-post regulation. Consequently, the bill has failed to progress within the National Assembly and its fate remains uncertain.

While the consolidated bill stalled in the National Assembly, yet another set of bills was proposed. The bill on Artificial Intelligence Accountability (National Assembly of Korea 2023a) stipulates a wide range of duties for business operators and rights for users. And the definition of high-risk AI in the bill remains broadly consistent with the proposed Act on Algorithms and Artificial Intelligence, while the scope and content of the duties of high-risk AI operators have been tightened further. However, the bill limits the sanctions for breaching such duties to civil penalties rather than administrative sanctions or criminal punishment. A separately proposed bill on Artificial Intelligence Accountability and Regulation (National Assembly of Korea 2023b) seemingly takes an even more stringent approach to AI governance. The bill appears to underscore the precautionary principle by adding new categories of high-risk AI and extending prohibitions beyond the high-risk AI categories. The bill also introduces criminal sanctions on those involved in developing or using prohibited AI. However, the bill includes a provision referencing “preferential permission and ex-post regulation,” which seems incongruous with its precautionary approach to AI. Again, this reflects the continuing ambivalence in Korea and, in part, the difficulties of balancing the dual objectives of spurring innovation in the field of AI while at the same time ensuring its safety.

4.2 Korea’s AI Governance at a Crossroads

As discussed above, the AI governance discourse in Korea has evolved over the years to the point where dissonance between policy objectives is becoming more apparent. This shift in the discourse is not the result of changes in political leanings or leadership, but of an evolving understanding of the benefits and risks of AI, prompted by events that captured the public imagination and raised awareness of AI governance-related issues.

There is an ongoing debate within academia and policy circles in Korea over the right approach to AI governance for the country. There are voices that advocate a more proactive and precautionary approach to AI risks, pointing to the EU’s AI Act as a model. Some have pushed back, arguing that such an approach could lead to either excessive or inadequate regulation by categorizing types of AI as prohibited or high-risk without sufficient empirical evidence (Novelli et al. 2023; Park 2024). Yet others have been critical of the incongruity among and within policy proposals that aim to prevent AI risks while at the same time tolerating such risks by preferring ex-post regulation or forgoing necessary sanctions (National Human Rights Commission of Korea 2023).

The challenge for legislators and regulators alike will be finding the optimal mix of type, form, and content of legislative and policy action: one that, in an ideal world, reasonably achieves both objectives in the long term, while in reality avoiding the material impairment of either when having to prioritize one over the other. Korea has often shown a willingness to incorporate proven regulatory frameworks or laws from other, more experienced jurisdictions. But as it pertains to AI governance, we expect the National Assembly to be more inclined to carefully consider the merits and costs of any proposal, rather than simply adopting a framework from another jurisdiction that may reflect different economic and geopolitical conditions. This will require both bold initiative and shrewd restraint, and the political will to experiment undergirded by the public support to sustain it. In doing so, Korea may be able to continue its contribution to a global discourse on AI governance that will enable human society to attain the true beneficial potential of AI.

Acknowledgements

The authors would like to thank Sangchul Park, Haksoo Ko, and the anonymous reviewers for their insights and helpful comments. All authors contributed equally to this manuscript.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2022R1A5A7083908).

Notes on contributors

Do Hyun Park

Do Hyun Park is an assistant professor at the Gwangju Institute of Science and Technology (GIST) AI Graduate School. His research focuses on establishing trustworthy artificial intelligence governance. He earned a Ph.D. in Law from Seoul National University, is a qualified Korean lawyer, and won first place in the Hong Jin-ki Legal Research Award.

Eunjung Cho

Eunjung Cho is a Ph.D. student at ETH Zürich. Before joining ETH, she was a Research Assistant at the Artificial Intelligence Institute of Seoul National University. She holds an MS in Computational Analysis and Public Policy (University of Chicago) and a BSc in Philosophy and Economics (London School of Economics).

Yong Lim

Yong Lim is an Associate Professor at Seoul National University (SNU) School of Law. He is the Director of the Center for Law and Economics at SNU and leads its AI lab, the SNU Artificial Intelligence Policy Initiative. His areas of specialty include antitrust, consumer protection, and tech law & policy. Yong graduated from Seoul National University College of Law and obtained his S.J.D. at Harvard Law School.

Notes

1 In recognition of its commitment to trustworthy and human-centric AI, Korea was ranked tier 1 in the AI and Democratic Values Index 2022 (Center for AI and Digital Policy 2023).

2 The plan defined intelligent information technology as “a technology that implements human high-dimensional information processing through Information and Communication Technology (ICT), which is a combination of intelligence implemented by AI and information based on data and network technology.”

3 The content of the PACT principles is specified in Article 62(1) of the Framework Act on Intelligent Informatisation.

4 Other jurisdictions such as the EU have also recognized AI’s potential as a driver of economic growth and competitiveness: European Commission (2021).

5 For example, government reports published in May 2018 and January 2019 jointly by the relevant ministries stress the promotion of technological development, while giving relatively little attention to policies that tackle ethical challenges and increase the social acceptability of AI systems: MSIT (2018); Relevant Ministries of Korea (2019a).

6 The actual impact regulation has on innovation—whether it spurs innovation by engendering trust in the relevant technology and market or whether it has a chilling effect by raising costs for entrepreneurs and discouraging risk taking—is a hotly debated topic: Thun (2024). The answer will naturally be context specific, i.e., tied to the historical, socioeconomic, and political conditions of the relevant jurisdiction.

7 Around that time, the Korea Communications Commission and the Korea Information Society Development Institute also jointly announced the “Principles for a User-centred Intelligent Information Society,” which consisted of seven fundamental principles: Korea Communications Commission (2019).

8 In May of the following year, a more detailed strategy to realize these ethical standards was announced: Relevant Ministries of Korea (2021). Specifically, three strategies were proposed to realize the HcAIES: (i) providing education to citizens as well as researchers and students, (ii) designing a checklist for researchers, developers, and users to evaluate their compliance with ethical standards, and (iii) running a public forum where various members of society can participate to discuss AI governance.

9 The GPAI aims to guide the responsible development and use of AI, grounded in human rights, inclusion, diversity, innovation, and economic growth, and to contribute to achieving the United Nations Sustainable Development Goals.

10 The government took a similar approach for the telecom industry in Korea, which is reflected in the legislative goals of the relevant ICT laws. For example, the Telecommunications Business Act states its purpose as “encouraging the sound development of the telecommunications business.”

11 A prominent example is ScatterLab: ScatterLab (2022).

12 The Korean government has continued to spearhead efforts on AI ethics and related education: Artificial Intelligence Times (2022a).

13 Roadmap version 1.2 was released the following year, but there were no significant differences in content from the earlier version. These discussions were reportedly based on science fiction writer Isaac Asimov’s “Three Laws of Robotics.” In the 1942 short story “Runaround,” Asimov proposed three hierarchically ordered laws, which included the prevention of harm to humans, obedience to humans, and protection of robots themselves. Later, in the 1985 novel Robots and Empire, the zeroth law, prevention of harm to humanity, was added to solve various problems of the existing laws.

14 In 2022, for industrial robots, Korea was the fourth largest market in terms of annual installations, after China, Japan, and the US: International Federation of Robotics (2023).

15 In 2022, sales of industrial robots in Korea amounted to KRW 2,974.7 billion, while the figure for service robots was KRW 982.3 billion: Ministry of Trade, Industry and Energy, Korea Institute for Robot Industry Advancement, Korea Association of Robot Industry (2023).

16 Some of the bills took the form of a “framework act,” which is a type of umbrella law designed to provide a basis for and inform multiple secondary laws and policy objectives: Park (2006).

17 This is similar to the two objectives outlined in the EU’s White Paper on Artificial Intelligence, which advocates for ecosystems of excellence and trust: European Commission (2020).

18 The original text of the bill has not yet been disclosed to the public.

References