
AI ≥ Journalism: How the Chinese Copyright Law Protects Tech Giants’ AI Innovations and Disrupts the Journalistic Institution


Abstract

Journalism and other institutions clash over automated news generation, algorithmic distribution and content ownership worldwide. AI policies are the main mechanisms that establish and organise the hierarchies among these institutions. Few studies, however, have explored the normative dimension of AI in policymaking in journalism, especially beyond the West. This case study inspects the copyright law’s impact on AI innovation in newsrooms in the unexamined Chinese context. Using neo-institutional theory and policy network theory, the study investigates the Third Amendment to the Chinese Copyright Law, exemplary court cases regarding automated journalism copyright disputes (such as Tencent v. Yingxun and Film v. Baidu), and other supporting documents. The findings show how China’s copyright legal framework separates authorship and ownership; defines “originality” and “creativity” in human-machine collaboration; and prioritises tech companies while undermining journalistic autonomy. We argue that the law’s eager embrace of AI may give tech companies an advantage over news organisations that do not necessarily have a strategy to adopt AI. Moreover, it favours state-owned, resource-rich official media over the private sector. An implication of this shifting power dynamic is the possibility of privately owned news media being marginalised, resulting in even stronger state control over media production and information flow.

Introduction

In 2015, when Chinese tech giant Tencent’s newswriting bot Dreamwriter debuted, many claimed this was the “end of the road for journalists” (He 2015). In 2017, when AI-powered Chinese news aggregator Toutiao became “one of the world’s hottest start-ups” (Macfarlane and Wang 2017), it quickly showed “global ambitions.” In 2019, a Chinese court granted copyright protection to an article written by Tencent’s Dreamwriter news-writing bot, holding that the human intellectual activities of the AI program’s creators extend to the works written by the software. With this, China has become a leader in the use of AI in journalism and one of the first countries in the world to set a court precedent that protects copyright in AI-generated works.

Extending copyright protection to non-human creation, in our case automated news, has implications for the institution of journalism. A handful of studies have addressed the legal issues raised by automated journalism. When Weeks (2014) predicts that courts would favour the data entrant over the programmer in assigning copyright in automated news, the argument draws from a US copyright tradition in which courts favour the economic, rather than personal, rights of authors. When Díaz-Noci (2020, 8) claims that the “role of intellectual property law when applied to the outputs of automated journalism” should consider that “developing artificial intelligence systems could help journalistic work for gathering data, elaborating news and disseminating them—even to commercialise them more efficiently”, it situates law within a market-driven context. Similarly, when Lewis, Sanders, and Carmody (2019) argue that news organisations could be liable if an algorithm publishes defamatory content, it does so within the framework of private enterprise. There is a further need to examine the requisite infrastructure and policies (Pickard 2020) and to explore the intersection of journalism, law, and technology, especially in places beyond the US and Europe. Our study contributes to the literature by inspecting copyright law’s impact on journalism innovation in the, so far, unexamined Chinese context, where AI development is state-led and the media system is state-controlled.

We move beyond the adoption of AI in the newsroom and investigate how the Chinese Copyright Law (see Note 1), and in particular the Third Amendment, which came into effect on June 1, 2021 after a decade in revision, discursively constructs AI. We study how different institutions (government, journalism, and tech companies) exert their power in policymaking and create structural changes in the media system. By analysing law texts, court cases, media reports, and other supporting documents, our case shows that the rhetorical momentum around AI in news media and policymaking is sensitive to AI’s institutional and social implications and to the formation of its normative framework. We argue that the policy frameworks used to regulate AI-powered innovations can reveal how news organisations understand and appropriate AI, what motivates them and, even more importantly, the legal boundaries within which they can use AI. In the context of journalism, AI policy matters. China’s case shows how a strong government can use policy as an instrument to promote tech giants while eroding journalistic autonomy.

Theory: Adopting, Shaping, and Regulating AI in News Media

The majority of scholarly work on AI in journalism (Broussard et al. 2019; Diakopoulos 2019; Marconi 2020) has carried an implicit slant of technological determinism, centred on whether “weak AI” or “strong AI” (Russell and Norvig 2003) is achievable in journalism and how it could impact the industry and its workers. Most studies (Caswell and Dörr 2018; Thurman, Dörr, and Kunert 2017; Graefe 2016) adopt a “weak AI” approach, assuming AI is and will remain a computational rendering of human cognitive functions. Hence, to understand how AI can impact journalism and its relation to other institutions, we need to investigate how it is formulated in the regulatory framework and the discourse around it. Regulatory documents are important in their own right, but we can discover more about the ideas and actors behind them, and about their actual implementation, once they are discursively constructed in public (e.g. in presidential addresses, court rulings, public debates, etc.) (Bernisson 2021). Moreover, we argue that this regulatory and discursive amalgamation not only plays a major role in shaping how media institutions innovate and adopt AI but also in (re)establishing the normative dimensions of AI across the field of journalism and the autonomy of the journalistic institution, since AI innovation has a bearing both on the daily routines of journalistic work and on the institution’s ability to maintain its long-term trajectory (Örnebring and Karlsson 2022).

Drawing on neo-institutional theory, our theoretical framework recognises that a) AI can be considered an innovation and, as such, is shaped by a set of agents; b) these agents attempt to use their power to imprint their institutional logics on the adoption of AI; and c) the interdependence between governments, business, and media institutions creates policy networks that negotiate beliefs, interests, and the normative dimensions of AI in news media.

The Agents of AI-Enabled Innovations

The complexity and the mathematical and engineering underpinnings of AI have for years served as boundary markers of who can push forward the future of the technology (Negnevitsky 2005). As AI becomes more present in everyday processes, however, the relevance of its technical origins wanes in favour of field-specific applications, its practice, and its implications (Hansen et al. 2017). More concretely, the importance of AI in most fields and industries resides in what Paschen, Pitt, and Kietzmann (2020, 151) call “AI-enabled innovations and their potential effects on two dimensions: the innovations’ boundaries and their effects on organisational competencies”. In other words, the competitive value of AI for organisations lies in its capacity to accelerate value-creating innovations (with the subsequent destruction of the value of previous innovations such as, suggestively, the “nose for news”) that change their products or processes to improve their competitiveness.

For the purposes of this article, we move beyond the organisational roots of innovations by adopting Lewis and Westlund’s heuristic of the Four A’s (Westlund and Lewis 2014; Lewis and Westlund 2015). This socio-technical approach allows us to expand our lens by taking into account the actors, actants, audiences, and activities as active agents of innovation in media organisations. The original framework considers actors as the “three social groups ... most relevant to the news media organisation: journalists, technologists, and business people” (Westlund and Lewis 2014, 18), but also includes the “roles of other actors, within and beyond the news organisation” (Lewis and Westlund 2015, 23). This is important because, in a media system as complex as China’s, it is crucial to acknowledge the role of actors situated at the policy network level, such as lawmakers and government officials.

While treating AI merely as a media innovation tends to render the technology a mere product, considering it an actant (i.e. a non-human actor) helps account for the cultural norms and practices of the human actors who put it into practice and the policy frameworks that regulate it. In studying media innovation, actants may help “clarify the particular role of technologies in structuring cross-media news work, by virtue of their affordances and networked interactions with other actors, actants, and audiences” (Westlund and Lewis 2014, 20). The role of audiences as recipients, commodities, and active participants continues to increase in news media decision-making. As news organisations continue to use AI and automated approaches to learn about news audiences (Nelson 2021), audience orientation keeps growing and being institutionalised in news organisations (Ferrer-Conill and Tandoc 2018). Finally, “activities of media innovation also include emerging and sporadic efforts, relating both to the invention and implementation of an innovation” (Westlund and Lewis 2014, 22). The Four A’s framework organises activities according to the leading agent (actor-led, actant-led, audience-led, or a combination of them). These activities are routinised practices and “patterns of action through which an organisation’s institutional logic is made manifest through media” (Lewis and Westlund 2015, 28), operationalising strategic ambitions through specific innovations. Considering this, we ask the following question:

RQ1: What are the leading agents of AI innovation according to the Chinese Copyright Law?

Institutional Boundaries and Logics of AI

After identifying the agents of AI innovation in journalism, the next step is to examine the logics these agents carry. To do this, we find neo-institutional theory instructive. The institutional foundations of journalism rest in the fact that journalism is “an organisationally bound enterprise with routinised practices, subject to varying factors and forces in the environment” and a “meso-level collective field, shaped by external forces but also capable of agency within a collective space that has negotiated boundaries, legitimacy, and an internal logic” (Lowrey 2018, 125). However, studies of journalism innovation often overlook the fragile interdependencies across institutional boundaries. Due to external political pressure, regulatory enforcement, technological imperatives, or organisational uncertainty, organisations succumb to institutional isomorphism and slowly become similar within their institutional boundaries (DiMaggio and Powell 1983).

A neo-institutional approach proposes that the pressure for innovation becomes a matter of routine. The important aspect is that, in an attempt to exert institutional differentiation through innovative processes, the forces of isomorphism push for the standardisation and homogeneity of the innovation (Czarniawska-Joerges and Sevón 1996). This suggests that the global push for AI as a transformational innovation imprints the need to incorporate AI solutions into news media, including the logics from neighbouring institutions that either stimulate or hinder these innovations. These institutional logics embody “the socially constructed, historical pattern of material practices, assumptions, values, beliefs, and rules by which individuals produce and reproduce their material subsistence, organise time and space, and provide meaning to their social reality” (Thornton and Ocasio 1999, 804). Thus, institutional logics comprise structural, normative, and symbolic institutional dimensions in which individual agency, cognition, socially constructed institutional practices, and rule structures converge (Thornton and Ocasio 2008). In other words, institutional logics “provide both opportunities and constraints for individuals and organisations” (Thornton, Ocasio, and Lounsbury 2012, 78) to create and negotiate their understanding of journalism.

The institution of journalism carries various overlapping and conflicting logics. The professional, commercial, technological, and cultural logics of journalism often coexist in tension, shaping journalistic norms, values, and practices (Deuze 2009). Thus, within the domains of institutional news production, which acknowledges internal and external logics that interact when boundaries across different institutional orders are transgressed, we need to encompass the institutional logics of AI as an innovation as well as the institutional logics of Chinese political, technological, commercial and media organisations. The former is often shaped by the logic of technological determinism (Campolo and Crawford 2020) and the logic of algorithmic and automated decision-making (Araujo et al. 2020). As for the latter, the restructuring of state-owned enterprises and the political-institutional logic of the Chinese bureaucracy have generated organisational paradoxes such as “the uniformity in policymaking and flexibility in implementation, incentive intensity and goal displacement, bureaucratic impersonality and the personalisation of administrative ties” (Zhou 2010, 47). Genin, Tan, and Song (2021) call this the institutional logic dissonance of Chinese bureaucracy, positing that the formula of state-owned enterprises hinders organisational change for technological innovation. As these logics interact in a complicated array of interdependent strategic resources, it is important to understand the centralised nature of news media in China and how policies may steer innovation as a strategic resource between the state and other institutions. To pave the way for setting our analysis in a broader context later, this study asks:

RQ2: Which institutional orders and logics are being promoted in the Chinese Copyright Law?

AI as a Strategic Resource and Policy Network Theory

After establishing the primary agents and logics, we go on to examine how the agents interact, how the orders and logics materialise in practices, and the implications of this materialisation. In any society, a number of institutions strive for their long-term survival, societal legitimacy, and an impact on their environment (e.g. on other institutions). The better the institutions are at this, the greater their autonomy. A very successful institution is thus able to inscribe its values, routines, and ways of doing and seeing things into laws and to dominate public discourse. At the same time, establishing institutional values becomes “a way of demonstrating organisational legitimacy through copying other organisations (mimetic isomorphism), or is legislated because of that societal legitimacy (coercive legitimacy) or is diffused as the appropriate professional standard (normative legitimacy)” (Hinings, Gegenhuber, and Greenwood 2018, 53). Hence, one institution’s struggle is also every other institution’s struggle. A helpful theory for understanding the competition and collaboration between institutions is Policy Network Theory (PNT). In short, the theoretical propositions of PNT can be summarised as follows: policy networks consist of formal and informal linkages between governmental and other institutions. The entry and level of access to these networks are unequal and contingent upon resources. The institutions in the network share an interest in shaping policymaking and its implementation. Hence, the institutions are interdependent on each other’s resources (political, legal, knowledge, financial, symbolic, etc.) and, therefore, the policies devised in the network are the result of a “bargaining game”. The desired outcome of the “game” for an institution is to influence the actions of other institutions and align them with the institution’s own goals and problem-solving (Lu, de Jong, and ten Heuvelhof 2018; Rhodes 2007; Rhodes and Marsh 1992; Richardson 2006) in a continuous process of differentiation and isomorphism.

The case of China is unique due to its historical political entanglements, but the mechanisms explained by PNT are still helpful for understanding and elucidating policy processes in China (Zheng, De Jong, and Koppenjan 2010). A political system with strong and centralised power does not necessarily mean an absence of autonomy for other institutions, but rather that the operating logics of the network diverge. For instance, in contrast to a democratic system, open market competition might be less relevant than personal networks in an authoritarian system. However, pathways exist in both systems. Furthermore, the state and the Chinese Communist Party (CCP) pursue various and at times conflicting goals. This opens a window of opportunity for institutions to shape public policy to set the stage for different outcomes.

The outcome of the policy network is always partly open. Even within a system in which the government is the most powerful institution, any innovation can, and presumably will, be used by institutional actors to increase their autonomy and influence other institutions. Hence, any innovation can be studied through the lens of how its definition and implementation further or hinder institutions’ long-term autonomy. Moreover, regardless of the strength, position, resources, and level of planning of various institutions, there is always an element of unintended and unexpected outcomes in the policy process (Zheng, De Jong, and Koppenjan 2010). In other words, to some extent, the outcome is always erratic. The policy network becomes more receptive to new input (i.e. from non-established institutions) if the partaking institutions face uncertainty, as is the case with AI (Richardson 2006). As the “game” becomes more open to new players, the outcome is less predictable when truly revolutionary innovations arrive. Thus, it is necessary to study the outcome of the policy network (and not only the players entering the game and their strength relative to each other) both in the actual regulation and in its implementation, the discourse around it, and compliance with the policy across institutions, since this is the prized outcome of the network.

AI is a potentially disruptive innovation that ties into several core dimensions of society and its institutions. For journalism, it concerns authorship and ownership in the context of copyright, but also skills, routines, and norms. Thus, how AI is defined and regulated, understood in public discourse, and sanctioned by courts is both the outcome of the policy network and key explanatory factors for the future development of institutions, including, but not limited to, journalism.

In short, how AI is understood and regulated is an issue with far-reaching consequences for the institution of journalism. Accordingly, this article asks the following analytical question:

RQ3: What are the implications of the Chinese Copyright Law for journalism’s relationship with neighbouring institutions?

Empirical Background, Method and Data

This case study investigates how the Chinese Copyright Law relates to AI innovation in Chinese newsrooms. A case study is a suitable empirical method to examine contemporary real-world circumstances and address “how” or “why” inquiries in concrete social phenomena (Yin 2018), which fits the purpose of our research. We choose the case of the Chinese Copyright Law because it regulates both AI innovation and journalistic practices; hence the law concerns both the journalistic institution and its neighbouring institutions. Worldwide, copyright law provides exclusive rights to authors to protect their work but also aims to encourage learning, creativity, and innovation. Up to this point, no legal reform has introduced provisions that address the specific situations created by developing AI. Scholars have discussed different legal regimes’ possible treatments of AI-generated works (Abbott 2020). For example, while the UK Copyright, Designs and Patents Act 1988 has a provision protecting computer-generated works, it remains problematic in light of future developments in AI regarding its practical application and its definition of concepts such as “originality” (Bond and Blair 2019). The global debate on how copyright law can or should evolve to accommodate emerging issues brought by AI also includes topics such as the copyrightability of works generated by AI systems (Pearlman 2017); reconceptualisations of originality and creativity (Bridy 2012); copyright restrictions on data used to “train” AI (Rosati 2019); whether AI should be given legal personhood to be deemed an author (van den Hoven van Genderen 2018); if not an AI, who should be the author of AI-generated works and who owns the copyright (Brown 2019); and who should be held accountable if something goes wrong (Yanisky-Ravid 2017). Established 30 years ago, the Chinese Copyright Law is particularly interesting because some of its existing rules offer avenues for AI-generated works to be copyright-protected (see Ye and Adcock 2021; He 2019; and Yu 2022 for a more detailed background on the Chinese copyright regime).

Studying Chinese newsrooms’ AI innovation opens a window into the shifting power dynamics between strong institutional orders in a complex media system. Since the Reform and Opening-up in the late 1970s, Chinese media have undergone decades of commercialisation, conglomeration, and convergence (Meng 2018; Stockmann 2013; Zhao 2008). As a result, media ownership has evolved into a press system comprising official media and commercial media. With increased networked connectivity, news operations have become a popular business at China’s tech companies, effectively monetising information, as they control not only the infrastructure but also the resources for content production (Meng 2018). The decentralised media space has also prompted the evolution of a decentralised regulatory model that combines self-censorship, market-based differentiation, and government policy (Zhao 2008). The state-directed marketisation and commercialisation of media did not lead to a free press. Nevertheless, they created a media system where “[A]s long as the state can walk the fine line between selectively opening and closing space for news reporting while ensuring a roughly one-sided flow of information, state and market forces can mutually reinforce each other” (Stockmann 2013, 36).

The role of AI in Chinese newsrooms is becoming increasingly prominent. Platform-based news aggregators have created huge business opportunities in China (Kuai et al. 2022) by adopting a profit-sharing model with content producers to “overcome the legal issues and to attract more quality content”, while at the same time maintaining control over the algorithms that determine content exposure, highlighting the “power imbalance between platforms and content producers” (Zhang 2019, 629). Meanwhile, platforms can use their recommendation algorithms to favour party-related news, using big data, AI, bots and other technologies to help their government clients “eliminate anti-government information and boost pro-government opinion” (Hou 2020, 2239). This also illustrates that “algorithms are also effective in (re)producing ideology, which has thus accommodated the algorithmic logic and the platformization of content production” (Meng 2021, 13) and that they depend on the institutional allegiance of those who create them. Situating the study in the Chinese context therefore contributes to a more fundamental theoretical engagement with PNT from the perspective of a non-democratic state, questioning the traditional norms and values of journalism outside the established democratic traditions of North America and Europe, and thus correcting the epistemological imbalance in the field (Tandoc et al. 2020). As the Third Amendment of the Chinese Copyright Law came into effect on June 1, 2021, our article provides a timely perspective on the recently revised law, highlighting what has (not) been changed and how the decisions were justified.

As for empirical material, our data comprise multiple sources of evidence, which we triangulated on three levels: first, primary data, namely the different editions of the Chinese Copyright Law and two exemplary court cases that involve copyright regulation of AI innovation in Chinese media, Beijing Film Law Firm v. Baidu (2018) (see Note 2) and Tencent v. Yingxun Tech (2019) (see Note 3); second, contextual material, such as the Chinese Supreme People’s Court’s guidelines on law implementation, presidential addresses (under China’s authoritarian rule, a president’s remarks are treated more like an order than a comment), and other relevant policies, regulations and official statements; third, discursive material, namely news articles and industry reports that address copyright law, AI innovation, and Chinese media. In analysing the data, we follow the principles suggested by Yin (2018) to rely on our theoretical propositions and establish a chain of evidence. Our qualitative document analysis involved an iterative process of skimming (superficial examination), reading (thorough examination), and interpreting (Bowen 2009); we did not treat the records as firm evidence but examined them for what they are and their intended goals. While the data triangulation gave us insights into the matter in its context and historical background and enabled us to track changes and developments, a more hermeneutic inquiry shed light on motivations, indicating the conditions that impinge upon the phenomena under investigation.

Findings and Discussion

The following three sections correspond with the research questions and present related results and analyses.

Chinese Copyright Law Covers Actor-Led AI-Generated Work

To foster innovation and promote creativity, copyright law serves the dual role of protecting authors and their works while encouraging fair use and learning. In doing so, Chinese regulators play an essential role in establishing who is protected when pushing forward AI innovation. Since China has set the goal of becoming a global AI innovation hub by 2030 (Chinese State Council 2017), having the legal infrastructure to encourage such innovation is crucial. The role of copyright law in facilitating AI innovation is explicitly expressed by the Supreme People’s Court (2020). In the document accompanying the latest revision of the Copyright Law, the highest trial organ in China says:

[G]reat importance needs to be attached to the new demands raised by the technological development in areas of the Internet, AI and big data … to accurately define the types of works to promote the development of the emerging industries (Chinese Supreme People’s Court 2020, italics added).

While the development of AI has prompted legal scholars worldwide to debate the changes needed for laws to adapt to new realities, the Chinese Copyright Law has already provided a possible approach to protecting AI-generated works. Since its inception 30 years ago, the Chinese Copyright Law has been designed as a hybrid of civil law and common law traditions and principles (He 2019). The law recognises moral rights and establishes the actual author as the initial copyright owner in works made for hire (a feature of the authors’ rights system). It also acknowledges that legal persons and entities can be entitled to copyright ownership (a feature of the copyright system), meaning non-human authors such as companies and organisations can be protected. The Third Amendment, which involved a revision process lasting more than ten years and finally came into effect on June 1, 2021, was primarily motivated by the need to adapt to technological advancement, in particular the ongoing changes in forms of creation, ways of disseminating works, and copyright transaction models. This perspective suggests that Chinese regulators consider AI innovation activities to be actor-led, recognising human expertise and working routines as the guiding factor of the media innovation (Westlund and Lewis 2014).

In defining the term “works”, the newly revised Article 3 stipulates that “works” comprise “intellectual achievements in the fields of literature, art and science, which are original and can be expressed in a certain form”. Most importantly, Item 9 of Article 3 takes an even more inclusive approach and states that “works” also include “other intellectual achievements conforming to the characteristics of the works”. Such an all-purpose miscellaneous provision is a characteristic design and legislative technique in the Chinese Copyright Law to cope with unforeseen developments, such as the emergence of a new type of work, including AI-generated works. This means that works created by AI are still protected, implying that actant-led innovation is recognised under the Copyright Law. While AI is still socially constructed, it is instructed to create content based on algorithmic processes previously performed by human actors (Westlund and Lewis 2014). Since “copyright” includes “any other rights a copyright owner is entitled to enjoy” (Article 10, Item 17), a safeguard is in place to maintain the relative stability of the law, so that it does not need to change whenever a new type of work emerges, while also leaving some discretionary power to judges in judicial practice.

This means that when it comes to AI-generated works, the Chinese Copyright Law can assign copyright ownership to the investor, developer, or even the user of the AI system in order to protect incentives for AI innovation. This does not require specifying the author or naming the machine as the author, so as not to break the anthropocentric principle of copyright law. In the case of automated journalism, under the Chinese legal framework, automatically generated news content can be copyright-protected; hence the companies that develop the AI system, or use it to generate automated news content, are also protected, as illustrated in the Tencent v. Yingxun (2019) and Film v. Baidu (2018) cases.

In both cases, the courts looked into the AI generation process in detail. In the Tencent v. Yingxun case, Tencent sued Yingxun for the unauthorised reposting of an article generated by Tencent's newswriting bot, Dreamwriter. The verdict stated: “In this case, the arrangement and selection of data input, trigger condition setting, template and corpus style set by the plaintiff’s creative team are intellectual activities that are directly related to the specific manifestation of the article involved”. The court thus confirmed the work’s originality and creativity and its copyrightability. The court went on to say that “the involved article is hosted by the plaintiff, and finished by the plaintiff’s creative team, including the editing team, product team, and technology development team, using Dreamwriter software” and that “the court determines that the involved article is the work of a legal entity created by the plaintiff, and that the plaintiff is a qualified subject in this case and has the right to initiate a civil action for infringement.” The court then ordered the defendant to compensate the plaintiff’s economic losses and reasonable expenses of 1,500 Chinese Yuan (RMB).
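To make the court’s description more concrete, the sketch below illustrates, in heavily simplified form, how template-based automated news writing of this general kind works: humans select and arrange the data inputs, set a trigger condition, and write the templates that determine the output’s style, while the software only fills in values. This is an illustrative reconstruction, not Tencent’s Dreamwriter code; the data fields, threshold, and template wording are hypothetical.

```python
# Illustrative sketch of template-based automated news writing.
# NOT Dreamwriter's code: the data fields, trigger threshold, and
# template wording are hypothetical, chosen only to show where human
# editorial choices (data selection, trigger conditions, templates,
# corpus style) enter the pipeline.

from dataclasses import dataclass
from typing import Optional


@dataclass
class MarketData:
    index_name: str
    close: float
    change_pct: float  # daily change in percent


# Human-set trigger condition: only unusual movements become stories.
TRIGGER_THRESHOLD_PCT = 2.0

# Human-written templates define the "corpus style" of the output.
TEMPLATES = {
    "rise": "{index} closed at {close:.2f}, up {change:.2f}% on the day.",
    "fall": "{index} closed at {close:.2f}, down {change:.2f}% on the day.",
}


def generate_story(data: MarketData) -> Optional[str]:
    """Return an automated news sentence if the trigger condition is met."""
    if abs(data.change_pct) < TRIGGER_THRESHOLD_PCT:
        return None  # movement below the editorially set threshold: no story
    template = TEMPLATES["rise"] if data.change_pct > 0 else TEMPLATES["fall"]
    return template.format(
        index=data.index_name,
        close=data.close,
        change=abs(data.change_pct),
    )


if __name__ == "__main__":
    sample = MarketData(index_name="Shanghai Composite", close=3210.50, change_pct=-2.4)
    print(generate_story(sample))
```

Under the court’s reasoning, it is precisely these human-made choices (which data to select, when to trigger a story, which template and style to apply) that ground the originality of the generated text.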

Similarly, in the Film v. Baidu case, the court held that text automatically generated by computers also “reflects a certain level of originality”. In this case, Beijing Film Law Firm first published an article containing AI-generated text and graphs on its official WeChat account. It sued Baidu for publishing the article and altering its content without permission. The court confirmed the article as a legal entity’s work created by Film and affirmed the law firm as the proper subject to file the lawsuit. The court stated, “the plaintiff as the software user has invested through paid use of the AI program and generated the article using their own keywords … the software user should be vested with the relevant rights and interests of the article to encourage use and communication of the article”. The court then ordered the defendant to compensate the plaintiff for their economic losses and reasonable expenses of RMB 1,560. Ascribing originality to AI-generated works while at the same time recognising the human role implies that the strategic choices in media innovation activities, as prescribed in the Chinese Copyright Law, are in fact actor/actant-led. This hybrid approach establishes an environment in which actors and technological actants mutually shape and manage routine content production and subsequent innovation (Lewis and Westlund 2015).

While the theoretical debate about originality and whether machines can be creative is beyond the scope of this article, we want to point out that, in the absence of its own originality doctrine, Chinese courts dealing with copyright cases were concerned not only with who created the work but also with who invested in its creation. In response to RQ1 (What are the leading agents of AI innovation according to the Chinese Copyright Law?), the cases illustrate that China’s automated journalism is essentially an actor/actant-led media innovation. On the one hand, actors such as journalists and technologists are embracing innovation. On the other hand, the non-human actant AI accelerates the process of content creation and value generation (Paschen, Pitt, and Kietzmann 2020). The only apparent deviation from Lewis and Westlund’s heuristic is that they consider the actor/actant approach an “equal collaboration.” Our findings show that the Chinese Copyright Law favours actors over actants. In that sense, actors retain the copyright of the actants they create and, at the same time, retain the copyright of the works their actants create, turning the code, its content, and their copyrights into assets. In other words, those with the expertise and funds to invest in AI can monetise the software and the content it generates.

Chinese Copyright Law Protects Investors but Diminishes Journalists

The way in which authorship and ownership are separated in the Chinese Copyright Law creates an opening for non-human entities such as corporate enterprises to be regarded as authors, highlighting that “existing regulations have gradually begun to attach more importance to protecting the interests of investors” (Ye and Adcock 2021, 166). The Chinese courts have also shown a tendency to protect investments in AI innovation in the above-mentioned cases. In fact, not only do the ownership rules favour investors, but the new Chinese Copyright Law has also been revised to increase protection for copyright owners while intensifying punishment for copyright infringement. We can see this in four distinct instances. First, the newly added Article 12 creates a rebuttable presumption of authorship for the individual or entity whose name has been attributed to the work and who has corresponding rights in that work. It also stipulates that copyright owners can register their works with registration authorities. Such a measure encourages copyright registration and provides means for copyright owners to protect their rights in case of infringement, helping them solidify those rights. Second, Article 49 introduces provisions on technological protection measures, a set of technologies used to protect copyright and copyright-related rights. The Law thus authorises the use of such measures and prohibits others from circumventing or destroying them, or from intentionally providing technical services to do so. Such provisions existed as regulations before the amendment. Incorporating the regulations into the Law and elevating their legal status shows the legal regime’s determination to protect technological innovation and the Law’s tendency to promote technological institutional logics. Third, Article 54 increases the maximum compensation for copyright infringement from RMB 500,000 to RMB 5 million. A new minimum compensation has also been set at RMB 500. Lastly, the brand-new Article 55 grants new investigative powers to the authorities, reinforcing their legitimate role in copyright regulation. Additionally, the Supreme People’s Court (2020) stated that the courts should improve the quality and efficiency of hearing copyright-related cases and allow relevant parties to store evidence using blockchain technology as a means of proof. These changes in the Chinese Copyright Law primarily protect technological assets that can yield revenue. This highlights the trend of promoting and strengthening commercial and technological institutional logics in the media sector over professional logics that would emphasise informing the public (Deuze 2009).

The emphasis on protecting investors is accompanied by changes detrimental to journalists. While, for the first time, the law articulates its copyright protection of news content, the beneficiaries seem to be the organisations rather than individual journalists. The law replaces the term “news on current affairs” with the more precise wording of “purely factual information” in Article 5, Item 2. In other words, “purely factual information” cannot be copyrighted, but representations of those facts, news commentary, and other news-adjacent content can, provided they satisfy the originality requirement. That is to say, copyright protection will extend to AI-generated works which are “news” and not simply “purely factual information”, which is in line with our previous finding that the new Chinese Copyright Law favours actors over actants.

In addition, a newly added Item 2 of Article 18 extends the category of “special work for hire” to include “employment works by employees at newspapers, periodicals, news agencies, radio and TV stations”. Prior to the change, employees at these news organisations held the copyright to their “work for hire”, and their employers had priority in using their works. After the change, the copyright of journalists’ work goes to their employers, and the journalists are entitled only to authorship. Labelling journalists’ work as “special work for hire” places it in the same category as engineering project designs, drawings of product designs, maps, and computer software. The common denominator is that employees require the infrastructure and resources employers provide to create such works. That is to say, the new law has taken the approach of favouring employers over employees and, in our case, news organisations over individual journalists. This tendency to favour institutions rather than individuals is also shown by the discarding, during the amendment process, of more employee-friendly proposals, which would have given employees more freedom of contract in work-for-hire situations. While the provision might hurt journalists’ motivation to create, the more serious danger is that journalists might not have the freedom to publish their own works elsewhere without the authorisation of their employers. If a journalist decides to publish an article rejected by their editors on their own social media account, such an act could constitute a copyright violation under the new law. The freedom of speech of journalists employed at news organisations is therefore “legitimately” taken away by the new law. By favouring organisations rather than individual journalists, the new law erodes journalists’ autonomy and impacts Chinese journalism’s professional norms via the route of normative isomorphism (DiMaggio and Powell 1983).

In answering RQ2 (Which institutional orders and logics are being promoted in the Chinese Copyright Law?), the new law’s predilection towards investors and organisations promotes commercial, technological and political logics over the professional logic of journalism. It is worth noting that while the Third Amendment potentially affects journalists in terms of whether copyright protection is granted to AI-created works or not, it is the primacy of the commercial, technological, and political logics over the journalistic ones that puts journalists at a disadvantage vis-à-vis AI. Decreasing autonomy has been identified by Chinese journalists as “a major disincentive to continuing a journalistic career” (Meng 2018, 82), especially when tech companies are luring them with higher salaries. The changes alter the dynamics of newswork and thereby impinge on its key beliefs, norms, practices, and rules. Since an institution is made up of these beliefs, norms, practices, and rules, as soon as the law reconfigures the normative framework of the institution, the institution must change from within (Vos 2019; Thornton, Ocasio, and Lounsbury 2012). Powered by the development of AI, such journalism innovation is going to play a more significant role in the reconfiguration of the distribution of communication, pushing the institution of journalism, at least in the Chinese context, onto a path of deinstitutionalisation and further weakened autonomy. Thus, the journalistic institution is less likely to be successful in determining how journalists carry out their daily work (Örnebring and Karlsson 2022). The neighbouring institutions have pushed journalism to innovate, but they have also pushed the institution of journalism toward institutional isomorphism.

Chinese Copyright Law Weakens the Journalistic Institution

Beyond the institution’s jurisdiction over how journalists carry out their everyday work, its long-term autonomy is contingent upon articulating, setting up and following institutional goals, such as public enlightenment (Örnebring and Karlsson 2022). The ability to do so depends on the institution’s relationships with other institutions. Thus, our third and final research question asked: What are the implications of the Chinese Copyright Law for journalism’s relationship with neighbouring institutions? While it is still too early to see the full consequences of the new Copyright Law for the journalistic institution, there are some indications that it has been weakened while other institutions have been strengthened.

First, the most prominent Chinese tech giants, Alibaba, Tencent, and Baidu, have developed their own news-writing bots and other journalism-oriented AI-powered tools. However, not only are tech companies populating the information space with AI-generated articles, they are also busy collaborating with the state to occupy more avenues in that space. For example, Alibaba collaborated with the Publicity Department of the CCP to develop the CCP’s own ideological conditioning app, Xuexi Qiangguo, realising “the platformization of propaganda” (Liang, Chen, and Zhao 2021). Thus, tech companies can enter journalism’s domain, subsequently shrinking journalism’s legitimate authority over its own domain. From a PNT perspective, this indicates that the journalistic institution is on the losing side of the “game”, since legitimate authority is intricately linked with an institution’s long-term viability (Rhodes 2007).

Second, when the state issues directives, it is ultimately tech companies that implement control over the creation and flow of information on their platforms (Ruan et al. 2021). Tech companies like Tencent and ByteDance adopt machine learning technologies to filter “inappropriate content”, working alongside human censors. As copyright includes the right to distribute, whoever owns the copyright has the upper hand in distribution. As we have established, the Copyright Law has been revised to favour commercial logic. Thus, not only do other institutions intrude on journalism’s domain by competing with news content, but journalism’s own distribution is conditioned by these neighbouring institutions. This serves as another indication of journalism’s disadvantage in the policy network.

Third, state media’s AI innovation has consistently been built on partnerships with tech companies. Xinhua News Agency collaborated with Alibaba to introduce the “Media Brain”; China Media Group is collaborating with Tencent, SenseTime, and other tech companies to develop what it calls a “5G + 4K/8K + AI” strategy; People’s Daily founded its AI Institute in partnership with Lenovo and iFlytek and its AI Media Lab in partnership with Baidu. Moreover, with the new law including news content under copyright protection, there might be a shift in the news aggregator business where the relationship between the aggregators and the content providers is formalised. In the short term, this is a gain for the institution of journalism, as the new law forces aggregators to acknowledge their dependency on journalism. Viewed from a longer perspective, however, it indicates how the journalistic institution must cater to external actors that were not relevant before.

On the one hand, the more or less coerced partnership with tech companies also means that journalism has to, at least partially, adapt to tools and work processes shaped by external actors. AI is thus a new tool in the newsroom, and it (or its proprietor) holds the copyright of the works it makes. On the other hand, tech companies capitalise on journalism’s legitimacy, which confers credibility on the tech firms and makes their innovation credible. Moreover, the state has its own plan for journalism. When Chinese President and Communist Party General Secretary Xi Jinping explicitly told a group of media professionals that “we have to explore the application of artificial intelligence in newsgathering, production, distribution, reception and feedback, to control algorithms with mainstream value orientation and comprehensively improve our ability to guide public opinion” (Xi 2019), he actively outlined the state’s goals of extracting value from AI in news media. This top-down, state-directed digital journalism innovation in the newsrooms has been met with a lack of enthusiasm, as journalists “considered themselves as followers rather than as initiators or architects of change” (Fang and Repnikova 2021, 14).

With the Chinese Copyright Law affirming the copyrightability of AI-generated content, tech companies might make significant gains in this new market and could be winners in the policy network, as is often implied in analyses of Western media systems (for instance, the power and responsibility of spreading misinformation lie with platforms). However, it is important to note that the domestic development of tech giants has been met with increasing state control. Recently, the state has stepped up its efforts to rein in the tech giants’ power. On top of the Antitrust Law that came into force in 2008, the State Council issued a Guideline on Antitrust in the Platform Economy in February 2021, specifically targeting tech giants. Additionally, China has been aggressively producing laws regulating the digital space, such as the Data Security Law, which came into effect on September 1, 2021, and the Personal Information Protection Law, which came into effect on November 1, 2021, all of which could be detrimental to data-hungry tech companies. In March 2022, a new set of rules, the Internet Information Service Algorithmic Recommendations Regulation, came into effect, making China the first country in the world to institute this kind of policy to regulate algorithms. Thus, it seems that the state has the upper hand in the policy network. Consequently, the state is the actor most able to influence the actions of other institutions and align them with its key goals and problem-solving.

Conclusions

Studying closely the Chinese Copyright Law and court cases on copyright disputes involving AI in journalism, we conclude that: a) while human actors predominantly drive AI innovations, the law provides room for actant-led innovation and content production. This is significant because it recognises a degree of autonomous creativity and originality deriving from the AI and from human-machine collaboration; b) the law institutionalises and separates authorship and ownership of AI-generated works. By doing so, AI-generated works become protected by the law, which means that AI is transformed into wealth-generating assets that favour tech companies that have the infrastructure to build AI and own the copyright of the works it generates; c) the law prioritises and reinforces the position of the state, investors and the tech industry (to the detriment of journalism) as key actors in the policy network. Taken together, the case study illustrates that, through AI innovation and regulation, journalism’s long-term autonomy is under threat. AI thus serves as a catalyst for the deinstitutionalisation of journalism in China by widening the power imbalance between journalism on the one hand and the state and tech companies on the other; d) there are commonalities between previous research in the West and this case study. Tech companies are imperative in the innovation and development of AI in both contexts, even if their names differ (Google, Apple, Meta, and Amazon in the West; Tencent, Alibaba, Baidu, and ByteDance in China). Likewise, the state’s actions are equally imperative, whether it plays a lesser (in the West) or a more prominent (in China) role, since its logics and behaviour affect the other institutions.

Besides offering empirical evidence rooted in an overlooked setting, this study provides two overarching contributions. We consider them in relation to the fact that our field (both scholarship and practice) primarily approaches AI in a newsroom setting from a “weak AI” perspective and through a Western lens. This entails that the intersection of AI and journalism has been primarily understood in terms of how rank-and-file journalists in Western democracies use AI to carry out their daily work. These approaches focus on individual journalists and how they adopt AI in the newsroom.

Against this background, the study’s first contribution is to show that important aspects of how AI is defined, introduced, and constrained within the institution of journalism are negotiated in the policy network well beyond, and well before, AI appears in the newsroom. Examining the institutional logics that shape AI adoption within copyright regimes may have higher explanatory power than assessing only the individuals in the newsroom. Hence, studying external actors and how they shape AI is essential.

The second contribution points to the importance of considering the specific cultural and societal setting of inter-institutional negotiation. AI in the newsroom is defined by the institutions (i.e. the state, tech corporations, etc.) that help shape AI and hold stronger positions. In the Chinese context, the state is a more prominent institution than in the Western context. However, as China has the ambition to become a global AI superpower, the “local” decisions and actions described in this article might eventually trickle out and affect newsrooms worldwide. The local specificity of a media system clashes with news organisations’ universal drive to harness AI innovations. With AI-powered machine translation (D. Zhang et al. 2021), the Chinese state’s ongoing efforts to occupy space in international discourse (Cook 2020) might be redoubled.

Our study has several limitations. First, our analysis focused on the reframing of one law and could not accommodate other laws that could have been conducive to our assessment, such as antitrust and data protection laws. Second, our approach relied only on documents and could have benefited from accounts from policymakers. Due to the highly political nature of policymaking in China, however, access to policymakers is particularly sensitive. Future studies could focus on media managers and journalists to capture the perceived effect that the new Copyright Law has had on their practice and their normative views on the use of AI innovations in newsrooms. Future studies could also conduct comparative analyses between the Chinese context and other, more market-oriented media contexts. Finally, legal regimes and laws evolve and adapt to new realities, and we need to pay close attention to who is shaping the legal regimes, whose interests are codified into laws, and how governments enforce those laws. Otherwise, the rule of law would collapse as trust in laws erodes.

Acknowledgements

The authors would like to thank the anonymous reviewers and the special issue editors for their valuable feedback that helped to improve this article. We would like to extend our appreciation to the participants of the special issue workshop and the colleagues at Karlstad University for their constructive comments on the earlier drafts of this article. We would also like to express our gratitude to Kuang Heng and João Gonçalves de Assunção for providing their legal expertise and insightful critiques.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This research is supported by the Anne Marie och Gustav Anders Stiftelse för mediaforskning.

Notes

1 Copyright Law of the People's Republic of China (promulgated by the Standing Committee of the National People’s Congress of the People’s Republic of China September 7, 1990, amended November 11, 2020, effective June 1, 2021), http://www.npc.gov.cn/englishnpc/c23934/202109/ae0f0804894b4f71949016957eec45a3.shtml.

2 Beijing Film Law Firm v. Baidu, Beijing Internet Court, Jing 0491 Min Chu. No. 239, 2018

3 Tencent v. Yingxun Tech, People’s Court of Nanshan, Yue 0305 Min Chu. No. 14010, 2019

References

  • Abbott, Ryan. 2020. “Artificial Intelligence, Big Data and Intellectual Property: Protecting Computer-Generated Works in the United Kingdom.” In Research Handbook on Intellectual Property and Digital Technologies, edited by Tanya Aplin, 322–337. Cheltenham, UK: Edward Elgar Publishing. https://doi.org/10.4337/9781785368349.00023.
  • Araujo, Theo, Natali Helberger, Sanne Kruikemeier, and Claes H. de Vreese. 2020. “In AI We Trust? Perceptions about Automated Decision-Making by Artificial Intelligence.” AI & Society 35 (3): 611–623. https://doi.org/10.1007/s00146-019-00931-w.
  • Bernisson, Maud. 2021. The Public Interest in the Data Society: Deconstructing the Policy Network Imaginary of the GDPR. Karlstad, Sweden: Karlstad University Press.
  • Bond, Toby, and Sarah Blair. 2019. “Artificial Intelligence & Copyright: Section 9(3) or Authorship without an Author.” Journal of Intellectual Property Law & Practice 14 (6): 423–423. https://doi.org/10.1093/jiplp/jpz056.
  • Bowen, Glenn A. 2009. “Document Analysis as a Qualitative Research Method.” Qualitative Research Journal 9 (2): 27–40. https://doi.org/10.3316/QRJ0902027.
  • Bridy, Annemarie. 2012. “Coding Creativity: Copyright and the Artificially Intelligent Author.” Stanford Technology Law Review 2012: 5–28.
  • Broussard, Meredith, Nicholas Diakopoulos, Andrea L. Guzman, Rediet Abebe, Michel Dupagne, and Ching-Hua Chuan. 2019. “Artificial Intelligence and Journalism.” Journalism & Mass Communication Quarterly 96 (3): 673–695. https://doi.org/10.1177/1077699019859901.
  • Brown, Nina I. 2019. “Artificial Authors: A Case for Copyright in Computer-Generated Works.” Science and Technology Law Review 20 (1): 1–41. https://doi.org/10.7916/stlr.v20i1.4766.
  • Campolo, Alexander, and Kate Crawford. 2020. “Enchanted Determinism: Power without Responsibility in Artificial Intelligence.” Engaging Science, Technology, and Society 6: 1–19. https://doi.org/10.17351/ests2020.277.
  • Caswell, David, and Konstantin Dörr. 2018. “Automated Journalism 2.0: Event-Driven Narratives: From Simple Descriptions to Real Stories.” Journalism Practice 12 (4): 477–496. https://doi.org/10.1080/17512786.2017.1320773.
  • Chinese State Council. 2017. “New Generation Artificial Intelligence Development Plan.” http://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm.
  • Chinese Supreme People’s Court. 2020. “Supreme People’s Court’s Guideline on Strengthening the Protection of Copyright and Copyright-Related Rights.” http://www.court.gov.cn/zixun-xiangqing-272221.html.
  • Cook, Sarah. 2020. “Beijing’s Global Megaphone - The Expansion of Chinese Communist Party Media Influence since 2017.” Freedom House. https://freedomhouse.org/report/special-report/2020/beijings-global-megaphone.
  • Czarniawska-Joerges, Barbara, and Guje Sevón. (Eds.). 1996. Translating Organizational Change. De Gruyter Studies in Organization 56. Berlin; New York: Walter de Gruyter.
  • Deuze, Mark. 2009. “The Media Logic of Media Work.” Journal of Media Sociology 1 (1/2): 22–40.
  • Diakopoulos, Nicholas. 2019. Automating the News: How Algorithms Are Rewriting the Media. Cambridge, MA: Harvard University Press.
  • Díaz-Noci, Javier. 2020. “Artificial Intelligence Systems-Aided News and Copyright: Assessing Legal Implications for Journalism Practices.” Future Internet 12 (5): 85. https://doi.org/10.3390/fi12050085.
  • DiMaggio, Paul J., and Walter W. Powell. 1983. “The Iron Cage Revisited: Collective Rationality and Institutional Isomorphism in Organizational Fields.” American Sociological Review 48 (2): 147–160. https://doi.org/10.2307/2095101.
  • Fang, Kecheng, and Maria Repnikova. 2021. “The State-Preneurship Model of Digital Journalism Innovation: Cases from China.” The International Journal of Press/Politics 27 (2): 497–517. https://doi.org/10.1177/1940161221991779.
  • Ferrer-Conill, Raul, and Edson C. Tandoc. 2018. “The Audience-Oriented Editor: Making Sense of the Audience in the Newsroom.” Digital Journalism 6 (4): 436–453. https://doi.org/10.1080/21670811.2018.1440972.
  • Genin, Aurora Liu, Justin Tan, and Juan Song. 2021. “State Governance and Technological Innovation in Emerging Economies: State-Owned Enterprise Restructuration and Institutional Logic Dissonance in China’s High-Speed Train Sector.” Journal of International Business Studies 52 (4): 621–645. https://doi.org/10.1057/s41267-020-00342-w.
  • Graefe, Andreas. 2016. “Guide to Automated Journalism.” Tow Center for Digital Journalism, Columbia University. https://doi.org/10.7916/D80G3XDJ.
  • Hansen, Mark, Meritxell Roca-Sales, Jonathan M. Keegan, and George King. 2017. Artificial Intelligence: Practice and Implications for Journalism. New York, NY: Tow Center for Digital Journalism. https://doi.org/10.7916/D8X92PRD.
  • He, Huifeng. 2015. “End of the Road for Journalists? Tencent’s Robot Reporter “Dreamwriter” Churns out Perfect 1,000-Word News Story - in 60 Seconds.” South China Morning Post, September 11. https://www.scmp.com/tech/china-tech/article/1857196/end-road-journalists-tencents-robot-reporter-dreamwriter-churns-out.
  • He, Tianxiang. 2019. “The Sentimental Fools and the Fictitious Authors: Rethinking the Copyright Issues of AI-Generated Contents in China.” Asia Pacific Law Review 27 (2): 218–238. https://doi.org/10.1080/10192557.2019.1703520.
  • Hinings, Bob, Thomas Gegenhuber, and Royston Greenwood. 2018. “Digital Innovation and Transformation: An Institutional Perspective.” Information and Organization 28 (1): 52–61. https://doi.org/10.1016/j.infoandorg.2018.02.004.
  • Hou, Rui. 2020. “The Commercialisation of Internet-Opinion Management: How the Market is Engaged in State Control in China.” New Media & Society 22 (12): 2238–2256. https://doi.org/10.1177/1461444819889959.
  • Kuai, Joanne, Bibo Lin, Michael Karlsson, and Seth C. Lewis. 2022. “From Wild East to Forbidden City: Mapping Algorithmic News Distribution in China through a Case Study of Jinri Toutiao.” Digital Journalism. https://doi.org/10.1080/21670811.2022.2121932.
  • Lewis, Seth C., Amy Kristin Sanders, and Casey Carmody. 2019. “Libel by Algorithm? Automated Journalism and the Threat of Legal Liability.” Journalism & Mass Communication Quarterly 96 (1): 60–81. https://doi.org/10.1177/1077699018755983.
  • Lewis, Seth C., and Oscar Westlund. 2015. “Actors, Actants, Audiences, and Activities in Cross-Media News Work: A Matrix and a Research Agenda.” Digital Journalism 3 (1): 19–37. https://doi.org/10.1080/21670811.2014.927986.
  • Liang, Fan, Yuchen Chen, and Fangwei Zhao. 2021. “The Platformization of Propaganda: How Xuexi Qiangguo Expands Persuasion and Assesses Citizens in China.” International Journal of Communication 15 (20): 1855–1874.
  • Lowrey, Wilson. 2018. “Journalism as Institution.” In Handbooks of Communication Science, Volume 19, edited by Tim P. Vos, 125–148. Boston, MA: Walter de Gruyter Inc.
  • Lu, Haiyan, Martin de Jong, and Ernst ten Heuvelhof. 2018. “Explaining the Variety in Smart Eco City Development in China-What Policy Network Theory Can Teach Us about Overcoming Barriers in Implementation?” Journal of Cleaner Production 196: 135–149. https://doi.org/10.1016/j.jclepro.2018.05.266.
  • Macfarlane, Alec, and Serenitie Wang. 2017. “Toutiao: China’s $11 Billion App That Wants to Organize the World’s Information.” CNNMoney, June 12. https://money.cnn.com/2017/06/12/technology/china-toutiao-news-app/index.html.
  • Marconi, Francesco. 2020. Newsmakers: Artificial Intelligence and the Future of Journalism. New York: Columbia University Press.
  • Meng, Bingchun. 2018. The Politics of Chinese Media: Consensus and Contestation. China in Transformation. New York, NY: Palgrave Macmillan.
  • Meng, Jing. 2021. “Discursive Contestations of Algorithms: A Case Study of Recommendation Platforms in China.” Chinese Journal of Communication 14 (3): 313–328. https://doi.org/10.1080/17544750.2021.1875491.
  • Negnevitsky, Michael. 2005. Artificial Intelligence: A Guide to Intelligent Systems. 2nd ed. Harlow, England; New York: Addison-Wesley.
  • Nelson, Jacob L. 2021. Imagined Audiences: How Journalists Perceive and Pursue the Public. Journalism and Pol Commun Unbound Series. New York: Oxford University Press.
  • Örnebring, Henrik, and Michael Karlsson. 2022. Journalistic Autonomy: The Genealogy of a Concept. Columbia, MO: University of Missouri Press.
  • Paschen, Ulrich, Christine Pitt, and Jan Kietzmann. 2020. “Artificial Intelligence: Building Blocks and an Innovation Typology.” Business Horizons 63 (2): 147–155. https://doi.org/10.1016/j.bushor.2019.10.004.
  • Pearlman, Russ. 2017. “Recognizing Artificial Intelligence (AI) as Authors and Inventors under U.S. Intellectual Property Law.” Richmond Journal of Law & Technology 24 (2): i–38.
  • Pickard, Victor. 2020. “Restructuring Democratic Infrastructures: A Policy Approach to the Journalism Crisis.” Digital Journalism 8 (6): 704–719. https://doi.org/10.1080/21670811.2020.1733433.
  • Rhodes, R. A. W., and David Marsh. 1992. “New Directions in the Study of Policy Networks.” European Journal of Political Research 21 (1–2): 181–205. https://doi.org/10.1111/j.1475-6765.1992.tb00294.x.
  • Rhodes, R. A. W. 2007. “Understanding Governance: Ten Years On.” Organization Studies 28 (8): 1243–1264. https://doi.org/10.1177/0170840607076586.
  • Richardson, Jeremy. 2006. “Policy-Making in the EU: Interests, Ideas and Garbage Cans of Primeval Soup.” In European Union: Power and Policy-Making, edited by Jeremy Richardson, 3–31. Abingdon: Routledge.
  • Rosati, Eleonora. 2019. “Copyright as an Obstacle or an Enabler? A European Perspective on Text and Data Mining and Its Role in the Development of AI Creativity.” Asia Pacific Law Review 27 (2): 198–217. https://doi.org/10.1080/10192557.2019.1705525.
  • Ruan, Lotus, Masashi Crete-Nishihata, Jeffrey Knockel, Ruohan Xiong, and Jakub Dalek. 2021. “The Intermingling of State and Private Companies: Analysing Censorship of the 19th National Communist Party Congress on WeChat.” The China Quarterly 246: 497–526. https://doi.org/10.1017/S0305741020000491.
  • Russell, Stuart J., and Peter Norvig. 2003. Artificial Intelligence: A Modern Approach. 2nd ed. Prentice Hall Series in Artificial Intelligence. Upper Saddle River, NJ: Prentice Hall/Pearson Education.
  • Stockmann, Daniela. 2013. Media Commercialization and Authoritarian Rule in China. Cambridge; New York: Cambridge University Press.
  • Tandoc, Edson, Kristy Hess, Scott Eldridge, and Oscar Westlund. 2020. “Diversifying Diversity in Digital Journalism Studies: Reflexive Research, Reviewing and Publishing.” Digital Journalism 8 (3): 301–309. https://doi.org/10.1080/21670811.2020.1738949.
  • Thornton, Patricia H., and William Ocasio. 1999. “Institutional Logics and the Historical Contingency of Power in Organizations: Executive Succession in the Higher Education Publishing Industry, 1958–1990.” American Journal of Sociology 105 (3): 801–843. https://doi.org/10.1086/210361.
  • Thornton, Patricia H., and William Ocasio. 2008. “Institutional Logics.” In The SAGE Handbook of Organizational Institutionalism, edited by Royston Greenwood, Christine Oliver, Roy Suddaby, and Kerstin Sahlin-Andersson, 99–129. Los Angeles, CA: SAGE.
  • Thornton, Patricia H., William Ocasio, and Michael Lounsbury. 2012. The Institutional Logics Perspective: A New Approach to Culture, Structure, and Process. Oxford, UK: Oxford University Press.
  • Thurman, Neil, Konstantin Dörr, and Jessica Kunert. 2017. “When Reporters Get Hands-on with Robo-Writing: Professionals Consider Automated Journalism’s Capabilities and Consequences.” Digital Journalism 5 (10): 1240–1259. https://doi.org/10.1080/21670811.2017.1289819.
  • van den Hoven van Genderen, Robert. 2018. “Do We Need New Legal Personhood in the Age of Robots and AI?” In Robotics, AI and the Future of Law, edited by Marcelo Corrales, Mark Fenwick, and Nikolaus Forgó, 15–55. Perspectives in Law, Business and Innovation. Singapore: Springer Singapore. https://doi.org/10.1007/978-981-13-2874-9_2.
  • Vos, Tim P. 2019. “Journalism as Institution.” In Oxford Research Encyclopedia of Communication, edited by Tim P. Vos. Oxford University Press. https://doi.org/10.1093/acrefore/9780190228613.013.825.
  • Weeks, Lin. 2014. “Media Law and Copyright Implications of Automated Journalism.” NYU Journal of Intellectual Property and Entertainment Law 4: 67–93.
  • Westlund, Oscar, and Seth C. Lewis. 2014. “Agents of Media Innovations: Actors, Actants, and Audiences.” The Journal of Media Innovations 1 (2): 10–35.
  • Xi, Jinping. 2019. “Xi Jinping: Jiakuai tuidong meiti ronghe fazhan goujian quanmeiti chuanbo geju [Xi Jinping: Accelerate the Development of Media Convergence, Build an Omnimedia Communication].” March 15. http://www.gov.cn/xinwen/2019-03/15/content_5374027.htm.
  • Yanisky-Ravid, Shlomit. 2017. “Generating Rembrandt: Artificial Intelligence, Copyright, and Accountability in the 3A Era—the Human-Like Authors Are Already Here—A New Model.” Michigan State Law Review 2017: 659–726.
  • Ye, Ying, and Mike Adcock. 2021. “Chinese Copyright Law and Computer-Generated Works in the Era of Artificial Intelligence.” In Algorithmic Governance and Governance of Algorithms: Legal and Ethical Challenges, edited by Martin Ebers and Marta Cantero Gamito, 157–167. Data Science Machine Intelligence and Law. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-50559-2_8.
  • Yin, Robert K. 2018. Case Study Research and Applications: Design and Methods. 6th ed. Los Angeles: SAGE.
  • Yu, Peter K. 2022. “The Long and Winding Road to Effective Copyright Protection in China.” Pepperdine Law Review 49 (3): 681–732.
  • Zhang, Daniel, Saurabh Mishra, Erik Brynjolfsson, John Etchemendy, Deep Ganguli, Barbara Grosz, Terah Lyons, et al. 2021. “The AI Index 2021 Annual Report.” Human-Centered AI Institute, Stanford University. https://aiindex.stanford.edu/report/.
  • Zhang, Shixin Ivy. 2019. “The Business Model of Journalism Start-Ups in China.” Digital Journalism 7 (5): 614–634. https://doi.org/10.1080/21670811.2018.1496025.
  • Zhao, Yuezhi. 2008. Communication in China: Political Economy, Power, and Conflict. Lanham, MD: Rowman & Littlefield Publishers.
  • Zheng, Haitao, Martin De Jong, and Joop Koppenjan. 2010. “Applying Policy Network Theory to Policy‐Making in China: The Case of Urban Health Insurance Reform.” Public Administration 88 (2): 398–417.
  • Zhou, Xueguang. 2010. “The Institutional Logic of Collusion among Local Governments in China.” Modern China 36 (1): 47–78. https://doi.org/10.1177/0097700409347970.