Research Article

Scrutinizing Algorithms: Assessing Journalistic Role Performance in Chinese News Media’s Coverage of Artificial Intelligence

Received 10 Jul 2023, Accepted 21 Mar 2024, Published online: 03 Apr 2024

ABSTRACT

Coverage of artificial intelligence and algorithms has been largely examined by scholars studying countries in the Global North that have historically supported conditions for critical journalism and watchdog journalistic role performances. However, it is unclear if the findings from such work would be applicable to authoritarian contexts that do not share those conditions. This study addresses this gap through a textual analysis of 23 journalistic investigations of AI and algorithmic systems published in Chinese news media between 2019 and 2023. We found that Chinese journalists were critical of multiple aspects of algorithmic systems and called for urgent AI governance at the nation-state level. Despite the technical nature of the issue, those journalists overwhelmingly employed traditional reporting techniques to uncover political and economic intersections—namely those resulting from the rise of tech companies and the algorithms they implement. Chinese journalists simultaneously performed the watchdog and loyal-facilitator roles by highlighting the risks posed by private platforms and their algorithms while casting the state as a protector and responsible steward of technological development. The study thus highlights the intricate roles necessary to perform critical journalism in authoritarian contexts, and the possibilities that the case of AI permits.

Introduction

Artificial Intelligence (AI) is becoming an increasingly salient part of everyday life. We see it used today in social domains ranging from criminal justice (Shi Citation2022) to education (Ouyang and Jiao Citation2021) to healthcare (Ruckenstein and Schüll Citation2017). However, AI is more than a material technological artifact. Its power also comes from how it is imagined by different stakeholders and how it is put to use by people and institutions (Bucher Citation2018). The combination of mythology and deployment grants AI-infused systems significant structuring power, not only in their ability to make potentially life-changing decisions at scale but also in their ability to have such decisions be accepted as legitimate by sizable segments of society (Just and Latzer Citation2017). Because they carry such power, it is crucial that the public is informed about who creates such systems, how those systems are put to use, what effect those systems have, and why those systems are or are not fair (Diakopoulos Citation2015).

The watchdog role and accountability functions that journalists can perform are important for keeping algorithmic systems in check and helping the public make sense of them (Diakopoulos Citation2019). How journalists go about doing that work can be conceptualized through the lens of journalistic role performance or the enactment of professional values and ideals by journalists within the bounds of a set of constraints (Mellado Citation2015; Mellado, Hellmueller, and Donsbach Citation2017). For example, journalists may be driven (and operate within an environment that enables them) to serve as a watchdog on such power and to interrogate those systems. Conversely, they may feel compelled (whether by their professional ideals or the lack of journalistic autonomy, or a combination thereof) to report in ways that advance the interests of institutional stakeholders.

AI also offers a particularly interesting case to study because it involves the intersection of strategic national interests, private enterprise, and mediated constructions of sociotechnical systems (Bucher Citation2017; Schellewald Citation2022). It further offers journalists an opportunity to subvert certain structural constraints imposed upon them: they can call attention to inequalities and questions of fairness through critiques of the algorithms that reflect institutional logics, rather than critiques of the institutions themselves. Such reporting may manifest itself through what has been termed critical journalism (Tong Citation2019), and in particular through an approach called algorithmic accountability reporting (Diakopoulos Citation2015; Citation2019) that aims to shed light on the so-called “black boxes” that characterize many algorithmic systems, often by adopting more technical methods.

While scholars have examined news coverage of artificial intelligence and algorithms (Cools, Van Gorp, and Opgenhaffen Citation2022; Köstler and Ossewaarde Citation2022), such work has focused on countries in the Global North that have historically supported conditions for critical journalism and have been guided by value systems that align with so-called watchdog journalism. However, it is unclear if the findings from such work would be applicable to authoritarian contexts like that of China, which is near the forefront of AI development but is highly distinct in terms of its media system (Hallin and Mancini Citation2011), journalistic culture (Mellado et al. Citation2017), and, potentially, its construction of AI as an algorithmic imaginary (Zeng, Chan, and Schäfer Citation2022).

This study addresses this gap and contributes to “de-Westernizing” our discipline (Waisbord and Mellado Citation2014) through a textual analysis of news coverage of AI and algorithms by Chinese news media. By qualitatively examining a set of 23 journalistic investigations of AI and algorithms that were highly read in Chinese social media between 2019 and 2023, this study sheds light on how algorithmic accountability reporting is conducted in China and which journalistic roles are performed in Chinese journalists’ investigation of the use of AI and algorithms in China. We contribute to the literature by highlighting how particular cases—such as that of AI—allow journalists working in authoritarian contexts to sidestep some environmental constraints and simultaneously perform their expected facilitative roles as well as more critical roles typically associated with liberal democratic systems.

Literature Review

Depictions of AI and Critical Journalism in China

A number of recent studies have examined how AI and algorithmic systems are depicted by news media. These studies are premised on two important notions. First, those objects are not just material technological artifacts; they become socially constructed through meaning-making processes that are, in part, mediated by journalists. These processes construct algorithmic imaginaries, or “the way[s] in which people imagine, perceive and experience algorithms” (Bucher Citation2017). Second, journalists’ understandings of AI and algorithmic systems are themselves shaped by the imaginaries they help construct. This fosters a cycle of mutual shaping and reification that frequently intersects with political and economic forces (Bareis and Katzenbach Citation2022; Jasanoff and Kim Citation2009).

These studies have made important contributions to the literature. For example, Cools, Van Gorp, and Opgenhaffen (Citation2022) found that U.S. journalistic coverage of AI has become more optimistic over time. They also found that a more utopian picture was painted in topics related to work, health, and sports, with more dystopian depictions in topics related to politics. Köstler and Ossewaarde (Citation2022) found that German outlets mediated the sociotechnical visions of AI by endorsing the German government’s framing of AI’s economic potential. However, German media also frequently called for alternatives to challenge established power structures and to consider different political designs.

It is notable that these studies focus on depictions of AI in liberal democracies within the Global North, where journalism is expected to “speak truth to power.” That approach to journalism, which centers on the critique of elites and institutional failures, has been termed critical journalism (Tong Citation2019). However, while critical journalism may appear natural and desirable in liberal democracies, it is often impossible to practice under the tightly controlled media systems found under authoritarian regimes.

The context of China illustrates the evolution of these constraints and how journalists navigate them. Since the Chinese economic reform that began in the late 1970s, Chinese media have undergone decades of commercialization, conglomeration, and convergence (Meng Citation2018; Stockmann Citation2013). This has resulted in a dual press system of official media and commercial media (Meng Citation2018), though some scholars also observe an in-between category of semi-official outlets (Stockmann Citation2013). Notably, the media marketization forces initiated in the 1980s and accelerated in the 1990s created conditions to support the propagation of critical journalism (Tong Citation2019). However, hardline media policies enacted over the past decade under Xi Jinping and the parallel tightening of the grip on state-sanctioned ideology have once more made it difficult to practice critical journalism (Hu Citation2023; Svensson Citation2017). Journalists who wish to do so thus risk being sanctioned or must find “side doors” for conveying critiques.

Scholars have found that the discourse around AI in China is often dominated by state and industry actors who boost AI’s positive economic and political potential (Zeng, Chan, and Schäfer Citation2022). The less-salient critical discourses are driven by cultural elites on social media rather than journalists, and those discussions are frequently informed by Western sources and shaped through international deliberations (Mao and Shi-Kupfer Citation2023). However, there are few examinations of journalistic depictions of AI in China and none that examine the application of critical journalism to that case. We thus ask the following research question:

RQ1: What critiques of AI and algorithmic systems did Chinese journalists offer in their investigations of those objects?

Algorithmic Accountability Reporting

One way to enact critical journalism within the context of AI is to engage in what Diakopoulos (Citation2019, 207) has called algorithmic accountability reporting, or journalistic work that aims to provide “descriptions, explanations, and sometimes even justifications for the behavior of decision-making algorithms, particularly in cases where there was a fault or error.” Diakopoulos (Citation2019) points to three main techniques for practicing this kind of reporting. The first involves directly inspecting the material (i.e., technical) aspects of the algorithm(s). This is difficult both because such algorithms are almost always proprietary and highly guarded, and because such an analysis would require a level of technical expertise that journalists rarely have. The second focuses on interviewing individuals closely associated with the development of said algorithm(s) (e.g., software engineers), who can shed light on some of those “black-boxed” components and logics. This is similarly difficult because such individuals may be reluctant to speak about such proprietary work or be bound by non-disclosure agreements. The third option is to attempt to reverse engineer aspects of an algorithm or system by simulating inputs or contrasting results in order to identify the inputs that algorithms are most sensitive to or the logics they apply.
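To make the third technique concrete, below is a minimal sketch (in Python) of what an input-variation audit might look like: the same request is issued repeatedly while a single attribute—here, the device type—is varied, and the returned outputs are compared. Everything in the sketch is hypothetical; `fetch_quote` is a stand-in for whatever data collection a newsroom could actually perform (e.g., reading prices off scripted test devices), not a real API.

```python
def fetch_quote(device: str, origin: str, destination: str) -> float:
    """Hypothetical stand-in for real quote collection; returns a
    deterministic fake so the sketch runs offline."""
    base = 15.0 + 0.8 * len(origin + destination)
    surcharge = 0.4 if device == "ios" else 0.0  # invented device effect, for illustration only
    return round(base + surcharge, 2)

def paired_audit(routes, devices=("ios", "android")):
    """Request the same ride for each device type and record the spread."""
    results = []
    for origin, destination in routes:
        quotes = {d: fetch_quote(d, origin, destination) for d in devices}
        spread = max(quotes.values()) - min(quotes.values())
        results.append((origin, destination, quotes, spread))
    return results

for origin, dest, quotes, spread in paired_audit([("Station A", "Mall B"), ("Airport", "Hotel C")]):
    print(f"{origin} -> {dest}: {quotes} (spread: {spread:.2f} RMB)")
```

Holding every input constant except one makes any systematic difference in outputs attributable to the varied attribute, which is the core logic of this reverse-engineering technique.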

Diakopoulos (Citation2019) also identifies four common types of stories related to algorithmic accountability reporting: uncovering discrimination and unfairness; identifying inaccurate predictions and classifications; explaining violations of laws or social norms; and highlighting human misuse of algorithms. Building on these “algorithmic watchdogs,” Napoli (Citation2021, 378) discusses the parallel emergence of a “platform beat” that “is more narrowly focused on the use and operation of the digital platforms that play an increasingly central role in the dissemination or curation of news and information,” among other aspects of social life. Schwinges et al. (Citation2023) similarly found that US and German news outlets served as critical but passive observers of “Big Tech,” characterizing them as “a tame yet increasingly growling watchdog” (13). As many scholars have observed, given the extensive role that algorithms and platform companies play in everyday social life—and the growing salience of AI in particular—it is crucial to interrogate them using both traditional techniques and those associated with algorithmic accountability. Collectively, these techniques not only draw attention to the power of algorithms and their less apparent features but can also provide accessible explanations of their inner workings and the stakeholders who stand to gain and lose from their applications. Thus, our second research question asks:

RQ2: Which techniques associated with algorithmic accountability reporting are featured in Chinese journalists’ investigations of AI and algorithmic systems?

Journalistic Role Performance

Over decades, journalism scholars have developed a robust body of literature examining the development of professional values and norms, and how those systems translate into (or diverge from) actual practice. Journalistic role performance offers a particularly helpful theoretical tool for examining that process and empirically tracing the value-laden characteristics of journalistic outputs (Mellado Citation2015). Journalistic role performance has been defined as “the collective outcome of concrete newsroom decisions and the style of journalistic reporting, considering different constraints that influence and enable journalism as a professional practice” (Mellado et al. Citation2017, 5). As such, role performances focus on what journalists do, which is often different from what they say they do and the values they report holding (Mellado and Van Dalen Citation2014).

The role performance framework proposed by Mellado (Citation2015) includes six distinct roles situated within three conceptual domains:

Within the domain of journalistic voice, the presence of an active voice—characterized by the use of first-person voice, charged adjectives, the taking of sides, and presentation of resolutions—signifies an interventionist role wherein journalists advocate for specific causes or groups (Kijratanakoson Citation2023). Conversely, the absence of a journalistic voice reflects a more detached and non-interventionist (or neutral disseminator) role, aiming to maintain objectivity in a potentially volatile cultural context.

The power relations dimension—which is of particular interest to this study—focuses on the relationship between journalists and those in power, and serves as the parent to two additional roles. The watchdog role involves holding power accountable by questioning institutional actors and giving voice to contrasting ones, drawing attention to institutional failures and malfeasance, and scrutinizing processes that impact the public. The loyal-facilitator role highlights the perspectives of those in power, promotes national and regional policies and the progress being made toward them, and protects the images of institutional actors. This role is especially prominent in contexts—which include many Asian countries—where the state intervenes directly and/or indirectly within media industries, such as by owning large media companies or imposing strict media laws (Mellado et al. Citation2017). It is also important to note that although these two roles are frequently described as oppositional, they are independent and can coexist (Humanes et al. Citation2021)—as evidenced by the context of Singapore, which offers relatively little freedom of the press (Tandoc and Duffy Citation2016; Wu Citation2022).

The audience approach dimension pertains to how journalists view their audience. It serves as the umbrella for the three remaining roles. The civic role encourages and provides information to support citizen participation in public life, educates citizens about their duties and rights, and platforms their questions and demands. The service role provides information, knowledge, and advice about goods, services, and opportunities applicable to news audiences’ daily lives. The infotainment role aims to make news more entertaining by utilizing narratives, emotion-laden elements, and personalization, and may involve emphasizing topics more closely associated with audiences’ private lives and leisure activities.

These dimensions of role performance are not just reflections of how journalists enact their values. They are also interconnected with different stakeholders’ expectations of the institution of journalism and the constraints imposed by various political, economic, and cultural structures (Mellado et al. Citation2017; Mellado, Hellmueller, and Donsbach Citation2017). Moreover, the roles are not mutually exclusive in theory or in practice, meaning that multiple roles may be performed simultaneously by an actor or in the construction of a particular news story (Humanes et al. Citation2021). While earlier research found that Chinese journalists generally adhered to a loyal-facilitator role (Pan et al. Citation2001), recent studies have painted a more complex picture characterized by negotiations over power enacted via delicate dances that intersect with state, market, technological, and audience forces (Meng Citation2018; Ren and Dan Citation2022).

These intersections are evident in the case of AI, which is regarded as a crucial component within China’s national strategic plan. China aims to become a global AI superpower by 2030 (Chinese State Council Citation2017) and is keen on taking the lead in setting the norms in the nascent global AI governance regimes (Cheng and Zeng Citation2022; Veale, Matus, and Gorwa Citation2023). Accomplishing this involves negotiation and coordination among multiple stakeholders (Zeng Citation2022), including state officials, journalistic actors, software developers, platform companies, and the Chinese public. As is the case elsewhere, algorithmic systems developed by state and market actors are frequently touted as “technological fixes” to many social problems (Bareis and Katzenbach Citation2022). This leads us to our final research question:

RQ3: Which journalistic role performances are emphasized in Chinese journalists’ investigations of AI and algorithmic systems?

Method

We conducted a qualitative textual analysis of Chinese-language news articles written by Chinese journalists that investigate algorithms and AI to identify evidence of critical journalism and algorithmic accountability reporting techniques while assessing “how different dimensions of professional roles materialize in journalistic outputs” (Mellado Citation2015, 610). Our dataset consisted of 23 Chinese news articles that were classified as journalistic investigations by two of the authors—who are native Chinese speakers—and were published on WeChat between January 2019 and May 2023 (see Table 1). Press releases, official statements, non-investigative news reports, translations, and items that did not substantially engage with AI or algorithmic systems were excluded. We sampled the articles from WeChat because it has become the most important channel for Chinese people to receive, discuss, and share news (Xu Citation2022), with technological affordances designed to facilitate increasingly mobile social media news use in China (Peng and Miller Citation2023).

Table 1. List of news stories in the sample.

We gathered the data through a combination of search queries and snowball sampling within the WeChat ecosystem. First, we selected the “article” column in the Sogou-WeChat web search engine and queried “algorithm(s)” and “AI” in Chinese. This yielded over 45,000 results that were sorted by an algorithm that weighed factors like relevance, publication date, and popularity. Although the search engine did not offer a filter for news articles, we noticed that several impactful journalistic pieces were ranked high in the search results. We therefore selected 11 investigations that matched our sampling criteria by going through the first 30 pages of the search results, and then used the media accounts that published those articles as seed accounts for locating other items that matched our criteria. The 12 additional investigations located this way were sometimes original works by the seed accounts and sometimes reposts of works from other media outlets that the seed accounts amplified.
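As an illustration, the first stage of this collection could in principle be scripted as in the hedged sketch below. The endpoint, query parameters, and HTML selector are assumptions about the public search interface at weixin.sogou.com rather than a documented API, and any retrieved items would still require the manual screening against the sampling criteria described above.

```python
import time
import requests
from bs4 import BeautifulSoup

SEARCH_URL = "https://weixin.sogou.com/weixin"  # assumed article-search endpoint

def search_articles(query: str, pages: int = 30):
    """Collect (title, link) pairs from the first `pages` pages of results."""
    links = []
    for page in range(1, pages + 1):
        resp = requests.get(
            SEARCH_URL,
            params={"type": 2, "query": query, "page": page},  # type=2: the "article" column (assumed)
            headers={"User-Agent": "Mozilla/5.0"},
            timeout=10,
        )
        soup = BeautifulSoup(resp.text, "html.parser")
        for a in soup.select("h3 a"):  # result-title selector (assumed)
            links.append((a.get_text(strip=True), a.get("href")))
        time.sleep(2)  # polite crawling delay
    return links

candidates = search_articles("算法 AI")  # "algorithm(s)" and "AI" in Chinese
print(len(candidates), "candidate items for manual screening")
```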

These investigations were all published by media outlets that branded themselves as either general news outlets or specialists in the finance and technology industries. About half of the articles were published by traditional outlets that established their reputations before the digital era. The other half was produced by emerging new media outlets that are “native” to the social media environment. We did not restrict the publication dates—and the sample therefore includes some older stories—as we sought to represent what WeChat was highlighting to users at the time of the study.

We then qualitatively analyzed each investigation, focusing on the way(s) in which technologies, institutional actors, national policies, and consequences were described, evaluated, and topically situated. We also looked for evidence of particular reporting techniques, both within and outside of the text (e.g., methodological notes). Finally, we were particularly sensitive to references—implicit and explicit—to structuring forces identified in the scholarly literature, such as China’s non-democratic media system, its focus on AI as a strategic national objective, and the growing influence of platform giants.

After evaluating each article on an individual basis, we reviewed our notes and identified recurring elements. We then reviewed each investigation once more with a particular sensitivity to those identified elements and to see if any new related phenomena of interest emerged. For each round, the two researchers held meetings and collaborated closely to discuss their observations. This process was repeated until we reached saturation for the phenomena of interest within our sample.

Findings

Critiques of AI and Algorithmic Systems

RQ1 asked about the critiques that Chinese journalists offered of AI and algorithmic systems in their investigations. We found evidence of the application of critical journalism in nearly all of the articles. The critiques included examinations of the social implications of those objects, such as the dominance of mechanical logic, the objectification of everyday life, and the reduction of human agency. These critiques, in turn, promoted an algorithmic imaginary that cast algorithms and AI as objects of discipline, structuring people’s lives in predominantly dystopian ways.

Dominance of Mechanical Logic

Algorithms were depicted as possessing autonomous agency and therefore capable of “invading” people’s lives, “taking control” of workflows, and even “ruling” over certain domains. These portrayals raised concerns about the potential domination of humans by seemingly rational machines and in turn emphasized a desire to return to humanism. For example, in Articles #2 and #9 (see Table 1), algorithms were equated with the practice of quantification and the ideal of unadulterated rationality within the context of a quintessential human activity: romance.

Objectification of Everyday Life

The articles painted a broad picture of how the widespread implementation of algorithms had led to the objectification of everyday life. For example, Article #13 sought to illustrate how people were reduced to demographic and behavioral data to be encoded into labeled features and fed into opaque black box models, creating a world where decisions and outcomes were driven by incomprehensible processes designed to govern data points. Similarly, Article #14 highlighted how influencers were being victimized by algorithmic logics, risking their personal well-being to produce content they believed recommendation algorithms would reward.

Reduction of Human Agency

A third recurring theme was that of human agency being reduced. This manifested itself most clearly in the terminology that was used in the articles. For example, in Article #3, the journalist noted that the metaphor of workers “being trapped in systems” in a prior story resonated deeply with many readers and accurately captured the experience of living within modern, algorithmically driven systems that prioritize efficiency at the cost of agency.

These articles collectively reflected and reconstructed an algorithmic imaginary that cast algorithms and AI as objects of discipline, which is evocative of dystopian depictions of futuristic—and now contemporary—life in many novels and films. In this context, the concept of discipline carries a dual meaning: humans are increasingly subjected to the discipline imposed by algorithms, and in response to this looming threat, novel approaches must be devised to discipline the algorithms. AI was depicted as exerting influence over humans in overt ways, such as enforcing progress towards performance goals, and in subtler ways, like engaging in opaque matchmaking processes. The notion of mechanical objectivity was presented not through the lens of fairness but rather as a dehumanizing force. More broadly, the investigations promoted a construction of algorithms as objects that warrant resistance or, at the very least, necessitate strong and benevolent guardians to ensure proper governance.

Applications of Algorithmic Accountability Reporting

RQ2 asked about the techniques associated with algorithmic accountability reporting that were featured in the investigations. We found a highly limited application of the technical techniques that have been proposed as being more unique to algorithmic accountability reporting, though there was ample evidence of the use of traditional reporting techniques.

Limited Applications of Technical Methods

The investigations indicated that Chinese journalists who are interested in AI and algorithmic systems, including those who employ investigative methods, may possess relatively limited technical know-how or capacity to either audit computer code or reverse-engineer algorithmic systems. Notably, the majority of the reporters appeared to have backgrounds in social or business reporting—there was no evidence of technical backgrounds. The lone clear attempt to “audit the algorithm” appeared in a story wherein the journalist used taxi-hailing apps on different mobile phones to assess whether the phone’s operating system was factored into the pricing for a ride. While that reverse-engineering approach was no doubt rudimentary, it nevertheless was a clear attempt to systematically evaluate the potentially discriminatory behaviors of algorithmic decision-making.

The reporter used a number of mobile phones with different prices to conduct multiple tests. It was found that using the Didi Taxi app to call an express car at the same place and at the same time, the waiting time is less for iPhone users than for Android users. In the test, the reporter also found many times that the estimated price shown for iPhone users is more expensive than for Android mobile phone users. Although the price difference was often within 0.5 RMB, this problem occurred 5 times in the 12 tests. (Article #10)
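To illustrate why such a paired test is suggestive but rudimentary, the sketch below tallies hypothetical paired quotes of the kind described in the excerpt and applies a naive binomial check. The numbers are invented to mirror the reported pattern (a higher iPhone quote in 5 of 12 trials, with differences within 0.5 RMB); they are not the reporter’s actual data.

```python
from math import comb

# Invented (iphone_quote, android_quote) pairs in RMB for the same ride
# requested at the same time and place on the two devices.
trials = [
    (23.5, 23.2), (18.0, 18.0), (31.4, 30.9), (22.1, 22.1),
    (27.8, 27.5), (19.6, 19.6), (25.0, 24.6), (21.3, 21.3),
    (29.9, 29.9), (24.4, 24.1), (20.7, 20.7), (26.2, 26.2),
]

higher = sum(1 for ios, android in trials if ios > android)
n = len(trials)

# Naive benchmark: if device type played no role, a higher iPhone quote would
# appear in any given trial at most half the time. Probability of seeing at
# least `higher` such trials out of `n` under p = 0.5:
p_value = sum(comb(n, k) for k in range(higher, n + 1)) / 2**n
print(f"iPhone quoted higher in {higher}/{n} trials; one-sided p = {p_value:.3f}")
```

Under this crude benchmark the observed pattern (p ≈ 0.81) could easily arise by chance, which underscores why systematic audits typically require far more paired trials—and a careful handling of ties—before drawing conclusions.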

Extensive Use of Traditional Reporting Methods

Nearly all of the articles examined utilized traditional reporting methods, such as field research, interviews with the subjects of the journalistic investigations, expert accounts, and quoting publicly available documents. The use of quantitative data was also evident in several of the stories. Such methods were sufficient for tackling a range of story types associated with algorithmic accountability reporting, including discrimination and unfairness, inaccurate predictions and classifications, violations of laws and social norms, and human misuse of algorithms. However, while they shed light on those topics, they arguably did not permit—or, at minimum, did not result in—detailed examinations of how those algorithms functioned or low-level explanations of how they exerted their power over users and citizens. For example, one journalist published a methodological supplement to her article explaining that her first instinct was to investigate the system by interviewing the people immersed in it, a traditional interviewee-centered method commonly seen in investigative journalism.

During that time, I went to the streets and ran up to chat with delivery riders whenever I saw them sitting on the side of the road resting. In fact, the efficiency was very low and the success rate was not high. They always wondered if I tried to investigate them on behalf of the platforms; or they went for an order in the middle of the conversation. Even if I left my contact information, it was still hard to maintain a deep communication. Occasionally, I met a few people who were willing to talk, which was lucky. I decided to change my mind and look for riders online. (Methodological supplement to Article #3)

Notably, the journalists appeared to rely mostly on the insights of subject experts, and rarely included information attributed to software developers or system engineers. While those experts may have possessed stronger technical skills than the journalists, they also lacked access or the ability to peek into the “black boxes” that characterize the majority of highly prevalent algorithms. In other words, while the collaboration between journalists, experts, and social groups plays a pivotal role in enabling investigative algorithmic reporting in China, such collaboration is likely to prove insufficient to truly elucidate the algorithmic subjects of the reporting. The same journalist encapsulated this well in the methodological supplement to that article:

To gain a deeper understanding of the “system,” after reading and studying several papers by these two scholars, we contacted Dr. Ping Sun, one of the first scholars in China to conduct research on delivery riders and the algorithms behind them. … There are also regrets. During the interview process, we contacted the engineers on the algorithmic technology team of Meituan and Ele.me, but most of them refused the interview requests using the excuse of “company confidentiality.” For this part, we had to do our best to search for and go through the relevant public documents and interviews to restore and elaborate on the basic principles of the systems. (Methodological supplement to Article #3)

Manifestation of Journalistic Role Performances

RQ3 asked about the journalistic role performances that were emphasized in the investigations. While there were performances of all six of the roles identified by Mellado (Citation2015), three of them were more prominent: service, watchdog, and loyal-facilitator. These roles sometimes manifested alongside one another, with the loyal-facilitator role taking on a fairly unique tint amid clear performances of the service and watchdog roles.

The Service Role

The investigative pieces analyzed adopted an audience-centered approach, catering to individual users’ needs while also fostering civic discussion. Some of the articles focused on how individual users resisted algorithmic control in their daily lives and offered practical tips on how to identify algorithmic harms and evade harmful algorithms. The journalists repeatedly chronicled personal experiences and relatable situations that raised awareness of the proliferation of algorithms and how they manifest in specific aspects of audiences’ everyday lives. In doing so, they raised clear concerns pertaining to data privacy and algorithmic manipulation. For example, Articles #5 and #12 illustrated stories of young users who consciously disconnected from algorithmically driven systems and were able to resist some forms of algorithmic harm by engaging in alternative practices.

These young people know that in today’s society, it is impossible to completely separate from algorithms; they can only try to ‘fight the algorithm’ in their own way … Some people use methods such as ‘not logging in, not liking, not following, not commenting’ to minimize their online traces; others employ multiple cell phones and numbers to escape potentially addictive online environments; some even create different accounts for different scenarios. (Article #5)

The Watchdog Role

The news stories examined a range of topics, such as the amplification of existing biases, price discrimination, surveillance, and labor control. To differing extents, they exposed underlying power structures that supported and were enabled by algorithmic systems and used an active (interventionist) voice to call for civic attention and collective actions against inequalities. Within these stories, journalists also gave voice to ordinary citizens, especially members of social groups that were adversely affected by particular algorithms, and brokered a discussion about their predicaments and the potential changes that were necessary to address harms. These reports emphasized that algorithmic issues are not merely personal matters but are of public concern and are, in turn, influenced by broader power dynamics.

Half Moon Talk’s reporter has uncovered concerning issues in the online car-hailing industry. Behind the chaotic billing practices are the complex regulations and platform’s “algorithmic harvesting” of unsuspecting consumers.

Tang Jiansheng, deputy secretary-general of the Shanghai Municipal Consumer Protection Committee, highlighted the alarming trend of “exploiting loyal customers” using big data. … “Product matching and price discrimination based on these discriminatory algorithms represent an abuse of algorithmic power by the platforms and a direct violation of consumers’ right to fair trading,” said Tang. (Article #10)

Some reports even succeeded in promoting civic action, subjecting big technology companies to public scrutiny, and compelling them to make changes. For example, Article #3 drew public attention to the precarious working conditions of delivery workers and resulted in tangible changes. The investigation began by examining car accidents involving food delivery workers and traced the issues back to algorithmic systems that gamified and accelerated labor processes. This sparked public outrage over the exploitation of food delivery workers by platforms, triggering heated discussions about digital labor and the gig economy. In response, two major food delivery platforms pledged to adjust their systems to create safer working conditions (Sun Citation2020).

The Loyal-Facilitator Role

The news reports we analyzed also clearly illustrated how the watchdog role must sometimes co-exist with the loyal-facilitator role, highlighting the complexities of critical journalistic performances within a tightly regulated media system. Not only did the reports focus on the implications of algorithms and AI applications within the private sector—there was barely any examination of applications within the public sector—but they also tended to cast actors associated with the state as the very protectors from those misguided, if not oppressive, applications. In other words, journalists performed the watchdog role when it came to tech giants and private enterprises but also served as a loyal facilitator for the state. For example, some articles presented policy developments as powerful solutions for regulating the problematic algorithms found in the private sector while avoiding equivalent examinations of how those national policies were formulated and what their potentially detrimental consequences might be.

In a significant step toward curbing the negative effects of recommendation algorithms, actions have been taken to address the issue. On March 1, 2022, the “Internet Information Service Algorithm Recommendation Management Regulations” were officially implemented. This pioneering move, jointly issued by the Ministry of Industry and Information Technology, the Cyberspace Administration of China, and other concerned departments, has set a global precedent for algorithm regulation.

Subsequently, as of March 28, various popular apps … have responded by introducing buttons to stop personalized content and advertising recommendations. This means users now have the choice to take control and say goodbye to the fear of being excessively influenced by algorithms. (Article #13)

Discussion and Conclusion

Our analysis led to three main findings. First, the investigations depicted AI and algorithmic systems in highly critical ways, demonstrating Chinese journalists’ willingness to engage in critical journalism—but only when it involved critiques of non-state actors (e.g., private platform companies). Second, the journalists relied on traditional reporting methods to cover topics associated with algorithmic accountability reporting but did not use its more technical techniques. Third, the selective performances of the service and watchdog roles allowed for the simultaneous performance of the loyal-facilitator role, namely by casting the state as the benevolent authority that could control and prevent harms through effective governance.

Critical Reporting on AI and a Market-Driven Dystopia

In contrast to the US (Cools, Van Gorp, and Opgenhaffen Citation2022), where an increasingly utopian picture of AI has been painted in relation to work and health, Chinese journalists’ depiction of Chinese AI is noticeably more dystopian—at least in instances where the application of AI and algorithmic systems is being driven by private companies. Additionally, both German and Chinese journalists have tended to endorse their states’ positive depictions of the development of AI and its potential (Köstler and Ossewaarde Citation2022). However, unlike their German counterparts, Chinese journalists did not challenge the state’s role in and contribution to established power structures or question the national AI policies championed by state actors. Nevertheless, Chinese journalists are critically examining the social implications of AI and exposing some of its “dark sides,” which contributes to a crucial and much-needed counter-public sphere that aims to influence China’s vision for AI and the strategies it pursues (Zeng, Chan, and Schäfer Citation2022). In other words, Chinese journalists are contributing to the construction of an algorithmic imaginary (Bucher Citation2017) that is potentially troublesome—especially if left in the hands of purely market forces—and therefore demands thoughtful stewardship.

To some extent, the rapid development of AI in China and its centrality to the state’s strategic vision have necessitated the enactment of critical journalism. While Chinese journalists remain subject to the intensifying political power of the state (Tong Citation2019), they are still finding opportunities to be critical and hold power accountable by underscoring concrete examples of how AI and algorithmic systems can objectify human life and reduce human agency. Though it might be premature to say that the case of AI provides an important pathway for the rebirth of critical journalism in China (see Svensson Citation2017), our study shows that at least some Chinese journalists are still finding space to engage in critical journalism.

Holding Algorithms Accountable and China’s Platform Beat

Scholars like Diakopoulos (Citation2019) have highlighted the value of using technical reporting methods like code audits and reverse engineering to examine algorithms that are, at least materially, technical objects. The very limited use of such methods here is no doubt concerning as it raises questions about the technical competency of the journalistic actors whose work on AI was highly visible on social hubs like WeChat. The lack of such competency surely makes it easier to promote a depiction of AI that is less rooted in reality, and certainly does not permit the opening of algorithmic “black boxes” (see Diakopoulos Citation2019).

However, there is also value in the journalists’ choice to eschew an analysis of the technology in favor of its social implications and the political economy of the environment within which algorithmic systems in China operate. First, such analyses are exhibitions of algorithmic accountability reporting despite their reliance on traditional methods because they focus on issues pertaining to algorithmic unfairness, inaccurate classifications, and the violation of social norms (Diakopoulos Citation2019). Second, by focusing on the broader system that involves different kinds of stakeholders, elucidating procedural linkages, and highlighting the actual harms being experienced (and the ways people evade them), these investigations may be more effective in holding power accountable and promoting prosocial change that avoids techno-solutionism (Veale, Matus, and Gorwa Citation2023). Our analysis shows that Chinese journalists’ reporting on big tech often gives voice to the “common people” and flesh-and-blood characters, which contrasts with the corporate, political, and regulator sources that Schwinges et al. (Citation2023) identified in coverage by news media from the Global North. By giving voice to individuals directly impacted by the technologies, such as delivery drivers, the reporting advances audience-centered role performances not often associated with Chinese journalism.

Of particular note is the extent to which, in the case of AI in China, the “algorithms beat” that Diakopoulos (Citation2019, 208) writes about intersects with the “platform beat” that Napoli (Citation2021, 378) has identified. While the two are conceptually similar, they were effectively interchangeable in the investigations we analyzed. In other words, algorithmic accountability became an element of platform accountability in the investigations. This is important because it sets a too-narrow target for who is to be governed—private platform companies—by regulations that aim to correct misbehavior, all the while failing to shed light on the many other sites of activity and decision-making that are structured or impacted by AI (see Ouyang and Jiao Citation2021; Ruckenstein and Schüll Citation2017; Shi Citation2022).

The Coexistence of the Watchdog and Loyal-Facilitator Roles

The performance of the watchdog role is most often associated with liberal democracies and in particular countries in the Global North (Humanes et al. Citation2021; Mellado et al. Citation2017). However, as other scholars have pointed out (e.g., Hu Citation2023; Ren and Dan Citation2022; Tong Citation2019), Chinese journalists can also perform this role when covering emerging topics and are sometimes given license to scrutinize some institutional actors. AI presents one such case, with Chinese journalists adopting an active voice in calling out failures on the part of private enterprise. In this way, our findings show that they can at the very least engage in partial performances of the watchdog role.

At the same time, it was rather clear that the watchdog did not have eyes for government actors, which is crucial to the idealized performance of that role (Mellado Citation2015; Mellado, Hellmueller, and Donsbach Citation2017). This is surely due in no small part to long-standing restrictions imposed by the Chinese Communist Party on journalistic activity that are entirely independent of particular topics (Hu Citation2023; Svensson Citation2017). However, the case of AI does present a particular wrinkle to the performance of that role, and to critical journalism more broadly. Under the Chinese State Council’s (Citation2017) media guidelines, journalists are instructed to “publicize the new progress and effectiveness of AI, so that the healthy development of AI becomes the consensus of the whole society in order to mobilize the whole society to participate in supporting the development of AI.” In order to survive professionally—and arguably to ensure their personal freedom—Chinese journalists must perform the loyal-facilitator role. While that performance may involve some genuinely voluntary acts tied to a journalistic value system that promotes nation-building and social stability (see Mellado Citation2015), it also surely includes a significant measure of coercion driven by the positioning of AI as a key national interest.

The consequence of this manifestation of coexistence is that Chinese journalists’ ability to establish chains of accountability for algorithmic systems is severely constrained, to the point of being unable to adequately challenge the foundations of China’s increasingly algorithmic society. Some prosocial change has no doubt resulted from investigations of the harm caused by platform companies’ implementation of particular algorithmic systems, even in our qualitative sample (Sun Citation2020). Moreover, it can be argued that drawing attention to such failings on the part of private enterprise or the technical limits of a strategic asset (AI) provides an avenue for implicitly critiquing the state. However, the fact that the state’s own instrumentalization of the technology—arguably, the most structuring application of AI in citizens’ lives—is not featured in the coverage cannot be ignored. And when problems surfaced and the most dystopian depictions of algorithmic systems emerged, state officials were cast as knowledgeable protectors ready to refine a still-worthy strategic objective through careful governance. In this way, the investigative reporting we examined actually helps pave the way for increasing the Chinese state’s legitimacy in determining the future of AI governance and reining in the power of private platform companies.

More broadly, we see in the case of AI in China important elements of Jasanoff and colleagues’ (Jasanoff Citation2015; Jasanoff and Kim Citation2009) observations about the intersection of sociotechnical imaginaries and state objectives. Our findings point to the ways in which imaginaries also reflect and advance state objectives, with repeated references to the power of these technologies and the “pioneering” Chinese policies needed to govern them. This matters because imaginaries are not just cognitive and emotional frameworks that apply to individuals (Bucher Citation2017; Schellewald Citation2022). They are also resources that can be mobilized to advance policy goals pertaining to strategic interests (Bareis and Katzenbach Citation2022; Jasanoff and Kim Citation2009), like having China be at the forefront of “the AI revolution.”

In light of this, we argue that the theorizing of journalistic role performance in authoritarian contexts would benefit from more intricate conceptualizations of power relations, especially when they involve the case of a strategic objective—as with AI in China—by incorporating insight from fields like Science and Technology Studies. Such theorizing might draw from either qualitative or quantitative examinations that elucidate how and the extent to which journalists are able to subvert restrictions on their performances, and perform multiple roles simultaneously or on a conditional basis. This would permit more fine-grained and instructive accounts of journalistic role performances outside of the Global North. Such work could also pair analyses of content with interviews or focus groups to examine journalistic role narrations and, namely, how journalists think about the opportunities and obstacles to performing certain roles when covering particular objects that are salient in both state and market spheres (e.g., AI and algorithmic systems).

This study has a few important limitations that merit noting and that can be taken into account in future work. First, the absence of a well-documented news database and the opaque digital infrastructure of news dissemination in China made it difficult to collect and sample the data in a systematic way. Our dataset is therefore far from exhaustive, especially if the goal is to offer generalizable findings across multiple media, and it did not include content produced by Chinese state media. Second, our study offers a snapshot of algorithmic accountability reporting at the point of data collection but does not evaluate the evolution of journalistic role performances over time. While our study includes a few articles from 2019 and 2020, it is intended only to highlight the coverage that was highly visible on WeChat at the time of our data collection. Finally, we limited our study to China and did not explore other authoritarian contexts that present either similar or distinct sociopolitical considerations (or political priorities). Theoretical development in this area would thus benefit from case studies of other authoritarian contexts to refine our understanding of the complex journalistic role performances being conditioned by the evolving societal and political contexts in such places.

Acknowledgments

The authors would like to thank the China Media & Culture ECR Network for introducing the first and second authors and the Ander Centre for Research on News and Opinion in the Digital Era (NODE) at Karlstad University for bringing the third author to the team. The authors would also like to thank the anonymous reviewers, the special issue editors, and the Journalism Practice Editorial Team for their valuable feedback that helped to improve this article.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

The first author would like to thank the support from the Letje Lips Amsterdam Merit Scholarship. The second author would like to thank the support from the Anne Marie och Gustav Anders Stiftelse för mediaforskning.

References

  • Bareis, J., and C. Katzenbach. 2022. “Talking AI into Being: The Narratives and Imaginaries of National AI Strategies and Their Performative Politics.” Science, Technology, & Human Values 47 (5): 855–881. https://doi.org/10.1177/01622439211030007.
  • Bucher, T. 2017. “The Algorithmic Imaginary: Exploring the Ordinary Affects of Facebook Algorithms.” Information, Communication & Society 20 (1): 30–44. https://doi.org/10.1080/1369118X.2016.1154086.
  • Bucher, T. 2018. If … Then: Algorithmic Power and Politics. Oxford University Press.
  • Cheng, J., and J. Zeng. 2022. “Shaping AI’s Future? China in Global AI Governance.” Journal of Contemporary China, 1–17. https://doi.org/10.1080/10670564.2022.2107391.
  • Chinese State Council. 2017. New Generation Artificial Intelligence Development Plan. http://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm.
  • Cools, H., B. Van Gorp, and M. Opgenhaffen. 2022. “Where Exactly between Utopia and Dystopia? A Framing Analysis of AI and Automation in US Newspapers.” Journalism 25 (1): 3–21. https://doi.org/10.1177/14648849221122647.
  • Diakopoulos, N. 2015. “Algorithmic Accountability: Journalistic Investigation of Computational Power Structures.” Digital Journalism 3 (3): 398–415. https://doi.org/10.1080/21670811.2014.976411.
  • Diakopoulos, N. 2019. Automating the News: How Algorithms are Rewriting the Media. Harvard University Press.
  • Hallin, D. C., and P. Mancini. 2011. Comparing Media Systems Beyond the Western World. Cambridge University Press.
  • Hu, Y. 2023. “How Media Resources and Power Relations Define Critical Reporting in China: A Longitudinal Analysis of The Beijing News’ Corruption Coverage between 2004 and 2018.” Journalism Studies 24 (11): 1377–1397. https://doi.org/10.1080/1461670X.2023.2216789.
  • Humanes, M. L., C. Mellado, C. Mothes, H. Silke, P. Raemy, and N. Panagiotou. 2021. “Assessing the Co-occurrence of Professional Roles in the News: A Comparative Study in Six Advanced Democracies.” International Journal of Communication 15:1–22. https://ijoc.org/index.php/ijoc/article/view/16760.
  • Jasanoff, S. 2015. “One. Future Imperfect: Science, Technology, and the Imaginations of Modernity.” In Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power, edited by S. Jasanoff and S. Kim, 1–33. Chicago: University of Chicago Press. https://doi.org/10.7208/9780226276663-001.
  • Jasanoff, S., and S.-H. Kim. 2009. “Containing the Atom: Sociotechnical Imaginaries and Nuclear Power in the United States and South Korea.” Minerva 47 (2): 119–146. https://doi.org/10.1007/s11024-009-9124-4.
  • Just, N., and M. Latzer. 2017. “Governance by Algorithms: Reality Construction by Algorithmic Selection on the Internet.” Media, Culture & Society 39 (2): 238–258. https://doi.org/10.1177/0163443716643157.
  • Kijratanakoson, N. 2023. “Journalistic Role Performance of the Thai Press on the Issue of Transgender Rights.” International Journal of Communication 17:1–23. https://ijoc.org/index.php/ijoc/article/view/20220.
  • Köstler, L., and R. Ossewaarde. 2022. “The Making of AI Society: AI Futures Frames in German Political and Media Discourses.” AI & Society 37 (1): 249–263. https://doi.org/10.1007/s00146-021-01161-9.
  • Mao, Y., and K. Shi-Kupfer. 2023. “Online Public Discourse on Artificial Intelligence and Ethics in China: Context, Content, and Implications.” AI & Society 38 (1): 373–389. https://doi.org/10.1007/s00146-021-01309-7.
  • Mellado, C. 2015. “Professional Roles in News Content: Six Dimensions of Journalistic Role Performance.” Journalism Studies 16 (4): 596–614. https://doi.org/10.1080/1461670X.2014.922276.
  • Mellado, C., L. Hellmueller, and W. Donsbach, eds. 2017. Journalistic Role Performance: Concepts, Contexts, and Methods. New York: Routledge.
  • Mellado, C., L. Hellmueller, M. Márquez-Ramírez, M. L. Humanes, C. Sparks, A. Stepinska, S. Pasti, A.-M. Schielicke, E. Tandoc, and H. Wang. 2017. “The Hybridization of Journalistic Cultures: A Comparative Study of Journalistic Role Performance.” Journal of Communication 67 (6): 944–967. https://doi.org/10.1111/jcom.12339.
  • Mellado, C., and A. Van Dalen. 2014. “Between Rhetoric and Practice: Explaining the Gap between Role Conception and Performance in Journalism.” Journalism Studies 15 (6): 859–878. https://doi.org/10.1080/1461670X.2013.838046.
  • Meng, B. 2018. The Politics of Chinese Media: Consensus and Contestation. Palgrave Macmillan.
  • Napoli, P. M. 2021. “The Platform Beat: Algorithmic Watchdogs in the Disinformation Age.” European Journal of Communication 36 (4): 376–390. https://doi.org/10.1177/02673231211028359.
  • Ouyang, F., and P. Jiao. 2021. “Artificial Intelligence in Education: The Three Paradigms.” Computers and Education: Artificial Intelligence 2:1–6. https://doi.org/10.1016/j.caeai.2021.100020.
  • Pan, Z., C.-C. Lee, J. M. Chan, and C. K. Y. So. 2001. “Orchestrating the Family-nation Chorus: Chinese Media and Nationalism in the Hong Kong Handover.” Mass Communication and Society 4 (3): 331–347. https://doi.org/10.1207/S15327825MCS0403_05.
  • Peng, Z., and S. Miller. 2023. “An Examination of How Social and Technological Perceptions Predict Social Media News Use on WeChat.” Journalism Practice 17 (3): 554–573. https://doi.org/10.1080/17512786.2021.1925948.
  • Ren, C., and V. Dan. 2022. “Frames and Journalistic Roles in Chinese Reporting on HIV: Insights from a Content Analysis and Interviews Focused on Verbal and Visual Modalities*.” Journalism Studies 23 (11): 1327–1349. https://doi.org/10.1080/1461670X.2022.2084145.
  • Ruckenstein, M., and N. D. Schüll. 2017. “The Datafication of Health.” Annual Review of Anthropology 46 (1): 261–278. https://doi.org/10.1146/annurev-anthro-102116-041244.
  • Schellewald, A. 2022. “Theorizing ‘Stories about Algorithms’ as a Mechanism in the Formation and Maintenance of Algorithmic Imaginaries.” Social Media + Society 8 (1): 1–10. https://doi.org/10.1177/20563051221077025.
  • Schwinges, A., T. G. L. A. van der Meer, I. Lock, and R. Vliegenthart. 2023. “The Watchdog Role in the Age of Big Tech – How News Media in the United States and Germany Hold Big Tech Corporations Accountable.” Information, Communication & Society. Advance online publication, 1–22. https://doi.org/10.1080/1369118X.2023.2234972.
  • Shi, J. 2022. “Artificial Intelligence, Algorithms and Sentencing in Chinese Criminal Justice: Problems and Solutions.” Criminal Law Forum 33 (2): 121–148. https://doi.org/10.1007/s10609-022-09437-5.
  • Stockmann, D. 2013. Media Commercialization and Authoritarian Rule in China. Cambridge University Press.
  • Sun, J. 2020. “Chinese Food Delivery Platforms Embroiled in Controversy Over Responses to Popular Investigative Story.” Pandaily, September 10. https://pandaily.com/chinese-food-delivery-platforms-embroiled-in-controversy-over-responses-to-popular-investigative-story/.
  • Svensson, M. 2017. “The Rise and Fall of Investigative Journalism in China: Digital Opportunities and Political Challenges.” Media, Culture & Society 39 (3): 440–445. https://doi.org/10.1177/0163443717690820.
  • Tandoc, E., and A. Duffy. 2016. “Keeping up with the Audiences: Journalistic Role Expectations in Singapore.” International Journal of Communication 10:1–21. https://ijoc.org/index.php/ijoc/article/view/4565.
  • Tong, J. 2019. “The Taming of Critical Journalism in China.” Journalism Studies 20 (1): 79–96. https://doi.org/10.1080/1461670X.2017.1375386.
  • Veale, M., K. Matus, and R. Gorwa. 2023. “AI and Global Governance: Modalities, Rationales, Tensions.” Annual Review of Law and Social Science 19 (1): 255–275. https://doi.org/10.1146/annurev-lawsocsci-020223-040749.
  • Waisbord, S., and C. Mellado. 2014. “De-westernizing Communication Studies: A Reassessment.” Communication Theory 24 (4): 361–372. https://doi.org/10.1111/comt.12044.
  • Wu, S. 2022. “Asian Newsrooms in Transition: A Study of Data Journalism Forms and Functions in Singapore’s State-mediated Press System.” Journalism Studies 23 (4): 469–486. https://doi.org/10.1080/1461670X.2022.2032802.
  • Xu, V. W. 2022. “WeChat and the Sharing of News in Networked China.” Digital Journalism 10 (9): 1441–1463. https://doi.org/10.1080/21670811.2022.2053335.
  • Zeng, J. 2022. Artificial Intelligence with Chinese Characteristics: National Strategy, Security and Authoritarian Governance. Palgrave Macmillan. https://doi.org/10.1007/978-981-19-0722-7.
  • Zeng, J., C. Chan, and M. S. Schäfer. 2022. “Contested Chinese Dreams of AI? Public Discourse about Artificial Intelligence on WeChat and People’s Daily Online.” Information, Communication & Society 25 (3): 319–340. https://doi.org/10.1080/1369118X.2020.1776372.