Research Article

What Motivates Audiences to Report Fake News?: Uncovering a Framework of Factors That Drive the Community Reporting of Fake News on Social Media

Abstract

The circulation of fake news on social media platforms has drawn increasing concern. At this point, the community reporting of fake news remains a key mechanism used by these platforms to identify information to block or label as misleading. Yet, little is known about the factors that motivate or dissuade the use of this mechanism, or about its perceived effectiveness in combating fake news. This study utilises focus groups of social media users aged 21 to 60, located within the global city of Singapore. Results showed that six factors influenced the decisions of audiences to report the news they perceived as fake, namely the nature of the post, nature of the source, reactions from and impact on others, subject interest and knowledge, cultural norms, and consequences of reporting; participants also perceived this mechanism as having low effectiveness in curbing the spread of fake news. Findings reveal that not all perceived misinformation encountered by users will be reported, signalling issues with the community reporting mechanism, and that this function will only have optimal outcomes if supplemented by news literacy programmes that include "misinformation reporting" and managing its hurdles as a key component.

Introduction

Social media is increasingly used by audiences to receive news. As audiences migrate away from traditional media sources to online platforms like Facebook, Twitter, Instagram, TikTok and LinkedIn, these platforms have become popular avenues through which they obtain their news (Gottfried and Shearer 2016). This news may be generated by established news outlets, or by individuals or groups with no journalistic training, who are now able to disseminate their news to the masses as "citizen journalists" (Curran 2011). While the rise of citizen journalism has been lauded for its ability to create an informed citizenry, encourage political participation and deliberation, and improve the health of the public sphere and democracy in general (Curran 2011), it has also drawn concerns over the increased generation of fake news that is able to gain extensive attention online and mislead audiences (Barthel, Mitchell, and Holcomb 2016).

Fake news has been defined by scholars as news that is "intentionally and verifiably false and could mislead readers" (Allcott and Gentzkow 2017, 213). At this point, research pertaining to the dissemination of fake news on social media has centred on what fake news looks like (Tandoc et al. 2018; Zimdars and McLeod 2020), the reasons for this dissemination (McNair 2018; Humprecht 2019), the influence of fake news on democracy (McNair 2018), audience reactions to fake news regarding believability (Balmas 2014; Shu and Liu 2019), sharing (Apuke and Omar 2021), and authentication (Tandoc et al. 2018), and fake news regulation and media literacy interventions (Jang and Kim 2018), among others.

One area, however, remains unknown: what motivates or dissuades audiences from reporting what they perceive as fake news to the social media platform. While there are studies that discuss factors influencing the behaviours of audiences in correcting others, themselves, or the post, such as third-person perceptions and perceived norms (Koo et al. 2021), the perceived impact on others (Sun et al. 2022), social identity threats (Cohen et al. 2020) and interpersonal relationships, issue relevance and personal efficacy (Tandoc, Lim, and Ling 2020), the factors influencing audiences' decisions to click "report" on social media when they believe they have encountered fake news have received much less attention. This is an important question, given that social media platforms like Facebook and Twitter use community reporting of fake news as a key mechanism to determine what information to block or label as misleading (Facebook 2021; Twitter 2021), apart from receiving leads from the third-party information-verification companies they hire (Hutchinson 2019). In 2019, the top 100 fake news stories on Facebook were viewed 150 million times (Gilbert 2019), highlighting the extent to which the community can play a part in reporting false news.

So what motivates audiences not just to ignore what they consider fake news, or to choose not to spread it, but to feel an obligation to take action and report it as fake? What characteristics of that news make it seem severe enough to prompt this deliberate, rather than merely reactive, action? And what important implications might be tied to such findings? This study aims to answer these questions through focus groups conducted online with 77 social media users across different age groups, genders and ethnicities, recruited in Singapore, a "global city" of Asia and an international media hub. The final analysis will also offer an assessment of the effectiveness of the community reporting of fake news.

Fake News, Social Media, and Causes for Concern

The impact of fake news on society has been well-documented in recent scholarship, with scholars referring to negative consequences such as distrust in the media, polarisation of society, and ultimately, the undermining of democracy (Del Vicario et al. 2016; Glaser 2017). According to Jang and Kim (2018), as the means of distinguishing between misinformation and actual news become blurred, the citizenry may become less instead of more informed. This is tied to an increasingly complex range of influences on how news credibility is perceived in the digital age – Fisher (2018) notes the tension between audiences wanting to trust the news while experiencing scepticism that the information might be false. In Hofseth's (2017) view, the idea that the media may offer only one out of many possible truths causes the public to doubt the credibility of the press. Audiences feel this false information may stem from a multitude of different players online, such as domestic politicians (seen to be "most responsible" for spreading false and misleading information online), as well as political activists, journalists, ordinary people, and foreign governments (Reuters Institute 2020, 18).

When audiences mistake false information for authentic news, this influences their formation of beliefs, which may lead to polarisation of society. Del Vicario et al. (2016, 554) found that rumours and misinformation spread online created ideological echo chambers, defined as "homogenous and polarised communities having similar information consumption patterns", where members would "process information through a shared system of meaning" and practice "selective exposure to content". These clusters are further strengthened by confirmation bias, or the fact that people are inclined to believe information that aligns with their pre-existing beliefs (Gimpel et al. 2021). In this way, the spread of fake news reinforces communities' biased framing of narratives, deepening polarisation (Del Vicario et al. 2016).

According to the 2020 Reuters Institute Digital News Report, social media is the biggest source of concern about misinformation (40%), compared to messaging apps like WhatsApp and Facebook Messenger (14%), based on the responses of more than 80,000 people from 40 countries spanning Europe, the Americas, Asia Pacific and Africa. Of the respondents, 29% revealed they were most concerned about Facebook, followed by YouTube (6%) and Twitter (5%) (Reuters Institute 2020, 20). Given this concern about social media – a category in which the report also included Instagram, TikTok, Snapchat and LinkedIn – it is worth investigating these platforms further.

Currently, social media platforms like Facebook, Instagram, Twitter, TikTok, YouTube and LinkedIn depend significantly on user reporting to reduce the spread of fake news – such news gets sent to third-party fact-checkers when users report the post as false (Gimpel et al. 2021). According to one of Facebook's fact-checking partners, Full Fact (2019, 9), Facebook provides fact-checkers with a "queue" of content made up of posts flagged by users as suspicious, apart from those identified by its algorithms. TikTok similarly removes or limits content or accounts that contain misinformation only after they have been reported by users or identified by its own "specially trained teams" (TikTok Safety Centre n.d.). Twitter launched its new reporting feature in August 2021, alongside its use of automation and proactive monitoring of tweets, in order to "identify patterns of misinformation" – as of January 2022, over 3.7 million user-submitted reports had been received since the feature's debut (Perez 2022). While Facebook has not revealed specific numbers on users who have clicked to report misinformation, it reported that it took down 1.3 billion fake accounts in the last quarter of 2021; it also removed 12 million pieces of content containing falsehoods about Covid-19 and its vaccines after it was alerted to them (Rosen 2021), suggesting the extent of the problem and the significant role that users can play in helping social media platforms contain the spread of fake news.

What Counts as “Fake News” May Differ: The Meaning-Making Process of Audiences

Given the negative consequences that fake news has on society, it is crucial to understand how audiences engage with such content. According to Tandoc et al. (2018), when encountering potentially fake news, readers first conduct internal acts of authentication, where they cross-check with the knowledge they already have, their own intuition, or their own experiences. Jahng, Stoycheff, and Rochadiat (2021, 5) note that these methods of processing information are "systematic", where the logic of the information presented is "supplemented by individuals' experiences", or by the individual's knowledge based on past experiences with news. Wagner and Boczkowski (2019) agree, noting that news consumers tend to rely on traditional fact-based media, personal experience and knowledge, fact-checking across different sources, and trust in personal contacts on social media to navigate a media landscape filled with misinformation.

Not surprisingly then, audiences may not view the same news as "fake". When they draw on their own experiences and knowledge, the repetition of news across the news outlets they are exposed to, and the personal contacts they trust as suitable "assessors of news quality" (Wagner and Boczkowski 2019, 871), they may perceive the veracity of a post in different ways. This is especially the case since audiences tend to selectively expose themselves to information that aligns with their own values and beliefs (Scheufele and Krause 2019), to reduce the feelings of "cognitive dissonance" that arise from coming across information that challenges their worldviews (Festinger 1957). They may then find themselves in echo chambers where the same information gets repeated, even information containing falsehoods (Sunstein 2001).

Indeed, what audiences regard as "fake" might sometimes cross over from news that is clearly fabricated to include "poor journalism" (i.e., journalism that is superficial, inaccurate and sensationalist), "propaganda" (i.e., hyperpartisan content and politicians lying or using spin in their public relations efforts), and "advertising" that pertains to sponsored or advertised content, according to Nielsen and Graves (2017, 3). They conclude that there is no clear agreement on where the line lies between fake news and a typical news report. In fact, the meaning-making process by which audiences determine whether a piece of news is fake is influenced by a whole host of personal and contextual factors, as well as the nature of the post itself.

Factors Influencing Perceived “Fakeness”

These factors include, for instance, the stances that audiences take on issues (Tsang 2022), the political biases they hold (van der Linden, Panagopoulos, and Roozenbeek 2020; Jahng, Stoycheff, and Rochadiat 2021), and their existing ideologies or preconceived notions of the topic at hand (San Martín, Drubi, and Rodríguez Pérez 2020). Conservative circles in the US, for instance, are more inclined to believe news that is damaging to the image of the liberal movement and more likely to associate the mainstream media with the term "fake news", given their allegiance to Donald Trump as US President and their greater belief in conspiracy theories (van der Linden, Panagopoulos, and Roozenbeek 2020). In the same vein, partisanship might make audiences less likely to fact-check those they agree with politically (Allen, Martel, and Rand 2022).

Audiences may also look to the post itself to ascertain if it is potentially fake, such as the presence or absence of a source and who the source is (Arpan and Raney 2003). According to Tandoc et al. (2018), the more well-known and established the news source and the more logical and unemotional the message, the more likely audiences will believe the information. Shu and Liu (2019) also note that the nature and content of the news itself affect audience perceptions – users recognise that there are certain visual and textual cues commonly used in fake news that are not found in authentic news, such as emotional phrases and sensational images. Shu and Liu (2019) suggest as well that features such as videos tend to increase a news story's credibility.

A huge factor in the meaning-making process of audiences is also tied to "endorsement cues" attached to a post, such as likes and shares (Luo, Hancock, and Markowitz 2022); more of these tend to reassure audiences that the news is probably true. Online comments attached to posts also influence audience perceptions. For instance, Waddell (2018) found that negative comments lowered bandwagon support for stories, and Rösner, Winter, and Krämer (2016) discovered that negative uncivil comments increased hostile emotions tied to the news. Similarly, Pang and Ng (2017, 450) note that the presence of tweets that explicitly oppose and/or correct a post causes greater awareness that the news might be fake and reduces the spread of this information drastically.

Noting that what audiences perceive as "fake news" may vary from person to person, scholars have pointed to news literacy programmes that can help audiences recognise fake news by "offering knowledge and skills to resist or critically interpret fake news stories"; this is usually done through the "identification, location, evaluation, and use of information" (Jones-Jang, Mortensen, and Liu 2021, 373). Ashley, Maksl, and Craft (2013) urge the consideration of authors and audiences, messages and meanings, and how messages represent and/or filter reality, in the creation of a news media literacy framework.

While there is great concern about audiences' ability to decipher fake news, it is worth noting here that not everyone falls prey to falsehoods online. Nelson and Taneja (2018, 3732) note that light Internet users tend to refer to established sources for their news, but it is heavy users who might venture into the "long tail of available news media" and "expose themselves to niche offerings like fake news". That said, users who consume news from various sources are better able to gauge authenticity (Balmas 2014, 439), since they can cross-check their judgement with other media, institutions or trusted sources (Tandoc et al. 2018). Youths are also more likely to use Google as a search engine to check the veracity of information, as Papapicco, Lamanna, and D'Errico (2022) discovered in their focus groups with 41 Italian adolescents, while middle-aged news consumers may have a higher level of critical media literacy, with a stronger understanding of the context and effects of the news, as Trninić, Vukelić, and Bokan (2022) discovered in their focus group study in Bosnia and Herzegovina.

Taking Action on Falsehoods – or Not

The way audiences assess the believability of news will, in turn, likely impact how they respond to it. Actions that a user can take after reading a piece of news include liking, commenting, sharing, and, importantly, reporting posts they perceive as fake. Users choosing to click the "report" function on social media platforms will prompt the information to be checked, flagged, or removed (Gimpel et al. 2021). Community reporting or flagging of fake news is a common tactic, or "online civic intervention" (Porten-Cheé, Kunst, and Emmer 2020), used by these platforms to combat fake news.

Currently, studies on the factors that cause audiences specifically to report a news piece as fake are few, although some studies on the reporting of other types of content, such as hate speech, reveal influencing factors like users' social norms, moral and political orientations, tolerance of online peer deviance, and the nature of the comment itself (Wilhelm, Joeckel, and Ziegler 2020). Audiences, for instance, might report a hate speech post on moral grounds, to ensure the well-being of others and to defend vulnerable social groups (Kunst et al. 2021).

Those studies that do centre on misinformation reporting tend to focus on specific hypotheses that certain factors might be of influence. Gimpel et al. (2021), for instance, examined the relationship between the motivation of audiences to report a post as fake and the "social norm" messages that accompany such posts, and discovered that users are more likely to click "report" when they encounter messages which portray this as socially desired behaviour.

Notably, Tandoc, Lim, and Ling (2020) looked into how social media users respond to fake news and found that ignoring the fake news was the most common reaction; their study revealed that only 12.2% of respondents chose to report the post as fake on social media. At this point, there has yet to be a study that comprehensively examines the slew of factors that may influence audience reporting of fake news to the social media platform, and what may cause them not to do so. This is a noteworthy investigation, particularly since Gimpel et al. (2021, 197) have discovered that "users never or only rarely report fake news" due to the "bystander effect", where people's willingness to help is reduced when more people are present. This study will target the reporting of fake news to the social media platform specifically, given that community reporting of fake news remains the primary mechanism through which social media platforms determine what information to block or label as misleading.

To this end, this study is guided by these two research questions, with an assessment on implications of the findings in the final analysis:

RQ1: What factors motivate or dissuade users from reporting news that they perceive as fake?

RQ2: What is the perceived effectiveness of this report function to curb the spread of fake news?

Methodology

This study used focus groups as a research method – particularly useful for generating dynamic discussions that see participants approach topics from different angles and offer insights into meaning-making (Trninić, Vukelić, and Bokan 2022), this method can aid in the collection of rich data from a sizeable sample while accounting for certain differences between groups such as age, income and educational level (Morgan 1997). Eight focus groups were formed, with about nine to 10 individuals participating in each focus group – a total of 77 participants were involved in this study. These groups were recruited through the personal and professional contacts of the research team via snowball sampling, and from a call-out on social media platforms like Facebook and Instagram and messaging apps like WhatsApp and Telegram to allow diversity in the sample; effort was made to ensure that participants hailed from diverse age groups, genders and ethnicities.

The groups were sorted by age – a decision that stemmed from existing literature noting that age provided distinguishable differences in reactions to fake news (Papapicco, Lamanna, and D'Errico 2022; Trninić, Vukelić, and Bokan 2022) and was more easily discoverable than factors like personality and political bias. The groups were sorted into the age ranges of 21 to 30, 31 to 40, 41 to 50 and 51 to 60, and two groups were gathered for each age range (see Table 1). All the groups indicated usage of the social media platforms Facebook, Instagram, Twitter, LinkedIn, TikTok, and YouTube.

Table 1. Study participants.

These groups were recruited in Singapore, which stands out as a prime locale for investigation, given that it is a global city located within Asia, a part of the world known to stand at the forefront of technological innovation. Singapore's population is also extremely well plugged in and technologically savvy, with a high Internet penetration rate of over 80%; its Smart Nation initiative began nationwide in 2014 with the goal of improving lives through technological use. Singapore is also one of the leading nations in the world to have enacted an anti-fake news law, under which falsehoods online can be ordered taken down or have corrections placed alongside them (Tham 2019); this may potentially have made the citizenry more mindful of the topic. The most popular social media platforms in Singapore, not including messaging apps, are Facebook, used by 79.4% of Internet users, followed by Instagram, used by 66.3% (The Global Statistics 2022). These factors suggest that the Singapore population could offer good insights into how global citizens would react when faced with fake news.

The focus groups were all conducted by the same moderator online over Zoom in mid-2021 – Covid-19 restrictions were in place that limited face-to-face meetups at that time – and each session was digitally recorded for transcribing purposes. Each focus group lasted about an hour. Focus group questions asked respondents for their news consumption habits on social media platforms, their reactions to news that they are unfamiliar with and what they classify as “fake news”, their awareness and use of the “report” function on social media platforms that can alert the site to the post, and the factors that might motivate or prevent them from using this function, informed by the personal, contextual and post-related factors revealed in the literature review.

Once the focus groups were completed, each session was transcribed and the transcripts coded line by line, with each new code compared to the one before it. These codes were then placed into unifying conceptual bins, from which a list of themes was generated to address the research questions (Tracy 2013). A thematic analysis offers a qualitative and detailed overview of data and is often used to analyse data collected through focus groups; as an inductive approach, codes and categories are not predefined but are formed during the data analysis process, enabling concepts and conclusions to surface (Braun and Clarke 2006).

Study Findings

For a start, when participants were asked what they defined as "fake news", four key words emerged: "untrue", "misleading", "scams", and "unsubstantiated". These pertained to posts that contained outright falsehoods, had the potential to mislead or deceive, or contained claims that were not based in fact, such as the claim that Covid-19 vaccinations were used by governments as a means to plant mind-control chips. When they encounter posts with such characteristics, they would then decide whether or not to report them. The participants did not make distinctions between different social media platforms in their decision-making process, tending to speak about them across the board.

Factors That Motivate or Dissuade from Reporting

Nature of the Post

Participants indicated that they would be more inclined to report a fake news post if there was “abusive” or “offensive” language, or if the post was “provocative”, “hurtful” or “hateful”. These could pertain to issues such as race, religion, human rights, conflicts and wars that might “create emotional distress” in some groups and “cause some form of disharmony” in the social fabric. This is particularly concerning in societies like Singapore that are multicultural. Describing this as “very harmful or dangerous”, one participant from FG1i said:

If it’s a politically charged event like the Israeli-Palestine conflict and people share hateful stories or hateful opinions about the crisis that are inaccurate, those are the kinds of things I will report. Or if it is an inaccurate fact about gay rights with hateful comments on them, I will report it.

If a post makes claims that are “unsubstantiated”, thereby causing it to be misleading, this will also increase participants’ likelihood to report it. The presence of photos and videos as visual proof does not automatically mean the post is truthful and substantiated either, because these may be photoshopped or manipulated. In fact, as one participant from FG2ii said, “photos and videos are so much more captivating than text and they tend to get a lot more views” – hence, if they are false, it increases his likelihood of reporting the post.

Nature of the Source

Participants agree that if they perceive the source of the information to belong to a fake account where it is unclear who has written it and there is “no author profile”, then this increases their likelihood to report it. A majority of the participants also agreed that a famous person like a celebrity or politician sharing news that is false will also trigger them to report it, because they have “more influence” and “more potential to reach out to others”. They should therefore be accurate with what they are posting because “people hold them to that”. As one participant from FG1i said:

It would bother me more if it’s someone famous because they’re not using the platform in the right way to spread accurate news. So I will more likely report those people versus small content creators, because their reach is big. They should have thought about their actions a lot better.

That said, some participants indicated that famous or important people are likely to be "more honourable" and "will not jeopardise their reputation to say something fake" – this might therefore reduce their likelihood of reporting such a person because, as one participant from FG3i says, "they will be self-regulating what they are saying". Another participant from FG3ii is also cautious about reporting such people, because they "don't want to get themselves into trouble".

Reactions from and Impact on Others

Reactions from other social media users also influence the likelihood of reporting. While participants generally note that a post with many negative comments questioning its authenticity will cause them to report it, several respondents also noted why this would dissuade them from doing so. As one participant from FG1ii points out, they "don't see the need to report because it is blatantly problematic, so other people would have already reported it". There is a general attitude of "leaving it to others to report" if many people are already reacting strongly to a post.

However, if a fake news post has garnered several positive comments, this will increase the desire to report the post. As a participant from FG1ii notes:

If other people are showing support and liking the information and it is being spread around carelessly, and I am the one who knows that something is blatantly wrong but maybe this information is only privy to me and a few others, then I will report it.

Additionally, a balance of positive and negative comments attached to a fake news post may cause one of two reactions – to report it as fake because participants feel the need to “validate” for others that it is fake, or to not report it because “social media will moderate itself and everybody has the ability to express what they need to”.

The decision to report is also tied to the impact of the post on others. Participants say that if the impact is "serious" or "affects a lot of people" – for instance, concerning the lives or property of others – then this increases their desire to report it. This impact may also be ascertained from the reactions of other social media users. As a participant from FG2i says, "if a lot of people are going like 'wow, we should try this', and there is going to be a lot of followers", this will cause them to report the fake post. Participants also say they will more likely report fake news that "directly affect their family and friends", for instance health-related information during the Covid-19 pandemic, or scams they see online that trick others into surrendering their personal information or money.

Subject Interest and Knowledge

Participants shared that they would likely report a fake news post only if it is a matter that they are “passionate about” or that “concerns them”, such as human rights issues or climate change. Otherwise, they are likely to just ignore it. Only with news they are interested in will they do further investigations with more credible sources like mainstream news media, international organisations, or fact-checking websites.

Choosing to report a fake news post is also tied to subject knowledge. Participants say they will only report it if they are “very sure it is really fake”; if there is no way to verify the post, then they would not report it. As a participant from FG3i puts it, he does not want to “try to act smart” and click to report it unless he is 100% sure.

Cultural Norms

Several participants pointed to cultural norms as influencing their reporting decisions. A participant from FG4ii attributes this to “Asian culture generally, where we don’t want to speak up, we don’t want to ‘make noise’”. Participants say there is a tendency to read posts but “don’t contribute or do anything” and just be “a bit nosey about it but not take action”, leaving it to “other people who would want to respond”. Another participant from FG4ii echoes these views:

It depends on whether you have concrete facts to back up the report, and also whether you can remain anonymous or not. There is a fear of being singled out or have your reactions made known to people. This is related to our culture, because we tend to follow. We don’t want to be the outstanding one, to be someone different, you see.

There is also a fear associated with "getting into trouble if you report something [erroneously]", according to a participant from FG2i, or of "offending people". A participant from FG4i agrees:

You don’t want to be known to have reported something. How the Singapore climate is makes us very careful about what we say, in case it is wrong information. Unlike the Western world where they’re very outspoken, we tend to be more reserved and just say “never mind, let it be”. Behind closed doors we might start complaining, but out there, we don’t want to get into trouble.

However, some participants say this attitude differs by individuals, and that depending on “personality”, some people can be “keyboard warriors and type anything [they want]”. A participant from FG3i points out that in Singapore, people are in fact more likely to report a fake post if it relates to “sensitive topics” like race or religion:

We are more sensitive and more critical about certain topics. For us, we don’t want to get entangled in this type of conversation, we don’t want it to affect what we have built over the years – our social harmony, our respect. So we’d report it, stop it there and then.

In fact, some participants note that it is their “social responsibility” or “moral obligation” to report fake news and “educate people on the truth because half-truths or untruths can have severe consequences”. As a participant from FG2ii says, “I feel I have to do what is right”.

Consequence of Reporting

Participants say a deterrent to them reporting a fake news post is the uncertainty about the impact of that move – they are unsure “what difference they are making” and it would be “pointless if nothing happens”. As a participant from FG1i notes, while social media users may know the consequence of their reporting if they “actively monitor” what happens – such as checking back on the Twitter post for a warning notice, or looking at their “Support Inbox” on Facebook or “Support Requests” on Instagram for the result of their report – there is still uncertainty on “how useful their actions would be”. This might deter users from wanting to take that step, since “reporting something is like making a police report or lodging an official complaint, so it makes the stakes a little higher for people to use this function”, as a participant from FG2i says.

Participants also point to the concern that such reports will not stop people intent on posting misinformation and that “even if actions are taken, it will not stop them”. As a participant from FG1i says:

There have been several times when I see people on Instagram or TikTok say, “they took down my content so I’m posting this again”. So I’m thinking even if I report something, these people have the freedom to post something else similar. Maybe they will just cut a few words or photos and post it again. So this makes me not want to report it.

Perceived Effectiveness of Community Reporting

Participants do not perceive the community reporting practice on social media as highly effective, because “not many people actually know about or use this function”, and they may not be aware of “what happens after they click report”. There are also no statistics to indicate how many fake news posts have been detected and removed that way.

What the social media platforms can do is also limited, because “the onus is on the user on how they want to conduct themselves online” and, considering the viral nature of fake news posts, there is “demand for fake news online”. Social media users who report such posts might themselves not be right in the first place and mistake “what is true and what is not”, especially if they are “emotionally entangled in the topic”. Additionally, some participants feel it is not the “responsibility of the platforms to sieve out these posts”, since social media should “provide a space for people to share whatever they want”.

That said, participants feel there is “room to improve and get towards less fake news”, and that the report function is a good place to start. As a participant from FG2i says, “The report function is probably the best they can do because it gives social media users a sense that they can do something about it”. The next step would then be to educate people on this report function, so that more people can be aware of it and use it when needed.

Discussion and Conclusion

This project began with the goal of establishing a framework of factors that influence social media users to click on the “report” function of social media platforms when they believe they have encountered fake news, given that the community reporting of fake news remains a key mechanism used by these platforms to identify information to block or label as misleading. The factors that motivate and dissuade reporting, and the perceived effectiveness of this specific mechanism, have yet to be researched, even though there are studies that examine the factors influencing correction behaviours more generally, such as those that prompt the correction of others and of the users themselves (Tandoc, Lim, and Ling Citation2020; Bode and Vraga Citation2021; Koo et al. Citation2021) and the denouncing of the post (Cohen et al. Citation2020).

Results for RQ1, obtained from the 77 focus group participants based in the global city of Singapore, pointed to six key factors that influenced audience decisions to report perceived fake news (see Table 2), namely (1) nature of the post, (2) nature of the source, (3) reactions from and impact on others, (4) subject interest and knowledge, (5) cultural norms, and (6) consequences of reporting.

Table 2. Factors influencing decisions to report fake news on social media.

This study reveals, importantly, that it is not enough for information to be “fake” according to scholarly definitions, i.e., intentionally and verifiably false and potentially misleading (Allcott and Gentzkow Citation2017), for users to report it; rather, this action of reporting is motivated by a web of complex factors. Given that “fake news” may be defined differently by different people (Nielsen and Graves Citation2017), the audience needs to first identify the news as “fake”, and even then, not all verifiably fake news may be reported, suggesting a flaw in the current mechanism of user reporting and a need to boost this with education in news literacy.

Indeed, placing the results of this study within existing literature, six points of concern emerge regarding the community reporting of fake news as a misinformation curbing mechanism.

First, this study reinforces the notion that audiences might not define “fake news” the same way (Nielsen and Graves Citation2017), and this may influence their reactions to it. A person may view a story with outright falsehoods as fake and therefore be motivated to report it, but not one with claims that are merely unsubstantiated; another person might view unsubstantiated claims as enough to label the news as fake and therefore report it. Differing definitions mean that the subsequent action of reporting is not consistent. Additionally, not everyone will report the fake news they come across – this study’s framework of factors reveals several reasons why people might not dedicate the time for this: for instance, if they do not care about the subject or lack sufficient knowledge to make a judgement on its “fakeness”, if the post does not seem to have great impact or garner much support, if it is posted by a famous person and they do not want to get into trouble, or if they are not confident that their reporting action will have an impact.

Second, this study also shows that some people might not have the ability to recognise a post as fake, even if it does contain falsehoods (Scheufele and Krause Citation2019). Concepts such as “nature of the source”, “reactions from others” and “subject knowledge and interest” are highly subjective, shaped by each person’s lived reality, and will likely differ between individuals. For instance, a believer of anti-vaccine conspiracy theories during the Covid-19 pandemic may view alternative media sources that advocate the same views as trustworthy, have friends who react positively to conspiracy theories as well, and believe they have sufficient knowledge to ascertain that a certain theory is true. The subjective nature of some of the factors uncovered in this study signals an issue with the community reporting mechanism: it requires that a substantial part of the population be able to recognise the “fakeness” of a post and take action to report it, so that social media platforms will be alerted.

Third, the “bystander effect” discussed by Gimpel et al. (Citation2021) holds true: people rarely or never report fake news because they believe others are present and will do so. Hence, while the presence of negative or oppositional comments attached to a post may cause audiences to recognise it as fake and reduce their desire to spread it (Pang and Ng Citation2017), this study finds that those same comments may also strengthen the bystander effect, making audiences less likely to report the post and have it taken down, given their belief that others would already have done so.

Fourth, the power elite on social media continue to wield significant influence over the spread of information. Existing research notes that the more well-known and established the source of the news, the more audiences will believe the information (Tandoc et al. Citation2018); similarly, the fact that a well-known person is the source of the information presents a heuristic cue to audiences that the information is more credible (Swire et al. Citation2017). This study adds to the literature, pointing to the belief of audiences that prominent people are “more honourable” and more likely to “self-regulate”, which makes audiences less likely to report them. There is also a fear among audiences that their identities might be revealed and they might get into trouble for making those reports. The likelihood for the elites to amass greater symbolic power, and relatedly political and social power, therefore increases.

Fifth, the building of echo chambers is a concern reflected in this study. Scholars like Del Vicario et al. (Citation2016) point to ideological echo chambers where individuals selectively expose themselves to content with which they are ideologically aligned, thereby creating homogeneous clusters. Gimpel et al. (Citation2021) note the presence of confirmation bias, where new information that aligns with existing beliefs is more likely to be taken as true. This suggests that users may be unable to ascertain for sure whether a piece of information is fake, which causes them not to use the report function, allowing misinformation to circulate.

Sixth, the concept of social desirability, discussed by Gimpel et al. (Citation2021) as socially desired behaviour arising from messages that repeatedly prompt people to act a certain way, may describe behaviour in more reserved cultures that prevents the reporting of fake news. There is concern that one may stand out if one takes action, and hence a choice to leave the reporting to others instead; a fear that erroneous reporting may get one into trouble is also present. While this has a positive outcome – individuals might be extra careful in verifying information before they make reports – there is greater likelihood that no action is taken, which means the possibility of more misinformation in circulation. This may be mitigated by government watchdogs monitoring online platforms to ensure that fake news gets taken down – Singapore implemented its anti-fake news law in 2019 to do just that (Tham Citation2019) – but concerns have arisen over the power governments then have to regulate the information allowed to circulate online. In this case, power may be taken out of the hands of the community and into the hands of the state. In instances where the state has had broad consensus from the people to rule and manage the media system, and the people trust that the government is working towards the betterment of society, concerns may pertain more towards the freedom of expression in the public sphere; however, in instances where the state is no longer working in the best interests of the public, this generates crisis perceptions that dissenting voices are being silenced and the citizenry is no longer able to become properly informed and participate effectively in collective decision-making, thereby threatening democracy (Wu Citation2018).

Adding to user hesitation to use the report function is the low perceived effectiveness of community reporting as a method to curb fake news. Results for RQ2 indicate that participants are doubtful this is effective, given the seeming lack of awareness and/or use of this function, the uncertainty attached to the impact of this move, and the possibility that those who report the post may themselves be wrong. However, there is agreement that this report function is a good place to start and gives social media users a sense of agency that they can “do something about it”. Ultimately though, the participants believe the onus is still on social media users themselves not to post fake news in the first place.

The above suggests that the community reporting of fake news mechanism will have optimal outcomes only if it is combined with news literacy programmes, and that such programmes must account for “misinformation reporting” and managing its hurdles as a key component. While existing literacy programmes that focus on teaching audiences how to discern factual news from fake news by focusing on ways to understand, find, evaluate and use information (Jones-Jang, Mortensen, and Liu Citation2021) may address the issue of audiences recognising that a piece of news is indeed fake and instil in them better subject knowledge, they are not sufficient to prompt actual reporting of fake news posts to the social media platform. This study suggests the strong need to include other elements within the curriculum, such as teaching audiences how to look beyond the power or authority of the individual posting the content, the reactions of others, the concept of social desirability, and the perceived impact of their reporting, to make objective judgements of the content they see and adopt the necessary action for the sake of public interest, much like how posts containing hate speech are flagged to protect societal well-being. As it is, social media users have already shown their ability to think critically when they consume the news – as this study’s participants indicate through their active decision-making to report or not to report a post – and will stand to benefit from news literacy programmes that can help them become even more engaged in the fight against fake news. Even then, it should be noted that audiences may not always react in normative ways and do what they “should”, highlighting the need for the community reporting of fake news mechanism to always be supplemented with the work of professional fact-checkers and algorithms that scour the Internet for falsehoods.

Some limitations of this study must be noted here. The research method of focus groups is inherently subjective, and opinions may be tied to the participants’ personal experiences; the use of snowball sampling also means that results might not be fully representative. The voices quoted from the focus groups might not reflect how strongly people outside of this study might feel about the topic of fake news (for instance, if they had personally been tricked before and suffered dire consequences). Effort was taken in this study, however, to ensure that a wide range of ages was accounted for, and that each group was diverse in ethnicity and gender, so that a variety of opinions and insights could be gathered. As a line of future inquiry, more research on how these factors may vary globally would be useful, given that social media platforms operate transnationally and across cultures. Investigations into user reporting experiences across different social media platforms could also be useful, to enable the development of more targeted recommendations that can prompt such behaviour on different platforms. Finally, the organic nature of focus group discussions made it challenging to comparatively analyse individual experiences (e.g., of those who have reported perceived fake news before versus those who have not), but future studies that investigate these more nuanced experiences could offer further insights into the effectiveness of community reporting as a means to curb fake news.

Acknowledgements

The author would like to thank the research assistant for this project Nur Laila Bte Jasni for her contributions.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was funded by the National University of Singapore Centre for Trusted Internet and Community Pilot Project Grant (CTIC-PPG-21-05).

References

  • Allcott, H., and M. Gentzkow. 2017. “Social Media and Fake News in the 2016 Election.” Journal of Economic Perspectives 31 (2): 211–236. https://doi.org/10.1257/jep.31.2.211
  • Allen, J., C. Martel, and D. G. Rand. 2022. “Birds of a Feather Don’t Fact-check Each Other: Partisanship and the Evaluation of News in Twitter’s Birdwatch Crowdsourced Fact-checking Program.” In CHI Conference on Human Factors in Computing Systems, 1–19. New York, USA: Association for Computing Machinery. https://doi.org/10.1145/3491102.3502040
  • Apuke, O. D., and B. Omar. 2021. “Fake News and COVID-19: Modelling the Predictor of Fake News Sharing among Social Media Users.” Telematics and Informatics 56: 101475. https://doi.org/10.1016/j.tele.2020.101475
  • Arpan, L. M., and A. A. Raney. 2003. “An Experimental Investigation of News Source and the Hostile Media Effect.” Journalism & Mass Communication Quarterly 80 (2): 265–281. https://doi.org/10.1177/107769900308000203
  • Ashley, S., A. Maksl, and S. Craft. 2013. “Developing a News Media Literacy Scale.” Journalism & Mass Communication Educator 68 (1): 7–21. https://doi.org/10.1177/1077695812469802
  • Balmas, M. 2014. “When Fake News Becomes Real: Combined Exposure to Multiple News Sources and Political Attitudes of Inefficacy, Alienation, and Cynicism.” Communication Research 41 (3): 430–454. https://doi.org/10.1177/0093650212453600
  • Barthel, M., A. Mitchell, and J. Holcomb. 2016. Many Americans Believe Fake News is Sowing Confusion. Washington, DC: Pew Research Center. https://www.journalism.org/2016/12/15/many-americans-believe-fake-news-is-sowing-confusion.
  • Bode, L., and E. K. Vraga. 2021. “Correction Experiences on Social Media during COVID-19.” Social Media + Society 7 (2). https://doi.org/10.1177/20563051211008829
  • Braun, V., and V. Clarke. 2006. “Using Thematic Analysis in Psychology.” Qualitative Research in Psychology 3 (2): 77–101. https://doi.org/10.1191/1478088706qp063oa
  • Cohen, E. L., A. Atwell Seate, S. M. Kromka, A. Sutherland, M. Thomas, K. Skerda, and A. Nicholson. 2020. “To Correct or Not to Correct? Social Identity Threats Increase Willingness to Denounce Fake News through Presumed Media Influence and Hostile Media Perceptions.” Communication Research Reports 37 (5): 263–275. https://doi.org/10.1080/08824096.2020.1841622
  • Curran, J. 2011. Media and Democracy. New York: Routledge.
  • Del Vicario, Michela, Alessandro Bessi, Fabiana Zollo, Fabio Petroni, Antonio Scala, Guido Caldarelli, H. Eugene Stanley, and Walter Quattrociocchi. 2016. “The Spreading of Misinformation Online.” Proceedings of the National Academy of Sciences 113 (3): 554–559. https://doi.org/10.1073/pnas.1517441113
  • Facebook. 2021. “How Do I Mark a Facebook Post as False News?” Facebook Help Centre. Retrieved August 10, 2023, from https://www.facebook.com/help/572838089565953
  • Festinger, L. 1957. A Theory of Cognitive Dissonance. Redwood City, CA: Stanford University Press.
  • Fisher, C. 2018. “What Is Meant by ‘Trust’ in News Media?.” In Trust in Media and Journalism, edited by K. Otto and A. Köhler, 19–38. Berlin: Springer.
  • Full Fact. 2019. Report on the Facebook Third Party Fact Checking programme. London, UK: Full Fact. https://fullfact.org/media/uploads/tpfc-q1q2-2019.pdf
  • Gilbert, B. 2019. “The 10 Most-viewed Fake Stories on Facebook in 2019 Were Just Revealed In A New Report.” Business Insider, November 7. https://www.businessinsider.com/most-viewed-fake-news-stories-shared-on-facebook-2019-2019-11.
  • Gimpel, H., S. Heger, C. Olenberger, and L. Utz. 2021. “The Effectiveness of Social Norms in Fighting Fake News on Social Media.” Journal of Management Information Systems 38 (1): 196–221. https://doi.org/10.1080/07421222.2021.1870389
  • Glaser, A. 2017. “Apple CEO Tim Cook Says Fake News Is “Killing People’s Minds” and Tech Needs to Launch a Counterattack.” Recode, February 12. http://www.recode.net/2017/2/12/14591522/apple-ceo-tim-cook-tech-launch-campaign-fake-news-fact-check.
  • Gottfried, J., and E. Shearer. 2016. News Use across Social Media Platforms. Washington, DC: Pew Research Center. http://www.journalism.org/2016/05/26/news-use-across-social-media-platforms-2016.
  • Hofseth, A. 2017. Fake News, Propaganda, and Influence Operations – A Guide to Journalism in a New, and More Chaotic Media Environment. Oxford: Reuters Institute for the Study of Journalism.
  • Humprecht, E. 2019. “Where ‘Fake News’ Flourishes: A Comparisons across Four Western Democracies.” Information, Communication and Society 22 (13): 1973–1988. https://doi.org/10.1080/1369118X.2018.1474241
  • Hutchinson, A. 2019. “Instagram Adds New Options to Control Third-party Access to Your account Information.” Social Media Today, October 16. https://www.socialmediatoday.com/news/instagram-adds-new-options-to-control-third-party-access-to-your-account-in/565099/.
  • Jahng, M. R., E. Stoycheff, and A. Rochadiat. 2021. “They Said It’s “Fake”: Effects of Discounting Cues in Online Comments on Information Quality Judgments and Information Authentication.” Mass Communication and Society 24 (4): 527–552. https://doi.org/10.1080/15205436.2020.1870143
  • Jang, S. M., and J. K. Kim. 2018. “Third Person Effects of Fake News: Fake News Regulation and Media Literacy Interventions.” Computers in Human Behavior 80: 295–302. https://doi.org/10.1016/j.chb.2017.11.034
  • Jones-Jang, S. M., T. Mortensen, and J. Liu. 2021. “Does Media Literacy Help Identification of Fake News? Information Literacy Helps, but Other Literacies Don’t.” American Behavioral Scientist 65 (2): 371–388. https://doi.org/10.1177/0002764219869406
  • Koo, A. Z. X., M. H. Su, S. Lee, S. Y. Ahn, and H. Rojas. 2021. “What Motivates People to Correct Misinformation? Examining the Effects of Third-person Perceptions and Perceived Norms.” Journal of Broadcasting & Electronic Media 65 (1): 111–134. https://doi.org/10.1080/08838151.2021.1903896
  • Kunst, M., P. Porten-Cheé, M. Emmer, and C. Eilders. 2021. “Do “Good Citizens” Fight Hate Speech Online? Effects of Solidarity Citizenship Norms on User Responses to Hate Comments.” Journal of Information Technology & Politics 18 (3): 258–273. https://doi.org/10.1080/19331681.2020.1871149
  • Luo, M., J. T. Hancock, and D. M. Markowitz. 2022. “Credibility Perceptions and Detection Accuracy of Fake News Headlines on Social Media: Effects of Truth-bias and Endorsement Cues.” Communication Research 49 (2): 171–195. https://doi.org/10.1177/0093650220921321
  • McNair, B. 2018. Fake News: Falsehood, Fabrication and Fiction in Journalism. London: Routledge.
  • Morgan, D. L. 1997. Focus Groups as Qualitative Research. 2nd ed. London: Sage Publications.
  • Nelson, J. L., and H. Taneja. 2018. “The Small, Disloyal Fake News Audience: The Role of Audience Availability in Fake News Consumption.” New Media & Society 20 (10): 3720–3737. https://doi.org/10.1177/1461444818758715
  • Nielsen, R. K., and L. Graves. 2017. ‘News You Don’t Believe’: Audience Perspectives on Fake News. Oxford, UK: Reuters Institute. https://reutersinstitute.politics.ox.ac.uk/our-research/news-you-dont-believe-audience-perspectives-fake-news.
  • Pang, N., and J. Ng. 2017. “Misinformation in a Riot: A Two-step Flow View.” Online Information Review 41 (4): 438–453. https://doi.org/10.1108/OIR-09-2015-0297
  • Papapicco, C., I. Lamanna, and F. D’Errico. 2022. “Adolescents’ Vulnerability to Fake News and to Racial Hoaxes: A Qualitative Analysis on Italian Sample.” Multimodal Technologies and Interaction 6 (3): 20. https://doi.org/10.3390/mti6030020
  • Perez, S. 2022. “Twitter Expands Misinformation Reporting Feature to More International Markets.” San Francisco, USA: TechCrunch. https://tcrn.ch/33NavkF.
  • Porten-Cheé, P., M. Kunst, and M. Emmer. 2020. “Online Civic Intervention: A New Form of Political Participation under Conditions of a Disruptive Online Discourse.” International Journal of Communication 14: 514–534.
  • Reuters Institute. 2020. “Digital News Report 2020.” Retrieved August 10, 2023, from https://www.digitalnewsreport.org/survey/2020/overview-key-findings-2020
  • Rosen, G. 2021. “How We’re Tackling Misinformation across Our Apps.” Meta. Retrieved August 10, 2023, from https://about.fb.com/news/2021/03/how-were-tackling-misinformation-across-our-apps/.
  • Rösner, L., S. Winter, and N. C. Krämer. 2016. “Dangerous Minds? Effects of Uncivil Online Comments on Aggressive Cognitions, Emotions, and Behavior.” Computers in Human Behavior 58: 461–470. https://doi.org/10.1016/j.chb.2016.01.022
  • San Martín, J., F. Drubi, and D. Rodríguez Pérez. 2020. “Uncritical Polarized Groups: The Impact of Spreading Fake News as Fact in Social Networks.” Mathematics and Computers in Simulation 178: 192–206. https://doi.org/10.1016/j.matcom.2020.06.013
  • Scheufele, D. A., and N. M. Krause. 2019. “Science Audiences, Misinformation, and Fake News.” Proceedings of the National Academy of Sciences 116 (16): 7662–7669. https://doi.org/10.1073/pnas.1805871115
  • Shu, K., and H. Liu. 2019. Detecting Fake News on Social Media. San Rafael, CA: Morgan Claypool.
  • Sun, Y., J. Oktavianus, S. Wang, and F. Lu. 2022. “The Role of Influence of Presumed Influence and Anticipated Guilt in Evoking Social Correction of COVID-19 Misinformation.” Health Communication 37 (11): 1368–1377. https://doi.org/10.1080/10410236.2021.1888452
  • Sunstein, C. 2001. Echo Chambers: Bush Vs. Gore, Impeachment, and beyond. Princeton, NJ: Princeton University Press.
  • Swire, B., A. J. Berinsky, S. Lewandowsky, and U. K. Ecker. 2017. “Processing Political Misinformation: Comprehending the Trump Phenomenon.” Royal Society Open Science 4 (3): 160802. https://doi.org/10.1098/rsos.160802
  • Tandoc, E. C., D. Lim, and R. Ling. 2020. “Diffusion of Disinformation: How Social Media Users Respond to Fake News and Why.” Journalism 21 (3): 381–398. https://doi.org/10.1177/1464884919868325
  • Tandoc, E. C., R. Ling, O. Westlund, A. Duffy, D. Goh, and Z. W. Lim. 2018. “Audiences’ Acts of Authentication in the Age of Fake News: A Conceptual Framework.” New Media & Society 20 (8): 2745–2763. https://doi.org/10.1177/1461444817731756
  • Tham, Y. 2019. “Singapore’s Fake News Law to Come into Effect.” The Straits Times, October 2. https://www.straitstimes.com/politics/fake-news-law-to-come-into-effect-oct-2
  • The Global Statistics. 2022. “Singapore Social Media Statistics 2022: Most Popular Platforms.” The Global Statistics. Retrieved August 10, 2023, from https://www.theglobalstatistics.com/singapore-social-media-statistics/#:~:text=The%20most%20popular%20social%20media,and%2046.6%25%20use%20Facebook%20Messenger.
  • TikTok Safety Centre. n.d. “Covid-19 Resources.” TikTok. Retrieved August 10, 2023, from https://www.tiktok.com/safety/en/covid-19.
  • Tracy, S. 2013. Qualitative Research Methods: Collecting Evidence, Crafting Analysis, Communicating Impact. Malden, MA: Wiley-Blackwell.
  • Trninić, D., A. K. Vukelić, and J. Bokan. 2022. “Perception of “Fake News” and Potentially Manipulative Content in Digital Media—a Generational Approach.” Societies 12 (3): 1–24. https://doi.org/10.3390/soc12010003
  • Tsang, S. J. 2022. “Issue Stance and Perceived Journalistic Motives Explain Divergent Audience Perceptions of Fake News.” Journalism 23 (4): 823–840. https://doi.org/10.1177/1464884920926002
  • Twitter. 2021. “Civic Integrity Misleading Information Policy.” Twitter Help Centre. Retrieved August 10, 2023, from https://help.twitter.com/en/rules-and-policies/election-integrity-policy.
  • van der Linden, S., C. Panagopoulos, and J. Roozenbeek. 2020. “You Are Fake News: Political Bias in Perceptions of Fake News.” Media, Culture & Society 42 (3): 460–470. https://doi.org/10.1177/0163443720906992
  • Waddell, T. F. 2018. “What Does the Crowd Think? How Online Comments and Popularity Metrics Affect News Credibility and Issue Importance.” New Media & Society 20 (8): 3068–3083. https://doi.org/10.1177/1461444817742905
  • Wagner, M. C., and P. J. Boczkowski. 2019. “The Reception of Fake News: The Interpretations and Practices That Shape the Consumption of Perceived Misinformation.” Digital Journalism 7 (7): 870–885. https://doi.org/10.1080/21670811.2019.1653208
  • Wilhelm, Claudia, Sven Joeckel, and Isabell Ziegler. 2020. “Reporting Hate Comments: Investigating the Effects of Deviance Characteristics, Neutralization Strategies, and Users’ Moral Orientation.” Communication Research 47 (6): 921–944. https://doi.org/10.1177/0093650219855330
  • Wu, S. 2018. “Uncovering Alternative ‘Journalism Crisis’ Narratives in Singapore and Hong Kong: When State Influences Interact with Western Liberal Ideals in a Changing Media Landscape.” Journalism 19 (9-10): 1291–1307. https://doi.org/10.1177/1464884917753786
  • Zimdars, M., and K. McLeod. 2020. Fake News: Understanding Media and Misinformation in the Digital Age. Cambridge: The MIT Press.
