Journal of Media Ethics
Exploring Questions of Media Morality
Volume 37, 2022 - Issue 2
Research Article

Digital Promotion of Suicide: A Platform-Level Ethical Analysis

Pages 108-127 | Received 07 Nov 2020, Accepted 17 Mar 2022, Published online: 29 Mar 2022

ABSTRACT

This article utilizes Aristotelian and Kantian philosophies to probe the social responsibilities of internet intermediaries that in one way or another assist and promote suicide. Striking a balance between freedom of expression and social responsibility, it is argued that several actors should be involved in restricting or eliminating live-streaming suicide, sites that encourage and facilitate suicide, and insult forums that drive people, especially adolescents, to take their own lives. The remediating actors are: commercial social media/website owners through their moderators; voluntary, non-profit, NGO “public defenders”; internet platform providers; regulatory agencies based on legislative authority; and advertisers. Practical remedies are suggested for each of these actors, noting as well important exceptions and caveats regarding the respective solutions.

“I think of a hero as someone who understands the degree of responsibility that comes with his freedom.” – Bob Dylan

This article is concerned with highly sensitive issues – suicide encouragement, suicide methods, and suicide information provision – focusing on the responsibilities of social media intermediaries and other internet-related actors in dealing with these cyber-suicide aspects.

One real-life example among many can illustrate the depth of the general problem. In 2008, 19-year-old Abraham Biggs publicly took his own life by overdosing on pills. His suicide was live-streamed on the Justin.tv digital platform. Viewers were able to watch Biggs as he swallowed the pills and subsequently collapsed on his bed unconscious. Hundreds of people watched Biggs dying. Some viewers egged him on to take more pills. Hours passed until someone decided to notify the police, who arrived at the scene only to find Biggs already dead (Fox News, Citation2008; Johnson, Citation2008; Thompson, Citation2008; Vieru, Citation2008). Afterward, Justin.tv removed the Biggs video as well as the accompanying chat screen transcript. Justin.tv CEO Michael Seibel voiced his regret, saying that the company had tried to respect the privacy of Biggs and his family. Seibel maintained: “We have policies in place to discourage the distribution of distressing content, and our community monitors the site accordingly. This content was flagged by our community, reviewed, and removed according to our terms of service” (Kravetz, Citation2008). This raises the question of why suicide streaming was allowed in the first place, and why no restrictive flagging mechanisms were in place.

Similar events took place on other forums (BBC News, Citation2003; Synnott, Tzani-Pepelasi, & Ioannou, Citation2018). Other internet sites encourage suicide; several sites provide information as to how to end one’s own life; and there are also “insult forums” in which abusers and their victims gather to give and receive verbal abuse – berating others in the most derogatory fashion – that sometimes drives troubled people to end their lives (Lazarus & Ryan, Citation2018; News, Citation2006). Several questions arise: What is the rationale for such forums? How do managers of such online forums justify themselves? Do they believe they are providing an important service? How do they articulate that mission to their viewers/members? The article addresses these questions, rebutting the explanations and justifications that moderators of such forums use to condone their conduct. A case in point is Paltalk, an internet provider that promotes suicide and anti-social activities. We examine its rationale and criticize its mode of conduct.

A related question: Why would anyone choose to partake in such forums? Some people have low self-esteem, others are trying to deal with trauma, and there are also people who crave attention, believing that abuse is the only way for them to connect with others (Synnott et al., Citation2018). Another explanation concerns human curiosity and fascination with taboo words: such insult forums free people to enjoy using “forbidden” words that are not legitimate in polite society. Indeed, two-thirds of swear words are linked to personal and interpersonal expressions of anger and frustration (Jay, Citation2000). Thus, given the potential tragic consequences of “suicide speech” (whether by suicidal individuals or by those egging them on), this article addresses the following questions: Should internet companies create, facilitate, or even allow such forums to exist? If yes, under what conditions (i.e., close supervision, regulation, legislation)? For which types of users?

Already in the late 1990s, research studies appeared regarding potential ways the internet might influence suicide (Baume, Cantor, & Rolfe, Citation1997; Thompson, Citation1999). Thompson (Citation2005, p. 278) noted that ethics as an academic discipline and as a concrete practice tended to focus either on relations between individuals or on social structures as a whole, but not on intermediary associations, of which institutions are the most durable and influential. Focusing attention on the role of internet intermediaries is both timely and necessary.

To many readers, there would not seem to be a dilemma here as one could easily consider such active encouragement of suicide to be clearly morally impermissible. Nevertheless, at least two dilemmas arise. First, there are instances where suicide is permissible (by law), and thus one can debate whether any medium should encourage such suicide or merely inform the public about the possibility. Second, any regulation and certainly proscription of suicide encouragement could potentially be used as a cudgel to restrict the legitimate power of the traditional press (print and electronic) as well. This article addresses both issues.

Section II briefly surveys the concepts of moral and social responsibility, as ethical reasoning is essential to building norms for ethical practice by internet intermediaries – just as it has been essential to developing ethical norms for journalistic institutions (Johnson, Citation2017; McQuail, Citation2003; Plaisance, Citation2005, Citation2007). Aristotelian and Kantian ethics constitute the basis for such norm development, highlighting the concepts of personal dignity as well as moral and social responsibility.

Section III discusses social networking and suicide, while Section IV analyzes the responsibilities of internet intermediaries – private companies that provide digital platforms and facilitate the use of communications and knowledge, such as Internet Service Providers (ISPs), internet Web Services, search engines, and social media. Finally, Section V offers a nuanced discussion of possible remedies, especially regarding sites that encourage depressed, distressed and temporarily vulnerable people to end their lives.

The ethics of moral and social responsibility

Aristotle was the first philosopher to highlight the importance of moral responsibility (Nicomachean Ethics III.1–5, 1962), stating that it is appropriate to praise and blame people on the basis of their actions. However, it is important to distinguish between voluntary and coerced acts (Citation350 BCE, Book V). Thus, a “decision” can only result from free deliberation that expresses the agent’s conception of what is good (1962, 1111b15-1113b22): “justice is that in virtue of which the just man is said to be a doer, by choice, of that which is just” (Aristotle, Citation350 BCE, Book V). Aristotle emphasized that people are responsible for their conduct, identifying the conditions under which one can hold a moral agent blameworthy or praiseworthy for some particular action. We do not act rightly because we have virtue or excellence, but rather we have those because we have acted rightly. Thus, by moral responsibility he meant that autonomous agents understand the options before them, have access to evidence required for making judgments about the benefits and hazards of each option, and are able to weigh the relative value of the consequences of their choice.

Moral responsibility for bad actions ultimately traces back to acting against one’s better judgment, termed akrasia (Erginel, Citation2016; FitzPatrick, Citation2008; Lawrence, Citation1988). If agents do something bad, either they do so in full knowledge that they should not be doing it, which is clear-eyed akrasia, or they are acting from ignorance. In the former case they will be held responsible. In the latter case, their responsibility depends on whether their ignorance is culpable – generally, whether they were blameworthy for some earlier akratic failure that gave rise to that ignorance (Cohen-Almagor, Citation2015; FitzPatrick, Citation2008; Robichaud & Wieland, Citation2017).

Morality dictates respect for human life. Several important premises underlie this. First, human life is precious: what Kant called dignity (Würde), an intrinsic value that has no equivalent and is beyond any price. Kant (Citation1997) explained that things that are an end in themselves have not merely a relative worth but an inner worth, that is, dignity. Kant (Citation1969) identified dignity with moral capacity: morality, humanity and dignity are interconnected values; human beings are “objective ends.” All people should be respected qua persons. Dignity is not an attribute of one’s gender, race, religion, culture, class or any other characteristic; it is universal.

Second, almost all people value life, perceiving it as something intrinsically important and worth fighting for. Therefore, as a society we should strive to render the appreciation of life the default option, not easily forgone. When people end their own life, the usual sense of loss is amplified because life could potentially have been saved with positivity, compassion and care.

Third, personal dignity has consequences, requiring us to take responsibility for our actions. As Dworkin (Citation2011, pp. 210–211) suggests, the concept of dignity needs to be associated with the responsibility that each person must take for her own life, regarding both herself and others. “The buck stops here,” writes Dworkin.

Such personal responsibility leads to the fourth principle, following Kant (Citation1997): one has a duty to preserve one’s own life, as this is consistent with the idea of humanity. A human being is not merely a means to an end. All the more so, people cannot dispose of others by hurting or killing them – or, in the issue before us, by serving as accomplices to their killing themselves. As Viktor Frankl (Citation2004) testified, it is possible to find meaning in life even in the abyss. Frankl, who survived Nazi concentration and death camps, also believed that a person can respond to life only by being responsible. Whether one is the actor or the passive receiver, responsibility should not be avoided. It must be borne. A man who becomes conscious of the responsibility he bears toward another human being will never be able to throw away his life. He knows the “why” for his existence and will be able to bear almost any “how” (Frankl, Citation2004).

Social networking and suicide

For 4.6 billion people, about 60 percent of the world’s population (Internet Usage Statistics, Citation2020), the internet is a vital and indispensable part of their lives. Internet companies that enable and support contemporary communication are more powerful than ever before; but such power should be accompanied by great responsibility (Curran & Seaton, Citation2018; Mac Síthigh, Citation2019; Tushnet, Citation2008; Zingales, Citation2013).

Social responsibility refers to the duty of individuals, groups, institutions, corporations, and governments to improve societal living conditions and refrain from knowingly causing harm. Such responsibility is ethical in nature: out of consideration for others we take active steps to do good, improve social wellbeing, foster human rights, and avoid harm (Christians, Citation2019; Taddeo & Floridi, Citation2016). This includes both the private and public sectors supporting others when they are in danger.

Regarding public, for-profit institutions, common types of Corporate Social Responsibility (CSR) initiatives include corporate contributions or philanthropy, employee volunteerism, community relations, becoming an outstanding employer for specific disadvantaged groups, and prioritizing environmental considerations. CSR initiatives evince good corporate citizenship, strong ethical practices, and ecologically sustainable business practices both on and offline (Abend, Citation2014; Ihlen, Bartlett, & May, Citation2011; Kolb, Citation2018; Luetge, Citation2017; Novak, Citation1996; Sena Gawu & Inusah, Citation2019; Stoll, Citation2006). Corporations can (some would argue should) have a business and moral conscience as well as an ethical compass that guides them to act in a socially responsible fashion while also respecting personal dignity (Goodpaster & Matthews, Citation1982; Ruggie, Citation2013; Tripathy & Itishri, Citation2017). A responsible society provides its citizens with basic social and economic security, community, social inclusion, and avenues to advance and promote individual capabilities (Abbott, Wallace, & Sapsford, Citation2016; Margalit, Citation1998). Liberal societies accept the principles of not harming others based on justice, protecting human rights, and ensuring that known, transparent, and fair rules of law prevail (Ackerman, Citation1980; Cohen-Almagor, Citation2021; Feinberg, Citation1980; Rawls, Citation1971). Societies that provide these conditions enable individual empowerment. Suicide undercuts such empowerment.

Though liberal democracies do not penalize people who survive suicide attempts or who contemplate suicide, they should neither enable nor allow suicide encouragement, but should rather provide, to one extent or another, psychological support to those who struggle. However, liberal societies also have other high-level values, occasionally in conflict with human dignity and life preservation. A foundational principle is freedom of expression, which can clash with personal human dignity and challenge social responsibility. As part of this complex realm, internet companies are interested in people using their services, but this cuts two ways: on the one hand, securing profit and, on the other hand, acting with their customers’ health and safety in mind. Thus, one would think that they would not provide platforms for suicide idolization, but in reality they do, perhaps over-prioritizing the short-term profit motive. In large part, this is because customers have the ability to “exit” if not satisfied with the services offered (Hirschman, Citation1970). Caught in this dilemma, some internet intermediaries enable and even facilitate “popular” suicide forums in which people who contemplate suicide receive information as to how to do it, are encouraged by others to continue with their suicide plans, are able to organize suicide pacts, and finally are egged on to actually depart life (David-Ferdon & Feldman Hertz, Citation2007; Warf, Citation2018).

Here one must distinguish between two broad categories of suicide planners: 1) the overly emotional, temporarily despondent individual; and 2) patients at the end of life who are able to make a rational decision. There are spirited debates regarding the legitimacy of suicide for the second type (Jackson & Keown, Citation2012). The present discussion will focus on the first category.

Suicide platforms provide victims of bullying and cyberbullying with information about how to end life. Because bullying and cyberbullying are a major challenge (Hinduja & Patchin, Citation2009; Kowalski, Limber, & Agatston, Citation2008; Navarro, Larrañaga, & Yubero, Citation2018), and because many bullied teenagers consider suicide and some act to end their lives (Bertolotti & Magnani, Citation2013; Cava, Tomás, Buelga, & Carrascosa, Citation2020; Cohen-Almagor, Citation2018; Gerson & Rappaport, Citation2011; Hinduja & Patchin, Citation2019; John et al., Citation2018; Kwan et al., Citation2020; Livingstone, Haddon, Görzig, & Ólafsson, Citation2011; Livingstone, Haddon, & Görzig, Citation2012; McMahon, Reulbach, Keeley, Perry, & Arensman, Citation2010; Megan Meier Foundation, Citationn.d.; Williams & Guerra, Citation2007), it is morally questionable and socially irresponsible to provide teenagers (especially) with platforms where they can exchange views on suicide, urge them to follow suicidal thoughts and idolize the act of suicide (Chang, Xing, Tin Hung Ho, & Siu Fai Yip, Citation2019).

Liberal democracy has an obligation to protect vulnerable third parties, especially children and adolescents. Research by Mars et al. (Citation2015) surveying young adults found that a troubling 22.5 percent reported self-harm and suicide-related internet use, including 7.5 percent who searched for suicide information. Of those who had actually harmed themselves with suicidal intent, 70 percent reported suicide-related internet use.

However, there should be limits regarding encouraging adults as well. In Japan, for example, websites offer information on suicide and its methods (Hagihara, Tarumi, & Abe, Citation2007) including “exit bags” (a do-it-yourself suicide kit). One site called on people to “save the planet, kill yourself.”Footnote1 It advised people to “do a good job” when they choose suicide, saying: “Suicide is hard work. It’s easy to do it badly or make rookie mistakes. As with many things, the best results are achieved by thorough research and careful preparation.”Footnote2 Another site demonstrated various methods of suicide including lethal doses of poison, their availability, estimated time of dying, and degrees of certainty (Bever, Citation2019; Malamuth, Linz, & Yao, Citation2005).

Chatrooms and discussion forums may also pose a risk for vulnerable people by raising the option and then influencing decisions to die by suicide. Some people have reported being encouraged on suicide web forums to use suicide as a way to solve their problems (Biddle, Donovan, Hawton, Kapur, & Gunnell, Citation2008). Such conversations can also foster peer pressure toward suicide, encourage suicide idolization, or facilitate suicide pacts: an agreement between two or more people to die by suicide at a particular time and often by the same lethal means (Rajagopal, Citation2004). These interactions can reduce people’s doubts or fears when they are ambivalent about suicide (Luxton, June, & Fairall, Citation2012). To counteract this tendency, the United Kingdom’s Coroners and Justice Act 2009 amended the Suicide Act 1961 to consolidate and simplify previous legislation and to clarify that the law applies to online actions in exactly the same way as it does offline (Criminal Law Policy Unit, Ministry of Justice, Citation2010). Under section 2(1) of the 1961 Act, it is an offense to encourage or assist the suicide or attempted suicide of another person. The offense does not require that the suspect know or be able to identify the other person. Crown Prosecution Guidance states that: “In the context of websites which promote suicide, the suspect may commit the offence of encouraging or assisting suicide if he or she intends that one or more of his or her readers will attempt to commit suicide” (Crown Prosecution Service, Citation2014). A later 2015 report noted the “limited systematic evidence” on the influence of social media on self-harm and suicidal behavior (HMG, Citation2015). Still, the report mentions research showing that the internet creates channels of communication that can be misused to cyberbully peers, which is correlated with increased risk of self-harm, suicidal ideation, and depression.

Correlations have also been found between internet exposure and violent methods of self-harm (Daine et al., Citation2013). In this context, it is worth mentioning Glenn Hughes, 39, who was treated for depression and had surfed websites that discuss and promote suicide. Hughes had obtained a videotape from the internet which demonstrated the method he chose to kill himself. His brother said: “If my brother hadn’t gone onto the internet I think that he wouldn’t have been so successful in what he tried to do” (News, Citation2006). Similarly, 43-year-old Leon Jenkins took his own life in July 2018 while livestreaming his suicide on an internet forum called Paltalk where users could freely – and viciously – insult, berate, provoke and abuse each other. Paltalk was linked to two other suicides (Lazarus & Ryan, Citation2018).

Biddle et al. (Citation2016) published research that investigated changes between 2007 and 2014 in material likely to be accessed by suicidal individuals searching for methods of suicide. The study showed a clear trajectory: constant growth of suicide blogs and discussion forums (from 3 percent of hits in 2007 to 18.5 percent of hits in 2014); an increase in hits linking to factual sites that detail and evaluate different methods of suicide (from 9 percent in 2007 to 21.7 percent in 2014). Hits for dedicated suicide sites increased from 19 percent to 23 percent, while formal help sites were less visible (decreasing from 13 percent to 6.5 percent). Overall, 54 percent of hits provided data on new, high-lethality methods.

Other studies clearly show the consequences of such conduct. Mitchell, Wells, Priebe, and Ybarra (Citation2014) found that youth who were exposed to websites that encourage self-harm or suicide were seven times more likely to say they had thought about killing themselves, and eleven times more likely to think about hurting themselves, even after adjusting for several known risk factors for thoughts of self-harm and suicide. Cases of cyber-suicide (i.e., attempted or successful suicides influenced by the internet) have been documented for a long time (Beatson, Hosty, & Smith, Citation2000; Biddle et al., Citation2008; Cubby, Citation2007; Thompson, Citation1999). In Britain alone, between 2001 and 2008 there were at least seventeen deaths involving chatrooms or sites that provide advice on suicide methods (Harvey, Citation2008).

A case in point: William Melchert-Dinkel, a licensed nurse, was convicted in 2011 on two counts for encouraging people to take their own lives, one in Britain (2005) and another in Canada (2008). He was very active in internet forums for group suicides, where people can meet to arrange their collective death. Melchert-Dinkel admitted to encouraging dozens of people on suicide websites to kill themselves, often by falsely entering into suicide pacts with them (Vitelli, Citation2013). Such clear akrasia by forum moderators who fail to acknowledge the dignity of the person, denying any moral and social responsibility, is highly problematic to say the least. Paul Kelly of the Papyrus charity, which works to prevent suicide in young people, said: “Some of these sites which incite or give advice on suicide are horrifying. They are encouraging vulnerable people to take their own lives” (Ungoed-Thomas, Citation2007).

A British study conducted in 2014–2016 with young people in the community and self-harm patients in hospital emergency departments explored the suicide-related, online behavior of samples of distressed users, inquiring into their purpose and the online content they chose to view. Among these young people, internet browsing was disorganized and lacked clear purpose. They “stumbled upon” various data including suicide methods. Users also pursued opportunities to interact with others and explore online help. On the other hand, self-harm patients with a history of suicidal behavior browsed the internet with a sense of purpose. They were strategically looking for suicide methods to maximize effectiveness. They consulted factual content and did not seek information about psychological help and support (Biddle, Derges, Goldsmith, Donovan, & Gunnell, Citation2018). The researchers concluded that further action is necessary to improve online safety. They recommended novel online help approaches to engage individuals experiencing a suicidal crisis. Undoubtedly, awareness of the nature of suicide-related internet use and how this may reflect an individual’s suicidal thinking could be beneficial to clinicians seeking to promote safety and indicate risk (Biddle et al., Citation2018).

The case of Shawn Shatto is illustrative, not only for the absence of clinician intervention but even more so for the site moderator’s lack of proactive intervention. In April 2019, 25-year-old Shatto joined an online forum that claimed to help people “discuss mental illness and suicide from the perspective of suicidal people.” By May 2019 Shatto, who struggled with severe depression and anxiety, was dead. Shatto’s family argued that the internet forum coached her on how to die. The forum managers argued that the information provided was for “educational purposes only,” pointing to a disclaimer which reads: “This is a pro-choice forum, not a pro-suicide forum. We are not a pro-suicide site, nor do we encourage anyone here to commit suicide … We are not responsible for what you do with that information” (Davis, Citation2019).

The managers and moderators of this and similar forums offer what they call “neutral space” to discuss the topic of suicide without censorship (Newsbeat, Citation2020). However, this excuse is weak because the space is frequented by individuals who push vulnerable people to take their own life. Promoting the idea of suicide is not neutral. It takes a stand, legitimizing self-destructive conduct. Moderators of such forums fail to acknowledge how they contribute to the problem by denying that their platforms in effect encourage some people to end their life. As far as they are concerned, only the user has the responsibility to decide what to do with the information provided about suicide, including detailed methods as to how to go about doing it “efficiently.”

In our view, managers and moderators of suicide forums are akratic people who lack what Aristotle, in Book III of Nicomachean Ethics, considered good will (Aristotle, Citation1962). Choice is important in having desirable ends and the relevant means to pursue those ends (Ibid., 1111b15-1113b22). Aristotle said that one is an apt candidate for praise or blame if and only if the action and/or disposition is voluntary, and a voluntary action is decided by the agent who exercises his free will, after becoming aware of what it is he wishes to do (Ibid., 1110a-1111b4).

Thus, we understand the internet intermediary’s moral responsibility to mean that as autonomous agents they understand the options before them, have access to evidence required for making judgments about the benefits and hazards of each option, and are able to weigh the relative value of the consequences of their choice. They comprehend causes for action, and are able to appreciate likely consequences of any course of action. In this context, the idea of conscientiousness is relevant. Responsible internet agents take upon themselves duties and responsibilities, intending to pursue positive goals.

Though Aristotle spoke about individuals, his approach can also be applied to businesses and organizations. As Aristotle put it, one cannot claim to be responsible only for noble acts while others are responsible for base acts – precisely what the above forum managers claimed in order to avoid responsibility for the consequences of their lack of supervision. Indeed, internet intermediaries who promote or even “merely” facilitate suicide fail to acknowledge the dignity of the person and what respect for human life entails. They do not see it as their duty to preserve life; thus, contra Kant, their attitude to life is callous and morally questionable. Per Aristotle, those managers and moderators are blameworthy as their action is voluntary. They are not compelled externally, and they are aware of what they are doing and causing. Aristotle regarded people who knowingly choose to act irresponsibly as unjust and vicious.

Furthermore, Kantian philosophy is of utmost relevance as guidance. In Groundwork of the Metaphysics of Morals (1997, chapter 2), Kant argued persuasively that suicide cannot be reconciled with the idea of humanity as an end in itself. If individuals consider escaping their circumstance by killing themselves, they are using personhood merely as a means to survive until the end of life. “But a man is not a thing [Sache], so he isn’t something to be used merely as a means, and must always be regarded in all his actions as an end in himself. So I can’t dispose of a man by maiming, damaging or killing him – and that includes the case where the man is myself.” Specifically for our purposes here, internet intermediaries should not actively assist people to destroy themselves.

Responsibilities of internet intermediaries

According to Kant (Citation1997), people who act with a sense of moral and social responsibility are beings who are ends in themselves in an elevated sense. It is a “morally good disposition” that makes a rational being “fit to be a member of a possible kingdom of ends” (Kant, Citation1997). Persons are ends in themselves only to the extent that they follow moral law, giving resonance to the passage that humanity has dignity insofar as it is capable of morality. Individuals who do not respect others as persons with equal standing under moral law and who abuse their power to exploit and undermine others force us to devise appropriate mechanisms against the anti-social challenges they pose.

Of course, responsibility for internet usage also falls on the users. However, insult forums (among other, similar platforms) show that one cannot rely solely on individuals’ behavior, especially when they are emotionally vulnerable. Thus, responsibility also (and perhaps primarily) extends to internet intermediaries that provide such platforms and facilitate suicide discussions. These companies possess immense power, but as noted above power also demands social responsibility, especially when human life is at stake.

The internet has evolved from American Advanced Research Projects Agency (ARPA) funding in the 1960s into a global complex network of networks that affects all walks of life (Cerf & Kahn, Citation1974; Cohen-Almagor, Citation2011; Kleinrock, Citation2008; Salus, Citation1995). Digital platforms invite and encourage impressive technological innovations that, to a large extent, assist humanity. However, the western world has been slow to devise ways to fight internet abuse, leaving much responsibility to the internet’s corporate intermediaries. Owners and managers of internet platforms have discretion as to whether their services are open to all or limited in one way or another, but invariably the default position is “open” and (virtually) unrestricted in content – for economic reasons.

For instance, Paltalk – priding itself on 100 million downloadsFootnote3 – provided insult chatrooms and was linked to the suicides of several users – Leon Jenkins, 43, Gregory Tomkins, 39, and Kevin Whitrick, 42 – who ended their lives following insults and abuse in Paltalk chatrooms (Lazarus & Ryan, Citation2018). The abusers seem to get perverse satisfaction from offending others, and this exploitation of human weakness seems mainly to increase the site’s popularity and profit. Paltalk attempts to justify its relatively laissez-faire approach to abusive user content. After the Jenkins suicide, its spokesperson said: “We operate a social platform with communities that are user-moderated, and the company is currently investigating the circumstances surrounding the incident … We have closed the chatroom, and will apply other corrective measures, including terminating the accounts of individuals who violated our terms of service” (Lazarus, Citation2018).

However, Paltalk managers knew what was happening on their platform but did not think that they had a duty of care regarding their vulnerable users. Even in July 2020, the Verbal Abuse Insults Room, self-described as “This Is A Verbal Abuse Room Where The Weak Will Not Survive,” was still active on Paltalk.Footnote4 Indeed, a Paltalk advertisement urged potential users to “Join Verbal Abuse Insults Fight Room + 5000 more chat rooms” for free.Footnote5 This statement was later deleted; Paltalk’s most recent version of its code of conduct (September 2021) now states: “Paltalk does not allow group titles that single out specific individuals, groups, organizations, corporations, races, religions etc … for the purpose of degrading or otherwise disparaging them.”Footnote6

Nevertheless, there is a wide gap between the Code and real-life experience. Paltalk has a consumer rating of 1.75 out of a maximum 5 stars. Consumers complain about frequent chat room problems. One reviewer wrote: “Paltalk fails members because it does not care about cyberbullying or cyberstalking on its platform … Beware of toxic people on this program ready to attack your person in any way they are able to for their entertainment.”Footnote7 Another customer advised: “Be careful with the rooms you’re visiting. There are a small handful of fairly decent rooms, but this place is ripe [sic] with people who are only online just to insult you.”Footnote8 One review is titled “A place for criminals, gang members and other degenerates to hang out in,” while another user warned: “Most of the chat rooms consist of the lowest trolls in society. So if you want to get insulted and ridiculed join pal talk.”Footnote9

Paltalk encourages people to devalue their existence and question whether they have sufficient reasons to live. While Paltalk managers surely realize that such abusive speech might lead to suicide, it appears that to make a profit they still allow it to continue while absolving themselves of any responsibility.Footnote10 Paltalk’s terms of service declare in bold, capital letters that the company does not bear responsibility for any kind of “damage.”Footnote11 Thus, Paltalk managers are morally (even if not legally) responsible for deaths that could have been avoided.

To be sure, many other internet intermediaries do adopt some form of morally and socially responsible policy and action, opting for types of self-regulation through codes of practice. Facebook, for instance, acknowledges that social media posts have become the new suicide letter. People say their final goodbye on Facebook, as was the case of Simone Back, 42, who posted a suicide message on Facebook before she took her own life (McVeigh, Citation2011).

In 2017, Facebook started to use machine learning to identify possible suicide or self-harm and to mobilize timely help to people in need. The technology flags certain phrases in posts and comments that suggest contemplation of suicide and self-harm (Facebook, Citation2022). Facebook technology that identifies possible suicide and self-injury statements is also integrated into Facebook Live (Facebook, Citation2022). People who are watching alarming videos can reach out to the person directly or report the video to Facebook. In grave cases, where Facebook’s Community Operations team is concerned about imminent danger of self-harm, the company may swiftly contact emergency services to conduct a wellness check (Facebook, Citation2022). In 2019, Facebook released new algorithm-based tools for helping individuals who are at risk of taking their own life, enabling friends, family members and strangers to directly reach out to the person who is considering suicide – or to report concerns directly to Facebook (Ashraf, Citation2019). Facebook should establish a dedicated team that would be attentive to users’ warnings that its platform might be used to livestream suicide and also swiftly remove suicide videos that are most distressing, especially for the families concerned (Nikolic, Citation2020; Warnock, Citation2020).

Such measures are important because they can save lives. Depression is often transient. Many people are able to pass through that dark phase and renew interest in living. After all, contemplating suicide and attempting suicide are not the same thing. The period between contemplating and acting is critical. At those crucial junctures, people should not be pushed to do something irreversible. Research shows that computer-based patient support such as the Comprehensive Health Enhancement Support System can greatly benefit patients. This system provides information and helps users make informed decisions. The system also increases patients’ participation and enables them to have greater control over their own health care (Nichols, Citation2018). Effective communication can provide meaningful input in improving health and in saving lives (Albrecht & Goldsmith, Citation2003; Rimal & Lapinski, Citation2009). But when people in need are encouraged to die, this kind of communication can result in despair and termination of life.

Consider Callie Lewis, who was diagnosed with Asperger’s syndrome at a young age, struggling with chronic depression and suicidal thoughts. She was 24 years old when she took her own life after a painful journey whose last stop included chats with strangers in online suicide forums. Callie’s family argues that she became “engrossed” in suicide websites which were “encouraging her how to do it” (Newsbeat, Citation2020). In the last months of her life, Callie stopped communicating with her family and friends; instead, she communicated with suicide forum users. The strangers on internet forums did not offer her hope and support in rekindling the zeal for life; instead, those superficial “advisers” encouraged her to take her own life: “Good luck. We all wish you a swift travel,” wrote one. “May you find peace, my friend,” said another (Newsbeat, Citation2020).

Vulnerable people access both harmful and helpful sites. Research shows that the internet presents potential risks but also offers opportunities for suicide prevention (Biddle et al., Citation2008; Mars et al., Citation2015). Internet companies should prioritize help, care and support for this needy population who are seeking succor – and develop techniques to prevent the dissemination of information and support for suicide.

Remedies

Any effective response requires the active participation, cooperation, and resource investment by technology companies, governments, and civil society as partners with a shared interest in combating cyber-suicide activities. However, first and foremost, the responsibility for ameliorating the problem lies with the social media gatekeepers. Unfortunately, until now many such intermediaries have not considered it incumbent upon them to do all in their power to engage in suicide prevention. Many of these major social discourse intermediaries are American companies that operate under two important and powerful shields: The First Amendment, and Section 230 of the Telecommunications Act (Citation1996). Both are open to constructive use but also to harmful abuse (Saunders, Citation2003). Section 230 states that online platforms are not responsible for material that their users post online: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Undoubtedly, Section 230 facilitated and enabled many positive online innovations (Lipinski, Buchanan, & Britz, Citation2002; Reidenberg, Debelak, Kovnot, & Miao, Citation2012; Yannopoulos, Citation2017). The protection of Section 230 brought about the rise of mega social platforms such as Facebook, but at the same time enabled an unprecedented amount of anti-social and dangerous behavior, precisely because of Section 230’s ability to immunize platform owners from any content liability. Thus, many managers of internet companies look only at the bottom line: more speech, any speech, is better for business. As Citron (Citation2021) notes, all that companies have had to think about until now is the optimization of ad revenue, without bearing legal responsibility for the harm that their relentless drive for increasing revenues has caused to public health and to individuals – for example, through websites that enable, encourage, and even promote suicide. However, as Gillespie (Citation2018) correctly argues: “ … the discussion about content moderation needs to shift … to a more expansive examination of the responsibilities of platforms, that moves beyond their legal liability to consider their greater obligations to the public” (p. 198).

The following remedial recommendations should be prefaced by some nuanced distinctions. As noted above, the issue of internet-related suicide encompasses a wide spectrum of activity. Thus, there is no one-size-fits-all remedy, and any practical policy application must not only take these distinctions into account, but also make clear to the public what sort of activities each platform’s policy tolerates and what it does not. These distinctions do not contradict each other but rather are complementary. First, there is an obvious difference between a site or forum that addresses patients at the end of life (i.e., issuing a disclaimer that the information is intended exclusively for such patients), as opposed to one that targets temporarily, emotionally distressed individuals. Several countries legally enable “physician-approved” and/or assisted suicide under certain conditions (e.g. The Netherlands, Belgium, Luxembourg, Switzerland, Canada and a growing minority of states of the United States) (Cohen-Almagor, Citation2004; Jones, Gastmans, & MacKellar, Citation2017; Sperling, Citation2019). In other words, one should differentiate between first-party “intent” and second-party “encouragement.” The former involves the “pull” of those with a bona fide need, seeking official, authorized suicide information; the latter relates to those in temporary distress who are “pushed” by non-professionals to consider unauthorized suicide. Whether or not one approves of legal suicide, the first type of platform works with an official imprimatur that provides legal, and to a large extent moral, cover to make such information available for those (within the parameters of the law) who pro-actively search for it. Second, and occasionally related to the first point, is the difference between sites that simply provide information in neutral fashion and those that actively encourage suicide – through forum “discussions” or other means (e.g., links to sites selling guns, home-making poisons etc.). Third, there exists a clear difference between sites addressed exclusively (or mainly) to adults, and those geared to adolescents or children. Earlier studies that probed media coverage of sensitive issues, such as suicide, highlighted the susceptibility of younger people to copycat activity. Sociologists who conducted independent studies of suicide patterns found significant copycat correlations. Mass media reports of teenage suicide appeared to lead to other teenage suicides (Phillips & Carstensen, Citation1986, Citation1988; Russell, Citation1995).

As there is reduced legal “responsibility” for any pre-adult behavior, the law should provide greater protection for minors regarding suicide – just as it does with pornography and other activities that are particularly harmful to vulnerable adolescents. Given the potentially dire consequences of teenagers accessing such sites, any legislation regarding suicide sites should require not only “disclaimers” but perhaps technological filters as well (e.g., AI-assisted language level identification, camera shot access, and other solutions). Indeed, technological advances now enable algorithmic (AI) emotion detection of text (Acheampong, Wenyu, & Nunoo-Mensah, Citation2020), and in the near future, voice-based emotion recognition (Wadhwa, Gupta, & Pandey, Citation2020). Fourth, the degree of preventive “burden” placed on internet sites/forums should be proportional to the corporate size of their owners, and/or the number of users of any specific social media platform (Gillespie, Citation2018). For instance, we can demand that Facebook spend as many resources as needed to ameliorate any “suicide” issues within its domain (including Instagram and WhatsApp). Much smaller companies could be required to apply techniques in graded fashion depending on their revenue, so as not to overly restrict internet-oriented innovation. Fifth, except perhaps for sites geared exclusively to patients at the end of life, a distinction should be made between sites that use advertising “pull” or “push” techniques and sites that offer neutral information. Commercial speech has always had less “constitutional” protection than political or non-economic speech (Brudney, Citation2012; Post, Citation2000; Shiner, Citation2003). Thus, search engines could be prohibited from accepting advertisements for certain (or all) suicide-related forums/sites, and their algorithms certainly should give low priority to sites that actively promote suicide.

A related question concerns the identity of who will initiate and then supervise/execute the solutions. Certain remedies will have only one “parent”; others will combine two or more. Broadly speaking, in roughly descending order these are: 1) corporate media through their internal moderators/managers; 2) voluntary, non-profit, NGO “public defenders”; 3) internet providers (e.g., search engines, servers, domain providers, website developers); 4) governmental regulators based on legislative authority; 5) advertisers and other financial supporters.

For reasons of maintaining as much of an open internet as possible, initial solutions should be sought from, and responsibility placed on, the owners and administrators of the “suicide sites” – watched over and pressured by the public defenders. If necessary, an additional line of defense would involve the basic internet providers. Only when all of these fail to significantly reduce internet-induced suicide should the government enter the picture directly through legislation and/or granting increased regulatory powers to relevant agencies (e.g., the Federal Communications Commission in the United States).

The following survey of possible remedial actions is not overly specific (although examples are provided) but rather deals in relatively broad principles, certainly regarding legislation and regulation. As the Committee for Economic Development (Citation2017) report notes regarding principles-based regulatory strategy: “Regulations are more likely to promote the public interest, even if they stay on the books for a long time … if they are based on broad principles rather than narrow rules. Broad economic principles last forever, but narrow legal rules can become stale over time.” As internet intermediaries are on the front line of the phenomenon, and have the expertise and resources, their role is primary in addressing this tragic problem. Facebook’s Opt In Suicide Prevention Tool (https://www.facebook.com/suicidepreventiontool/about?ref=page_internal) is an example of what can be done, even as a preventive measure. It is possible to devise further appropriate means to fight the ills of the internet (Fackler & Fortner, Citation2011; Goldsmith & Wu, Citation2006; Roessler, Hoffner, & van Zoonen, Citation2017; Ward, Citation2013). For example, internet intermediaries could establish integrity teams, instructing moderators to remove inappropriate content. Sites/forums should have easily identifiable and accessible hotlines to enable internet users to report individuals promoting or encouraging suicide, especially on those sites/forums that attract youth (Busby et al., Citation2020). Furthermore, platforms should take steps to facilitate and encourage the reporting of such harmful material (even if the initiating source cannot be determined).

Twitter and TikTok

Twitter and TikTok are useful examples of such an approach. Twitter guidelines (2021) instruct: “You may not promote or encourage suicide or self-harm.” Under this policy, promotion and encouragement of suicide include statements such as “the most effective,” “the easiest,” “the best,” “the most successful,” “you should,” “why don’t you.” Violations of this policy can occur via tweets, images or videos, live or taped. Twitter warns that violations of this policy include but are not limited to: encouraging someone to physically harm or kill themself; asking others for encouragement to engage in self-harm or suicide, including seeking partners for group suicides or suicide games; and sharing information, strategies, methods or instructions that would assist people to engage in self-harm and suicide. If this policy is violated, Twitter requires users to promptly remove such harmful content, and will temporarily lock the users out of their accounts before they can tweet again. If users continue to violate this policy, or if it is found that a certain account is dedicated to promoting or encouraging self-harm or suicide, it is then permanently suspended. Twitter (Citation2021) has also taken steps to prevent the spread of instructional material hosted on third-party websites by marking such links as unsafe.

In September 2021, TikTok announced on its platform that it has established a set of features to help users who are struggling with mental health issues and who are contemplating suicide. The features include guides on well-being and support for people who are struggling specifically with eating disorders. TikTok also established a search intervention feature that directs users to support resources if they search the word “suicide” (BBC, Citation2021).

Thus far, although companies in several countries (U.S., Germany, South Korea, and China) have exercised some responsibility in dealing with generally harmful online communication (Einwiller & Kim, Citation2020), continued calls for self-moderation regarding cyber-suicide have failed to convince many other internet companies to take appropriate, pro-active measures to ensure users’ security and safety.

Artificial intelligence (AI) and search engine algorithms

As mentioned above, another approach to self-regulation is for social media sites to incorporate artificially intelligent text parsing (and, in the future, voice parsing) to identify such suicide-related materials, either automatically and expeditiously removing them, or at least reporting the suspected content to human moderators. Although AI is not foolproof, it could be programmed to automatically delete content containing certain terms (e.g., “want to die”; “kill myself”), and report to moderators most borderline, ambiguous, or suspicious texts (e.g., “poison”; “depressed”). As Gillespie (Citation2018, p. 206) notes, while “these platforms now function at a scale and under a set of expectations that increasingly demands automation … the kinds of decisions that platforms must make, especially in content moderation, are precisely the kinds of decisions that should not be automated, and perhaps cannot be.” Of course, this entails human resource investment, as noted above.
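To make the two-tier logic described above concrete, the following is a minimal, illustrative sketch in Python. The phrase lists mirror the examples given in the text; the function name and categories are hypothetical, and a production system would rely on trained classifiers, contextual signals, and clinical guidance rather than simple keyword matching, in line with Gillespie’s caution about automation.

```python
# Minimal illustrative sketch (not any platform's actual system): a two-tier
# text filter that removes posts containing high-risk phrases and escalates
# ambiguous ones to human moderators. Phrase lists and names are hypothetical.

# High-confidence phrases: remove the post automatically and expeditiously.
AUTO_REMOVE_PHRASES = {"want to die", "kill myself"}
# Ambiguous terms: keep the post visible but queue it for human review.
REVIEW_PHRASES = {"poison", "depressed"}

def triage_post(text: str) -> str:
    """Return 'remove', 'review', or 'allow' for a single user post."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in AUTO_REMOVE_PHRASES):
        return "remove"   # expeditious removal, as described above
    if any(phrase in lowered for phrase in REVIEW_PHRASES):
        return "review"   # borderline content goes to a human moderator
    return "allow"

if __name__ == "__main__":
    for post in ["I just feel so depressed lately",
                 "I want to die tonight",
                 "Great weather today"]:
        print(triage_post(post), "->", post)
```

The point of the sketch is the division of labor it encodes: only the clearest cases are automated, while ambiguous ones are routed to human judgment, reflecting Gillespie’s argument that moderation decisions resist full automation.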

Regarding third-party internet providers, one cannot place too great a “censorship” burden on them. Nevertheless, some remedies can be suggested. For instance, search engine algorithms could easily deprioritize cyber-suicide sites (i.e., moving them down to lower search pages); they could attach warnings to the short descriptions of those sites. Search engines could refuse to accept paid advertisements linking to such sites. Domain name providers could at the very least refuse to accept URLs with variations of the word “suicide.” Website developers (e.g., WordPress) could remove content and/or block access to their platforms after egregious cyber-suicide content is uploaded.
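As an illustration of the deprioritization idea, the sketch below shows how a re-ranking layer might demote flagged cyber-suicide URLs and attach a support notice to their snippets. The flagged-domain list, demotion factor, and notice text are assumptions made for illustration only, not any search engine’s actual policy or algorithm.

```python
# Illustrative sketch only: demote flagged sites in search results and
# prepend a support notice to their snippets. All values are hypothetical.
from dataclasses import dataclass
from urllib.parse import urlparse

FLAGGED_SITES = {"prosuicide-forum.example"}   # hypothetical blocklist of flagged domains
DEMOTION_FACTOR = 0.1                          # pushes flagged results to lower pages
SUPPORT_NOTICE = "[Help is available - contact a local crisis line] "

@dataclass
class SearchResult:
    url: str
    score: float    # relevance score from the base ranker
    snippet: str

def rerank(results: list[SearchResult]) -> list[SearchResult]:
    """Demote flagged cyber-suicide sites and annotate their snippets."""
    for r in results:
        if urlparse(r.url).netloc in FLAGGED_SITES:
            r.score *= DEMOTION_FACTOR
            r.snippet = SUPPORT_NOTICE + r.snippet
    return sorted(results, key=lambda r: r.score, reverse=True)
```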

Government regulation

In the final analysis, when the above remedies are not implemented by any or all concerned, government intervention becomes unavoidable. To date, the only national law outlawing cyber-suicide (digitally aiding and abetting) is the Australian Criminal Code Amendment (Suicide Related Material Offenses) Act 2005 (Prinz, Citation2008, p. 479). Some American states have legislated as well. In May 2020, the Pennsylvania House of Representatives overwhelmingly (188–14) passed a bill named “Shawn’s Law,” in memory of the aforementioned Shawn Shatto, imposing a harsh penalty for those convicted of causing or aiding suicide of a minor or anyone with autism or an intellectual disability (Murphy, Citation2020).

Other countries are evaluating and considering what steps they should take. For instance, in 2018 British Secretary of State for Digital, Culture, Media and Sport Matt Hancock warned that large social media companies could be fined billions of pounds if they do not take steps to protect internet users (Busby, Citation2018; Mac Síthigh, Citation2019). The government’s 2019 Online Harms White Paper (HM Government, Citation2019) sets out how it intends to tackle a range of harmful content online, including content that encourages or assists suicide. It contains a new statutory duty of care designed to make social media companies take more responsibility for the safety of their users and for tackling the harm caused by content or activity on their service platforms.

To ensure compliance, an independent regulator will oversee enforcement (MacKley, Citation2019), with the power to require transparency reports from companies delineating what steps they are taking to protect people online. These reports will be made available to the public so that people can make informed decisions about their, and their children’s, internet use (Campbell, Citation2020).

The government’s strategy requires internet companies to take robust action (whether through human moderators or algorithms that eliminate self-harm or suicide content promotion), especially when such content provides graphic details of suicide methods (HM Government, Citation2019, pp. 72–73). Moreover, it instructs internet services “to act swiftly and proportionately when this content is reported to them by users” (HM Government, Citation2019, p. 73). Companies should also be required to block users responsible for activity that violates terms and conditions (MacKley, Citation2019, p. 87).

The role of NGOs and advertisers

Another source of cyber-suicide remediation is the non-profit sector. The British government has formed a partnership of suicide-prevention experts that includes The Samaritans, who work with online companies (The Samaritans, Citationn.d.). The Samaritans is an NGO that not only offers help and advice to people who are depressed and suicidal but also works to “bury” suicide sites in search results, making them difficult to find. Jacqui Morrissey, a spokeswoman for The Samaritans, explained: “We don’t want that popping up on the first pages of searches … If we can’t get rid of it, let’s try and bury it, let’s make it difficult to find for people so that when they are looking for information what they’re coming across is the helpful supportive information first and foremost” (Newsbeat, Citation2020).

Advertisers can play a part in establishing a safe online environment by not supporting cyber-suicide sites. Advertisers have the power to persuade internet owners to change their business model. An example is the #StopHateForProfit campaign in the U.S. that was launched in June 2020 by the Anti-Defamation League, Color of Change, Sleeping Giants, the NAACP, Free Press, and Common Sense, demanding that Facebook stop enabling hate speech to generate ad revenue (Major, Citation2020). A boycott was declared, and consequently Facebook’s stock dropped more than 8 percent, roughly a $50 billion devaluation. In response, Facebook CEO Mark Zuckerberg announced plans to revise the company’s Code of Conduct (Clayton, Citation2020; Steiner, Citation2020).

Codes are important and need to reflect socially responsible norms, but they need to be accompanied by corresponding conduct. Recently, The Wall Street Journal published a series of articles titled “The Facebook Files,” based on the detailed testimony of Frances Haugen, a former product manager at Facebook, which highlighted how Facebook made decisions that encouraged hate speech for profit, engaged in misinformation, knew that its platforms (especially Instagram) were particularly harmful to teenage girls, and ignored warnings about criminal activities that were facilitated by Facebook, including trafficking of women and drug cartel businesses (Frenkel, Citation2021; Horwitz, Citation2021). Haugen argued that Facebook put profit before people. A day after “60 Minutes” aired an interview with Haugen and after the company suffered an unprecedented site outage, Facebook shares fell by 4.9 percent (Rodriguez, Citation2021). Haugen subsequently appeared at a Congressional hearing delving into Facebook’s policies (Frenkel, Citation2021).

Several caveats are in order, as the problem does not lend itself to straightforward solutions. First, it bears reiterating that not all cyber-suicide sites are illegitimate. The websites of Swiss “aid-in-dying” societies, sites that inform Oregon residents about physician-assisted suicide (PAS), and Canadian resources on medical aid in dying (MAID) (Engelhart, Citation2021) are examples of legitimate sources of information. There is a significant difference between forums that explain how legal, assisted suicide is carried out in certain states and countries, and forums that promote suicide as a quick way to solve temporary problems.

Second, there will obviously be some pushback against what some consider an abridgement of free speech. Government initiatives along the lines suggested above therefore have to be accompanied by clear language stating that their aim is to support vulnerable people at risk, especially young people, and not to set off down a slippery slope toward wider censorship. Such a declaration (and exact legislative language) is necessary not only to allay public fears of overly broad state intervention into private matters, but also to reduce the chance that the courts will overturn such legislation on constitutional grounds (whether under the First Amendment in the United States or on the basis of long-standing constitutional convention in Great Britain and other democracies without a formal, written constitution).

Here one should note a major concern: would such a regulatory regime “slide” into covering legacy (traditional) media as well? If external supervision of any sort is instituted, would this set us on a slippery slope toward a significant abrogation of press freedom?

Though no one can guarantee that some legislators and social activists would not try to use this as a precedent against health communication by newspapers and the electronic media, a key distinction between social media and traditional media (whether offline or online) renders such a slippery-slope eventuality extremely unlikely. Social media content is largely created by the general public, not by media or journalism professionals. Of course, not all media professionals feel bound by the traditional standards of journalistic ethics, but aside from the relatively few poor-quality journalism venues, most do adhere to minimal standards, if only because the perception of serving the public (democratic) good adds value to their product (Souder, Citation2010), thereby also keeping unwanted legislation at bay. One would be hard-pressed to find any newspaper promoting or encouraging suicide (other than where permitted by law for very specific reasons, as noted above) – or even printing a “letter to the editor” in that spirit.

Johnson (Citation2017) discusses some similarities and differences between ethical analyses of digital intermediaries and journalistic institutions, but toward an “ultimate goal … to incorporate the former into the field of media ethics” (p. 17). Thus, if and when such a legacy medium did promote suicide, directly (written by its salaried journalists) or indirectly (as a platform for its audience), there is good reason for the legislative proscriptions offered above to apply to such legacy media as well (as was the case with the aforementioned UK Coroners and Justice Act 2009, which amended the Suicide Act 1961). Nevertheless, such proscriptions should be narrowly restricted to the active encouragement of non-sanctioned suicide, so that they don’t “bleed” into other controversial medical issues.Footnote12

Third and finally, none of the recommendations here absolves parents of responsibility for involvement in their children’s online activity. But parents need help: just as there are numerous “porn blockers” available (McKenna, Citation2020), so too cyber-suicide blockers could be offered to parents concerned about what their children might learn in trying to “alleviate” their anguished state.Footnote13

Conclusion

Tim Berners-Lee, the inventor of the web, argued (Sample, Citation2019) that “if we leave the web as it is, there’s a very large number of things that will go wrong. We could end up with a digital dystopia if we don’t turn things around.” Our aim here is to urge social media and broader internet intermediaries to invoke Aristotle’s Golden Mean: between every two extremes there is a mean that provides a standard of reasonableness and moderation. The more intermediaries seek the Golden Mean, the better they will secure the benchmarks of a life of wellness (Aristotle, Citation1962).

The present article is among the first in media ethics, possibly the first, to deal with the challenge of predatory websites that egg people on to take their own lives. Digital urging of suicide constitutes a significant public health problem in need of additional research to support the development, evaluation, and implementation of effective corrective technologies. We need to reach a common understanding regarding the responsibilities of internet intermediaries when content is designed to fatally harm others, and especially when it is disguised as support but in effect leads vulnerable people to take their lives during times of personal crisis. The question is not whether but how to require internet intermediaries to adopt responsible policies and proactive conduct with respect to the hosting and facilitation of websites that promote suicide and encourage vulnerable people to consider it.

Internet intermediaries in general must strive to ensure that their platforms are not abused by those wishing to facilitate suicide. Social media companies specifically have a moral and social responsibility to proactively prevent such radical anti-social activity. Too many young lives have been lost as a result of these intermediaries’ akrasia. If companies are not willing to rein in the phenomenon of suicide facilitation on their platforms, then other social and governmental actors will have to enter the picture.

The remedies offered in this article provide a solid starting point for debate. Trial and error will eventually find the best combination of approaches, tailored to the specific type of audience and to each national legal-cultural environment. However, the need for action is palpably clear given the extent of the cyber-suicide phenomenon and its danger to society. Seducing vulnerable people into ending their lives by exploiting their emotional distress is morally wrong. Internet intermediaries have to do the right and necessary thing by joining forces with other stakeholders and by adopting proactive policies to save lives. The proposals offered here constitute a necessary, if not yet complete, step in that direction.

Acknowledgments

The authors thank Dave Boeyink and the Editor and referees of Journal of Media Ethics for their many constructive comments.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The data that support the findings of this study are available from the corresponding author, [RC-A], upon reasonable request.

Notes

2 Ibid.

8 Ibid.

9 Ibid.

10 One customer wrote [spelling mistakes in the original]: “Paltalk will take your money and not live up to their own rules. They didn’t return any request for assistance when I reported the harassment and stalking. And they’ve said they won’t refund, either. Which is a breach of contract since they took the money. Apparently I’m not alone. I would say definitely a SCAM!”. Another customer said: “The site uses less than honorable tactics to ‘push’ users to purchase their product. While they offer ‘FREE’ use they do not tell you the free use can DAMAGE your PC.” https://www.sitejabber.com/reviews/paltalk.com.

12 Without moving too far off the topic at hand, one can think of a few other medical issues that deal directly with life and death and that might face the same media proscriptions (e.g., euthanasia). As with every important social value, freedom of the press cannot remain inviolable in the face of even the most egregious harm.

13 There does not seem to be any counterpart “suicide blocker” available for the internet or for social media. See McKenna (Citation2019). Searching Google for “suicide blocking apps” or “suicide blocker” returns many apps intended for psychological use, but none that enables parents or anyone else to actually block suicide sites.

References