ABSTRACT
This article poses the following questions: Do digital intermediaries (sites such as Facebook or Twitter) have a duty to prevent or ameliorate harm to victims of vile speech? Or do they have a duty to ensure that as much speech as possible gets published on their platforms? To dissect this dilemma, this article offers the ethical rationales behind these competing goals. The rationale for promoting speech is founded on a concern for facilitating discourse democracy, while the rationale for preventing harm is based on a concern for human dignity. The article concludes by discussing issues of accountability in digital intermediaries’ self-regulatory regimes.
Notes
1. “Joint Statement” (January 11, 2015). Retrieved from https://www.bmi.bund.de/SharedDocs/Downloads/DE/Kurzmeldungen/gemeinsame-erklaerung.pdf?__blob=publicationFile
2. David E. Sanger and Amy Chozick, “Hillary Clinton Asks Tech Companies to Help Thwart ISIS Online,” New York Times (December 16, 2015): A21.
3. In a May 19, 2008, letter to Google chief executive officer Eric Schmidt, Senator Joe Lieberman of Connecticut demanded that Google remove all “terrorist training” videos from YouTube. Propounding the belief that “Islamist terrorist organizations rely extensively on the Internet to attract supporters and advance their cause,” Lieberman argued that “[b]y taking action to curtail the use of YouTube to disseminate the goals and methods of those who wish to kill innocent civilians, Google will make a singularly important contribution to this important national effort.” See “Lieberman Calls on Google to Take Down Terrorist Content,” U.S. Senate Committee on Homeland Security and Governmental Affairs (May 19, 2008), http://www.hsgac.senate.gov/media/majority-media/lieberman-calls-on-google-to-take-down-terrorist-content.
4. Eric E. Schmidt, “Eric Schmidt on How to Build a Better Web,” New York Times (December 7, 2015), http://www.nytimes.com/2015/12/07/opinion/eric-schmidt-on-how-to-build-a-better-web.html.
5. See Tomas A. Lipinski, Elizabeth A. Buchanan, and Johannes J. Britz, “Sticks and Stones and Words that Harm: Liability vs. Responsibility, Section 230 and Defamatory Speech in Cyberspace,” Ethics and Information Technology 4, no. 2 (2002): 143–158, at 156. Examples of democracies (with high numbers of Internet users) whose laws adopt such a co-responsibility liability regime include: the European Union (Directive 2000/31/EC, Recital 46); Brazil (Marco Civil da Internet, 2014, Art. 18–19); India (Information Technology Act, 2008, Art. 79); Japan (“Act on the Limitation of Liability for Damages of Specified Telecommunications Service Providers and the Right to Demand Disclosure of Identification Information of the Senders,” 2001, Art. 3); and South Africa (Electronic Communications and Transactions Act, 2002).
6. See Zeran v. America Online, Inc., 129 F.3d 327 (4th Cir. 1997), cert. denied, 524 U.S. 937 (1998) (holding that AOL did not materially contribute to allegedly defamatory statements posted by an anonymous user to AOL’s message board service claiming that Zeran was selling apparel bearing offensive slogans about the 1995 Oklahoma City bombing, despite the fact that Zeran asked that the posts be removed and that a large online community was privy to these allegedly defamatory remarks).
7. “Facebook Community Standards,” Facebook (March 15, 2015), https://www.facebook.com/communitystandards.