One example of U.S. legislation that shields social media companies from liability for communication promoting hatred and violence is the safe harbor provision known as Section 230. Section 230 was not designed with social media platforms in mind, yet hate groups, promoters of racial violence, and terrorist organizations now use those platforms to spread their messages. According to Tarleton Gillespie,
“The U.S Congress crafted its first legislative response to online pornography, the Communications Decency Act, as part of a massive telecommunications bill. Passed in 1996 the CDA made it a criminal act, punishable by fines and/or up to two years in prison, to display or distribute ‘obscene or indecent’ material online to anyone under age eighteen. During the legislation process, the House of Representatives added a bipartisan amendment drafted by Christopher Cox and Ron Wyden, largely as a response to early lawsuits trying to hold ISPs and web-hosting services liable for defamation by their users. It carved a safe harbor for ISPs, search engines, and ‘interactive computer services providers’: so long as they only provided access to the Internet or conveyed information, they could not be liable for the content of that speech” (Gillespie 30). Although the CDA was passed into law, it was deemed unconstitutional because “the court ruled that CDA overreached in terms of what content was prohibited, it extended its protection of minors to the content available to adults, and it did not deal with the question of whose community norms should be the barometer for a network that spans communities” (Gillespie 30). What is important is that the safe harbor amendment remained unchanged. The safe harbor, also known as Section 230 of U.S. telecommunications law, has two parts. Gillespie explains that “The first part ensures that intermediaries that merely provide access to the Internet or other network services cannot be held liable for the speech of their users; these intermediaries will not be considered ‘publishers’ of their users’ content in the legal sense. . . The second, less familiar part adds a twist. If an intermediary does police what its users say or do, it does not lose its safe harbor protection by doing so” (Gillespie 30).
Tarleton Gillespie’s book chapter “The Myth of a Neutral Platform” suggests that social media platforms should be places where users are safe from hate speech, racial violence, and terrorist organizations, but that is not the case because of freedom of speech protections. According to Dan and Patrick’s presentation, there is a phenomenon they call “media interference,” which “makes it very easy to put out an ad that promotes hate speech or promotes your political views. It can be hard to deal with or pick out negative ads because authors are often protected under ‘freedom of speech’” (Patrick, Dan). In addition, their presentation notes that “Many groups of people are attacked every day on Twitter and Twitter is notoriously bad at monitoring it. Women, LGBTQ+, racial and ethnic minorities, and participants of subcultures are largely the groups that are targeted” (Patrick, Dan).
Gillespie, Tarleton. Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press, 2018.