Harassment is no new phenomenon on the Internet. As Penny Arcade astutely put it, Normal Person + Anonymity + Audience = Total ****wad. (Don’t click if cursing offends!) But for websites that provide a platform for user-generated content, harassment has presented interesting new problems. Are these websites liable for user content? Are they simply providers of the platform on which users place content, or are they themselves content providers?
Two years ago a Twitter user was harassing a woman named Ariel Waldman on the service. After she submitted a complaint to Twitter, an administrator removed the tweets from the public timeline. However, when Waldman asked that the user be banned, she received a reply that the comments were not illegal, simply malicious, even though Twitter explicitly lists harassment as a violation of its Terms of Service and reserves the right to remove content and users that violate them (though it is not obligated to). Twitter promised to revise its Terms of Service to clarify its enforcement policies, but would do nothing further, for fear of a lawsuit. Jason Goldman, the product manager for Twitter, stated in a post at getsatisfaction.com that “As a communication utility, Twitter does not get involved in disputes between users over issues of content except in very specific situations. Twitter is a provider of information, not a mediator.” In the same post, Goldman also stated that the company was working on improving its blocking feature, which could solve the problem of harassment. Problem solved, then: the issue was only with Twitter’s ToS, its enforcement of it, and its insufficient blocking tool. Not quite.
John Dozier Jr., an expert in Internet law, says that Twitter may have surrendered its safe harbor under the Communications Decency Act when it edited the user’s page, which it did when it removed the tweets from the public feed. “If they’ve edited content based on their subjective perspective, they put their immunity at risk and virtually their entire online business, because then they’d be liable to defamation claims or anything else that a publisher would,” Dozier says. What if a website had knowledge of the harassment – had seen it, had watched it continue, and was fully aware of the problems it was causing – but did not take any action? Could this lack of action, this implicit approval (or at least lack of disapproval) of the content, entangle it in the same liabilities that publishers face? Madeline Rodriguez takes this a step further in her article “Reexamining Section 230 of the CDA and Online Anonymous Speech: Defamation on the Internet and the Websites That Facilitate It.” She says, “if an ICP [internet content provider], and certainly an ICF [internet content facilitator], has reason to know, or anticipate, that at least some of the postings on their website are defamatory, false, anonymous, annoying, or harassing in nature, then they will be considered a co-author of the defamatory, false, anonymous, annoying, or harassing postings, and just as liable as the original poster, losing their CDA 230 immunity completely, and be subject to full monetary, legal, and equitable damages by an aggrieved party.” This seems to suggest that Twitter is a co-author of the harassing posts and, as a creator, could be at risk of a lawsuit.
Malcolm Coles, a UK blogger who believes that Italy made the right decision in ruling against Google, argues that these Google executives are absolutely responsible for the content published on their website, because they are responsible for the systems that police that content. While the problem that Madeline Rodriguez presents only seems relevant to cases in which website owners have actual knowledge of illegal activity, Coles suggests that Google should indeed be able to have actual knowledge of every incidence of illegal activity on its website. He blames Google for not having an adequate policing system, since the video was taken down two months after it was posted (though only two hours after an official request by the Italian police). Coles claims that Google facilitated the posting of the content through its negligence, and that it “has ongoing control over that platform,” which involves it in content publishing in a way that a “camera-maker or tissue-box manufacturer” is not.
Harassment presents a new problem for user-driven content websites. Unlike copyright infringement, it is difficult to find, often ambiguous, and obviously a sensitive subject. It also involves publishing original opinions and user-generated content in a way that infringement does not. While I certainly don’t think that websites such as Google, YouTube, Twitter, or the New York Times (for its comments sections) should be held liable for what others post, it is interesting to look at the fine line between facilitator and provider that these sites walk in the case of harassment.
You’re right that this is interesting, but I disagree with Dozier above, if not about the specific wording of the CDA, then about how the law is written. It seems to me that websites like Twitter, Facebook, and Google should have the same protections for harassment that they have for copyright infringement. In Waldman’s case, it was not a “subjective perspective” that led to the editing of the page, but a specific grievance addressed after a report was filed. This fits the thinking behind DMCA 512(c), where the provider acts only after notice of specific material rather than exercising editorial judgment. I understand that harassment is not copyright infringement, but why are there different standards governing two illegal acts perpetrated by users?
The way the laws are structured seems silly to me. If knowledge of harassment removes the CDA safe harbor, companies seem more likely to take the most hands-off route possible. Compounding the problem, the volume of content on these websites is far too large to monitor effectively. The law just encourages companies not to take any subjective action, which only serves to harm victims of harassment even more. I would think slightly flawed enforcement is better than nothing at all.