Defame Monster – by “Jake E”

Defamation

Situation: someone, anonymously, begins posting things about you on the Internet. Bad things, things that make you look like a scumbag. None of it is true, of course, but when a potential employer Googles you or grandma learns how to use Spokeo (tagline: “Not your grandma’s white pages.”) or one of the other dozens of data aggregators, you could be in very real, very big trouble for something you didn’t do.

What can you do? Well, obviously, your first concern is getting the website to take down the material. So you should just talk to the website, right? Unfortunately, it’s not always that simple.

It’s true that many websites have mechanisms built in for reporting defamation. YouTube allows users to “flag” things like videos containing hate speech and user harassment. Facebook has similar flagging capabilities for hate speech and bullying. But what actually happens when you use these mechanisms to report your defamation?

Before we dive in, let’s look at the legal basis for all of this.

The Law

What is defamation? The Restatement (Second) of Torts, § 559 defines defamatory communication like so: “A communication is defamatory if it tends so to harm the reputation of another as to lower him in the estimation of the community or to deter third persons from associating or dealing with him.” Simple enough.

But how is liability for defamation determined? § 558 of this Restatement states:

To create liability for defamation there must be:

  1. a false and defamatory statement concerning another;
  2. an unprivileged publication to a third party;
  3. fault amounting at least to negligence on the part of the publisher [with respect to the act of publication]; and
  4. either actionability of the statement irrespective of special harm or the existence of special harm caused by the publication.
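
For the code-inclined, the test boils down to a conjunction of the first three elements plus a disjunction for the fourth. Here’s a toy sketch of my own in Python—invented names, not anything from the Restatement, and certainly not legal advice:

    # Toy sketch of the Restatement § 558 elements; every name here is
    # illustrative, and this is no substitute for actual legal analysis.
    from dataclasses import dataclass

    @dataclass
    class Statement:
        is_false: bool                      # element 1: false...
        is_defamatory: bool                 # ...and defamatory
        published_to_third_party: bool      # element 2: unprivileged publication
        publisher_at_least_negligent: bool  # element 3: fault >= negligence
        actionable_per_se: bool             # element 4a: no special harm needed
        caused_special_harm: bool           # element 4b: special harm shown

    def defamation_elements_met(s: Statement) -> bool:
        return (s.is_false and s.is_defamatory
                and s.published_to_third_party
                and s.publisher_at_least_negligent
                and (s.actionable_per_se or s.caused_special_harm))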

Okay, well that seems fair. I can haz lawsuit now?

Not so fast. The Communications Decency Act (CDA) of 1996 adds a twist, with § 230 (c), which says: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” What does that mean?

Traditionally, publishers are held liable for content they publish—they have a responsibility to make sure nothing in it is defamatory, among other things. Distributors, on the other hand, are held to a much weaker standard of liability because they’re merely “passive conduits,” as Jonathan Zittrain calls them, though some liability still attaches. CDA § 230 (c) says that no service provider (here: website) can be treated as either the publisher or the original writer of content that someone else posted on the site. The point is to encourage websites to screen their own content without suddenly jumping from weak distributor liability to strong publisher liability—a website can filter its user-submitted content without fearing that missing something will land it in a lawsuit. This in turn encourages free speech on the Internet, because if sites instead had to filter all potentially objectionable content, everything would be heavily censored. (The reason Congress wanted to encourage filtering at all is the fear that a lack of filtering will lead to children stumbling upon pornography.)

So a website can’t be held to publisher liability standards for things its users post. But it still must adhere to weak distributor liability standards, right? Well, no. In Zeran v. America Online, Inc., the court held that distributor liability is merely a subset of publisher liability, so the failure to mention distributors in § 230 (c) changes nothing: such a website can’t be held liable even as a distributor. Something posted is entirely the poster’s fault.

So if you’ve been defamed, you can at least go after the person who’s been spreading lies about you on message boards, right? Well, maybe not. Barrett v. Rosenthal expanded on the Zeran decision, noting that because the CDA makes no distinction between an “active” and a “passive” user, anyone posting anything online isn’t liable for it so long as they’re not the original author. (The court notes that at some point a line will have to be drawn for when reposting becomes its own original act of content creation, but on the facts before it, “we need not consider when that line is crossed.” To date, that line still has not been drawn.)

So the person spreading lies about you isn’t liable as long as someone else said it first, and the websites on which all of these people are posting the lies can’t be held liable either. You can only sue the original poster for defamation, and that’s that.
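
Rendered as code, my reading of the rule after Zeran and Barrett reduces to a toy sketch like this (the names are mine, nothing official):

    # Toy sketch of CDA § 230 immunity as read by Zeran and Barrett.
    def liable_after_cda(party: str, is_original_author: bool) -> bool:
        if party == "website":
            # Immune as publisher (§ 230(c)) and, per Zeran, as distributor too.
            return False
        if party == "user" and not is_original_author:
            # Per Barrett, users who merely repost are immune as well.
            return False
        # Only the original author is left holding the bag.
        return is_original_author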

But surely websites are cooperative if you’ve got a real claim that someone is defaming you? Well, perhaps. Before we answer this, let’s look at the other reason a user-generated-content website would have its content filtered or removed: copyright.

The DMCA

The Digital Millennium Copyright Act (DMCA) of 1998 paved the way for copyright holders, in particular music and movie distribution companies, to aggressively protect their copyright claims, even in an age when sharing such files on the Internet has become commonplace. If YaleLawTech Records holds the copyright for “Don’t Mess with that CSS,” a popular song that it has found illegally used as a catchy background to a YouTube video, YLT Records simply has to send YouTube a takedown notice, as specified in the DMCA. YouTube even has a standard webform to fill out if you don’t want to go to the effort of hiring a lawyer.

When it receives such a takedown notice, it is in YouTube’s best interest (and that of similar sites) to simply remove the allegedly infringing content, without any examination of the validity of the copyright claim. By removing the video, YouTube becomes immune to copyright infringement liability under the DMCA’s safe harbor; without that immunity, it would be facing thousands of infringement suits. If YouTube instead chooses not to remove the content immediately, even if such inaction is justified, it invites lawsuits and legal fees that it could do without.

In short, when a website receives a DMCA takedown notice, it takes down the questionable material immediately, no questions asked, without worrying about whether the material actually infringed a copyright or whether the takedown notice was even valid.
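
From the host’s perspective, the whole decision procedure fits in a few lines. Here’s a minimal sketch of that incentive structure—my own toy code, with a hypothetical takedown() helper standing in for a site’s real removal machinery, not anyone’s actual implementation:

    # Minimal sketch of the incentive the DMCA safe harbor creates.
    def takedown(content_id: str) -> None:
        print(f"removed {content_id}")  # placeholder for real removal

    def handle_dmca_notice(content_id: str) -> None:
        # Note what's missing: any check that the notice is valid or that
        # the use actually infringes. Immediate removal secures the safe
        # harbor; investigating first only invites lawsuits and legal fees.
        takedown(content_id)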

Comparing Copyright to Defamation

So a website will take down allegedly copyrighted materials immediately because of the safe harbor the DMCA grants it for doing so. Shouldn’t it do the same for defamation? Unfortunately, because of the clear protections of the CDA, most websites have very little incentive to do so. If an item defames you, the website hosting it is immune from all defamation liability, so as long as keeping the material up isn’t actually hurting business (the way hate speech might), the site has no reason to act. So what do different websites actually do?

Facebook

Copyright Infringement

On filing DMCA takedown notices, Facebook says: “We will make every effort to review your report as quickly as we can. So long as everything appears to be in order, we will promptly remove or disable access to the content. We will also notify the user and, if requested, provide your report to the user. We will terminate repeat infringers when appropriate.”

Facebook has an automated webform for filing DMCA takedown notices, and additionally lists the mailing address of its DMCA Designated Agent, along with other information, on its copyright help page.

Defamation

Facebook’s Terms of Use state: “You will not bully, intimidate, or harass any user.” How closely this ties to defamation is unclear. Further, if the person being defamed is not a Facebook user, the only part of the Terms that applies is: “You will not use Facebook to do anything unlawful, misleading, malicious, or discriminatory,” which is weak and indirect in this case.

To report “abuse” (violations of the Terms of Use), Facebook has a “Report _____” link below every image, message, video, event, etc. Nowhere does Facebook mention how long it will take to respond to reported content, and it repeatedly states that “reporting … content doesn’t guarantee that they or it will be removed.”

YouTube

Copyright Infringement

YouTube is famous for its quick responses to DMCA takedown notices. It has, as mentioned above, a copyright complaint webform which can be filled out in less than a minute. For mass claims, YouTube created a Content Verification Program, whereby copyright holders (read: record labels) can submit an application verifying that they are a copyright holder (which implicitly acknowledges that the standard copyright complaint webform doesn’t really check one’s identity) and then use YouTube’s “industry-leading Content Identification and Verification Tools.” These tools essentially let copyright holders have YouTube automatically scan videos for their copyrighted content and then automatically apply predetermined actions—monetizing, recording tracking data, or blocking—to any video that infringes. With this, copyright holders no longer even need to find infringing content or alert YouTube to it; they can simply sit back and let the search spiders make them money.
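
The pipeline presumably looks something like the sketch below. YouTube hasn’t published its implementation, so every name and detail here is an assumption; in particular, real systems use perceptual fingerprints, not the exact-match hash I’m using as a stand-in:

    # Hedged sketch of a Content ID-style pipeline (all names hypothetical).
    import hashlib

    # Rightsholders register reference fingerprints plus a predetermined policy.
    REFERENCE_DB = {
        "hypothetical-fingerprint-of-dont-mess-with-that-css": "monetize",
    }

    def fingerprint(video_bytes: bytes) -> str:
        # Stand-in for a real perceptual audio/video fingerprint; a plain
        # hash only matches byte-identical files.
        return hashlib.sha256(video_bytes).hexdigest()

    def scan_upload(video_bytes: bytes) -> str:
        policy = REFERENCE_DB.get(fingerprint(video_bytes), "none")
        return {
            "monetize": "run ads and route the revenue to the rightsholder",
            "track": "record viewership data for the rightsholder",
            "block": "block the video",
            "none": "publish normally",
        }[policy]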

Defamation

YouTube’s Community Guidelines (themselves a section of the Terms of Service) say: “Things like … harassment … are taken very seriously. Anyone caught doing these things may be permanently banned from YouTube.” No timeframe, no standards, nothing.

YouTube, like Facebook, has a system of having users “flag” content as “inappropriate” and says only that its staff reviews flagged videos (no mention of comments, etc.) “24 hours a day, seven days a week.” Great.

MySpace

Copyright Infringement

MySpace seems to be behind the times in not having any automated webform for submitting DMCA takedown notices (the fastest method it offers is writing your own notice and emailing it in). For further evidence that MySpace is behind the times, note that its second method of communication is “facsimile.” MySpace does not give a timeframe for how quickly it will respond to such notices.

Defamation

Harassment and cyberbullying are against MySpace’s Terms of Use, and such activities can be reported using MySpace’s built-in all-purpose “Contact MySpace” form, including the offending user’s “friend ID” and a screenshot of the offending behavior. Just today, MySpace posted a new harassment document in its help center, directing users to either block the harassing user or, if in danger, call 911—no longer mentioning reporting the behavior to MySpace at all. MySpace says, “We promise to check it out and do our best to get back to you within 48 hours,” which is the closest thing to a timeframe any of these sites has stated.

Twitter

Copyright Infringement

Twitter also has an automated DMCA takedown notice webform, and it lists an email address that can be used for such complaints in the event the webform does not work. No timeframe for responses is mentioned.

Defamation

Twitter only responds to serious, violent threats and violations of personal privacy, period. Everything else, you should just block. No mention of timeframes for Twitter responses.

Flickr

Copyright Infringement

Flickr has, surprisingly, no clear DMCA takedown notice procedures of its own. Instead, it links to a page on filing DMCA notices with Yahoo!, which lists a mailing address, phone number, fax number, and email address (in that order) to contact. No mention of a timeframe for a response.

Defamation

Flickr uses Yahoo!’s Terms of Service, whose closest approach to defamation is that users agree not to “‘stalk’ or otherwise harass another.” Flickr has a “report abuse” button for reporting this behavior, with no clear timeframe for a response.

Google+

Copyright Infringement

Google+ has no real procedure for this (as it’s not really an issue yet); it seems DMCA notices must be sent directly to another Google address. Copyright isn’t mentioned anywhere in its User Content and Conduct Policy. Based on anecdotal evidence, Google has removed offending content in six days, after at least something of a conversation with the complainant (no auto-removal systems in place).

Defamation

Google+ has “Report Abuse” links for what its User Content and Conduct Policy calls “violent or bullying behavior.” Libel, per se, isn’t mentioned. No idea of a timeframe for responses.

Summary

As shown by the disparities between services’ treatment of DMCA takedown notices and claims of libel/defamation, websites are much more interested in protecting copyright claims, especially those of big companies, than in protecting the average person from libel. But we can’t blame these websites; their actions are a direct response to the differences in liability created by the DMCA and the CDA. If we want to see change, it needs to be reflected in our laws. Should websites more strictly police alleged libel, or does that risk encouraging censorship of free speech? Should websites put more effort into discerning the validity of copyright infringement claims, or does that disincentivize progress? Wherever we stand, if we don’t agree with the system as it is now, we must reevaluate our legal policies, because the implementations reflect the laws, and the laws are clear: copyright is protected from infringement at all costs, but the reputation of the individual is not protected from defamation.
