Please Don’t Kick Gingers – by “Jennifer S – YLT2012”

As a girl whom some may call “ginger,” with hair that occasionally looks red in certain lighting, I vividly remember the origins of “Kick a Ginger Day.” Thankfully no one kicked me, likely because of what I like to call the Gendered Ginger Effect (see Exhibit 1).
Exhibit 1: Gendered Ginger

The anti-ginger sentiment can be traced to the 136th episode of my favorite television show of all time, South Park. Aired on November 9, 2005, the episode, titled simply “Ginger Kids,” depicts Eric Cartman leading a Nazi-like campaign against pale, freckly redheaded kids, arguing that gingers are inhuman and soulless.

Exhibit 2: The Gingers of South Park

South Park’s devoted fan base typically ranges from high-minded intellectuals laughing at social commentary (I like to think this is me) to mindless high schoolers laughing at fart jokes. Interestingly, as a precocious 14-year-old, I thought the episode was about how a charismatic leader with violent ambitions can influence the masses. Instead, the more juvenile fan base latched onto the anti-ginger frenzy in earnest. Over the years this spawned a series of Facebook event pages, leading to the naming of November 20, 2008, as “National Kick A Ginger Day.” By that date, more than 5,000 people internationally had joined “Kick a Ginger Day.”

The participation in “Kick a Ginger Day” didn’t end on a Facebook wall. On November 20, 2008, redheaded kids in the United States, Canada, and the United Kingdom faced taunts, torments, and violence from their classmates at school. One high school in Vancouver suspended 20 kids for beating up their ginger classmates. School administrators were baffled: how could an online group spur such violence? Did assaults on gingers, driven by “Kick a Ginger Day,” amount to hate crimes?

The “Kick a Ginger Day” event was created by a 14-year-old Canadian boy who faced an investigation for provoking what officials believed to be a rash of hate crimes across national boundaries. The boy has since apologized, and the page is no longer on Facebook. Ginger kicking did not end in 2008, however, and assaults have continued annually on November 20th. An entire website, www.kick-a-ginger-day.com, is devoted to the ongoing “celebration” of kicking gingers.

A now-infamous 2010 YouTube video, posted by a frustrated redheaded boy, further fueled online ginger hatred.

Exhibit 3: Gingers Do Have Souls

Ginger-bashing online escalated into quite literal ginger-bashing in the real world. What started as a comical commentary on racism in the fictional world of South Park evolved into the beating and bullying of real high school kids through the dissemination of hatred via social media. This is not quite cyber-bullying: these ginger kids were facing the brunt of the beatings in school hallways, not in online messages. This would be difficult to bring to trial as an instance of defamation, since no particular redheaded individuals were targeted in the online speech against gingers. This is, however, certainly hate speech: it legitimizes violence against a group of people. Free speech online does not equate to the free battery of ginger kids.


Final Project: Defamed (Part 1) – by “John G”

As a suite, we decided to write a rap as an educational piece, lecturing small children on the risks of hateful speech and defamatory claims against others. The introduction begins with a terse explanation of defamation in U.S. law and common defenses in court. Transitioning into the topic of defamation per se, the rap then explains how defamation per se differs from ordinary defamation: specifically, that damages are presumed for defamation per se.

Utilizing celebrity cameos, the rap introduces the four specific categories of defamation per se and details the circumstances under which each applies, along with notable exceptions. Explicitly, the four categories are allegations or imputations injurious to one’s profession, of criminal activity, of loathsome disease, and of unchastity, as duly noted in the rap’s chorus.

In addition to the four instances of defamation per se, Internet libel laws are also discussed as a means of exhibiting the relevance of defamation laws in modern culture and technology.

We aptly decided to construct this project as a rap song in order to cast the subject matter of defamation into the medium of aggressive hip-hop, a genre often plagued by defamation, thus creating a parody of the genre and of defamation itself – and allowing us to discuss and commit speech acts that might otherwise be construed as defamatory.

With much serendipity, we invited many famous artists from the hip-hop industry to spit their game on this track. In a surprising turnout, we were able to have featured performances by The Ying Yang Twins, Chris Ludacris Bridges, Nicki Minaj, Rick Ross, Eminem, T-Pain, Dr. Dre, Jamarius Brahamz, Gangreeeeeeen, and Notorious B.I.G. (posthumously). Unfortunately we could not produce a promotional video due to scheduling conflicts and the fact that one individual is currently deceased. Much to our surprise, our producers have signed a contract for another track to be released in the near future. Follow us on Twitter at twitter.com/FratCity.

Here is the link to the song: http://soundcloud.com/defamed-part-1/defamed-part-1-final

Jamar Bromley
Matthew Prewitt
John Greenawalt

Accountability and the Internet – by “William K”

A Slanderous Thing Happened on the Way to the Forum

As we all know, Section 230(c) of the Communications Decency Act of 1996 made it so that websites couldn’t be sued for hosting user comments and submissions the way they could be if they were hosting their own original material. This made sense at the time for a variety of reasons. A website couldn’t be expected to monitor and filter every incoming post. Additionally, holding a website accountable for material posted by a third party because the website made some effort to filter out objectionable material would discourage websites from making any effort to filter at all. However, what this legislation and the thinking around it failed to predict was the emergence of websites that existed solely to spread gossip and other malicious postings. While there are sites and forums that host a combination of useful and derogatory messages, sites have emerged since the original legislation that prosper solely by providing an unfettered forum for such messages. Sites like AutoAdmit, CollegeACB, and others are able to thrive simply because they promote gossip and unsubstantiated rumors.

In Stratton Oakmont, Inc. v. Prodigy Services Co., the court ruled that Prodigy was liable for postings made on its website because it acted in an editorial role by attempting to remove some messages. The controversy caused by this decision led to the passage of Section 230(c), granting the provider of an internet service immunity if the information was “provided by another information content provider.” This seemed logical because companies like Prodigy didn’t depend on defamatory information or postings for their business. In contrast, sites like AutoAdmit, CollegeACB, and JuicyCampus essentially advertised themselves as places to post and read defamation. They take Section 230(c) as carte blanche to provide the atmosphere for harmful behavior without providing any of it themselves, thus rendering themselves immune.


Reputable Journalism

To me, these sites have a strong parallel to Napster, Kazaa, etc. — their entire appeal is derived from the fact that they let users submit and share content that is essentially illegal. Had either Napster or AutoAdmit originally committed the copyright infringement or written the libel instead of merely providing an avenue to do so, it would have been obvious that they should be found legally responsible. However, because they’re simply the conduit by which illegal material is disseminated, a much grayer area emerges. Much like Kazaa, where the vast majority of files were found to be copyrighted and thus illegal to share, gossip sites rely on slanderous rumor and gossip to be successful. They advertise themselves as places where illegal activity is encouraged, but gossip sites still hold immunity because of the provisions of Section 230(c).

An interesting parallel between the two types of sites can be found in the way they evolved. After Napster was shut down, Kazaa sought to fill the void. Kazaa bought ads so that when someone searched for “Napster Replacement” or similar terms, they were brought to the Kazaa download site. In that case, these actions were considered “Inducement to Copyright Infringement.” Somewhat similarly, after JuicyCampus closed, the founders of CollegeACB made a deal with the founders of JuicyCampus to redirect traffic from Juicy to College. Why is this not considered “Inducement to Defamation” or “Inducement to Libel”? Peter Frank, CollegeACB’s founder, was clearly trying to mimic the success of Matt Ivester, JuicyCampus’s founder.

They Even Pose Kinda Similarly

Obviously, there is a distinction between gossip sites and P2P sharing sites. P2P sites are much easier to prosecute because either something is copyrighted or it’s not; there isn’t the same gray area over whether the law has been broken as there is with the harder-to-define crime of libel. However, the way that gossip sites solicit, encourage, and depend on defamation draws inarguable parallels with P2P sites and copyright infringement. Yet unlike the artists whose music was being pirated, those slandered on gossip sites have no huge corporations looking out for their interests. There’s no Brittan Heller & Heide Iravani Industry Association of America to sue these websites and push for legislation when someone gets called a whore.


If This Were Truly Representative of College ACB, Every Figure Would Have a Gun or Their Genitalia Out


Adapting to an Anonymous World

One of the things that makes the internet so susceptible to slander and defamatory remarks is anonymity. It’s obvious that people feel emboldened online to post things to a public forum that they’d never go around shouting in the dining hall. The end result is an outpouring of hateful and spiteful messages that don’t appear with the same frequency or regularity outside of cyberspace. This breeds a more malicious, more harmful culture.

However, with this heightened tendency to post derogatory and embellished things comes a heightened degree of skepticism on the part of readers. Just because I read about a girl’s lascivious nature online doesn’t mean I’m going to believe it, just as hearing your school’s big gossip, known for making things up, say something about someone doesn’t mean you’d believe them. You learn to take everything on the internet, but especially things posted from anonymous sources, with a Roman legion’s annual salary worth of salt.

Like This, But You Know, Symbolically


Attributing Quotes to Anonymous and Unintended Consequences

While posters may be emboldened by their online anonymity, they generally overlook a simple fact: they’re not actually anonymous. Steps can be taken to secure one’s identity when posting online. A combination of proxy servers and anonymizing tools like Tor makes online posting significantly more anonymous. But the thing is, even ignoring the fact that these technologies usually won’t make users completely anonymous, most users don’t bother with these precautions anyway.
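
To make that concrete, here is a minimal sketch, in Python, of what “bothering with these precautions” actually involves. It assumes Tor is running locally on its default SOCKS port (9050) and that the requests library is installed with SOCKS support; the function name is my own, not part of any tool.

```python
# A minimal sketch of routing a web request through a local Tor proxy.
# Assumes Tor is running on its default SOCKS port (9050) and that the
# requests library has SOCKS support (pip install requests[socks]).
import requests

# "socks5h" (rather than "socks5") sends DNS lookups through the proxy too,
# so the target site never sees the poster's own resolver.
TOR_PROXY = "socks5h://127.0.0.1:9050"

def fetch_via_tor(url: str) -> str:
    """Fetch a URL with traffic and DNS routed through Tor."""
    proxies = {"http": TOR_PROXY, "https": TOR_PROXY}
    response = requests.get(url, proxies=proxies, timeout=30)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    # This page reports whether the request arrived via Tor -- the server
    # should see a Tor exit node's address, not the poster's own IP.
    print(fetch_via_tor("https://check.torproject.org/"))
```

Even this only hides the network path; it does nothing about a logged-in account, a browser fingerprint, or the content of the post itself, which is part of why such precautions rarely make anyone completely anonymous.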

There’s something unsettling about the internet. Without directly interacting with another human being, without speaking a single syllable or putting a single word on a tangible piece of paper, from the comfort and solitude of one’s own home, we can still spread our opinions or rants to the entire audience of the internet. But people don’t take the time to think about how clicking submit on a website will immediately and permanently put their thoughts or slander out into the world.

Anonymous, Just Ignore the William K in the Top Right

Posters are emboldened to post things they wouldn’t say because posting online doesn’t seem like it could have the same ramifications as saying or writing something in the real world. However, as the litany of litigation bringing libel charges against “anonymous” users in the past few years has demonstrated, there can still be serious consequences to online postings. These lawsuits make sense: people are committing acts of defamation quite maliciously. If someone claims that you cheated on every test or that your accolades were undeserved, potential employers may question your character. This brings up an interesting distinction between real-world gossip and that which appears online. Whereas campus gossip used to be confined to the campus, it is now out there for anyone to see. Someone may post something malicious as part of a petty fight, meant solely to be read by the poster’s classmates. But these posts are accessible to anyone — parents, school administrators, and perhaps most worrisome (to Yale kids anyway), future employers. However, as I pointed out in the previous section, people generally don’t take things posted on the internet anonymously as seriously as they would claims from a credible source. In this way, online libel poses a threat, but not to the same extent as defamation in the physical world.

However, the fact that people can be prosecuted for things they post “anonymously” online does bring worries about the extent to which the government can uncover information about a poster. Something posted online doesn’t necessarily reflect a person’s mindset, beliefs, or their intentions. In the case of Heller and Iravani, AutoAdmit posts about raping one of the women probably didn’t indicate any actual intent to commit rape. These posts were disgusting, vile, and clearly unacceptable, but shouldn’t be taken at face value. These posts are certainly threatening, but they don’t constitute a threat in the sense that the poster had the determination or intention of committing the act. Should they be punishable? Probably. Do they deserve the same scrutiny as other threats? Probably not.

It is in the previous example that we see a problem with the government being able to know exactly who posted everything. In Watts v. United States, the Court addressed hyperbolic threats made against President Johnson. In that case, the defendant jokingly made a remark insinuating he would kill the President, after which he looked down the barrel of an imaginary rifle. The comment was met with applause and laughter and didn’t constitute a serious plan to assassinate LBJ. It was within this context that the Court was able to decide that Watts didn’t actually intend to kill the President. The Court explicitly said, “taken in context, and regarding the expressly conditional nature of the statement and the reaction of the listeners, we do not see how it could be interpreted otherwise.” The Court ruled that hyperbolic threats against the President needed to be differentiated from legitimate threats. Online, it can be extremely difficult to understand the context of comments, and things can easily be taken more or less seriously than they should be. The reactions of an online audience will also differ drastically from the reactions of a physical audience. Whereas in the physical world a comment is made to a predetermined audience with a certain, somewhat known mentality, postings online are generally made to an unknown audience that can actually change and interact with the post after it has been made. Whereas a bunch of nutjobs won’t be able to bust into a student discussion after someone makes a joke about an assassination and then elaborate on a real plan, radical internet users could come across a joking post and take it in an unintended direction. While federal investigations would hopefully be able to determine the nature of the comments, it’s an uncharted area. There are a lot more unknown factors at play on the internet.

Hey Guys. Sorry, You Weren't My Intended Audience.

Let’s Wrap It Up

Essentially, what all of these issues come back to is accountability. For whatever reason, we tend to think of the internet as a place separate from the physical world. It’s the same ridiculous mistake John Perry Barlow made in his A Declaration of the Independence of Cyberspace, and it affects us every day. You can claim that there’s no government to stop us or to censor us, but let’s face it: as long as you still live on this planet, there’s someone who can punish you for what you do. While many of the things that happen online may stay confined to cyberspace, there’s certainly no guarantee of such a thing. Because so much of what’s said on the internet is so inconsequential, some people might start to think there are no consequences online, but that’s obviously not the case. When you post something, it can be read, it can be discussed, and you, the physical you, not some avatar or username, may have to pay the consequences.

Julian Assange: Champion of Freedom of Expression or Criminal? – by “Nick M”

Censorship vs. Freedom of Expression

In the United States, freedom of speech is the very first protected right listed in the Bill of Rights. As the First Amendment reads:

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

The Supreme Court has offered many interpretations of what “freedom of speech” in the First Amendment actually implies. Although one might think of freedom of speech as closely associated with our identity and heritage as Americans, and thus a universally accepted right, there have been several controversial rulings on this issue, especially in cases where one individual’s freedom of speech might be perceived to infringe on others’ preserved rights (i.e., if my freedom of speech to shout fire in a movie theater infringes on your freedom to not be trampled).

Indeed, practically speaking, it is apparent that not all types of speech can be tolerated if a society is to function. Some forms of speech are accordingly plainly and thoroughly outlawed by US law, such as fraudulent advertising, child pornography, fighting words, words used in a criminal transaction, unlicensed broadcasts, copyright infringement (hello DMCA), libel, slander, and threats, among others. Most of these forms of speech are restricted because of a compelling government interest: the US government may regulate or censor speech if it has a compelling interest in doing so, if the speech is a matter of public concern, or if it threatens national safety.


All the cool kids are doing it

For example, it is even considered legal to express certain forms of hate speech as long as one does not actually carry out the activities or encourage others to do them. However, once these groups overstep their boundaries and their actions can be interpreted as violating a compelling government interest, they can be (and have been) regulated. For example, the Ku Klux Klan has been denied certain marching permits (a real tragedy), and the Westboro Baptist Church (which became famous recently for protesting military funerals) was sued for its activities (however, the ruling was later controversially overturned on appeal in the US Supreme Court). These examples illustrate that while legal history has defined certain finite limitations on the freedom of speech, courts have ultimately held that for freedom of speech to exist, it must necessarily be protected enough to allow the unpopular, the offensive, and the distasteful.

Background on Wikileaks

The “Wikileaks controversy” is a great example of the tension between this freedom of expression and censorship. Wikileaks (NOT TO BE CONFUSED WITH WIKIPEDIA) is an international non-profit organization run by founder, editor-in-chief, and director Julian Assange that publishes submissions of private, secret, and classified media from anonymous news sources, news leaks, and whistleblowers. Since it went online, the site has published an extensive catalogue of secret material, ranging from materials on procedures at Guantánamo Bay to the contents of Sarah Palin’s private email account. Look at Trig!!!!

What Assange and his Wikileaks team are doing is technically not illegal under international law or under various countries’ laws (more to come on this later); nonetheless, several nations (notably Assange’s home country of Australia, along with China, Germany, Iceland, Thailand, and the United States) have limited access to the site, or in some cases blacklisted it and completely blocked all traffic. The United States has blocked access to the site in various government agencies, in addition to issuing several other 1984-reminiscent demands. I’m insulted they didn’t threaten Yale. (Although this claim was later refuted by government officials…).

Larry Flynt Reincarnate – Another Champion for Freedom of Expression?

Julian Assange

... and Larry Flynt, separated at birth?

Assange himself believes that Wikileaks’ role (and his on Earth, apparently) is to expose injustice, not to provide an even-handed record of events. In an invitation to potential collaborators in 2006, he wrote, “Our primary targets are those highly oppressive regimes in China, Russia and Central Eurasia, but we also expect to be of assistance to those in the West who wish to reveal illegal or immoral behavior in their own governments and corporations.” He has argued that a “social movement” to expose secrets could “bring down many administrations that rely on concealing reality—including the US administration.”

Many agree that Assange’s work is beneficial and even noble, believing that by increasing the transparency of government operations, Assange will ultimately force governments to act more accountably. Calling Assange a “champion of freedom of speech,” proponents of his work believe that Assange provides information the public has a right to know, and that both international and US efforts to suppress his efforts constitute a significant threat to freedom of expression worldwide. Proponents of his cause believe that the right to freedom of information outweighs the potentially dangerous effects of revealing US military strategy, pointing to the fact that none of the published cables were kept at the highest levels of secrecy and inferring from this that nothing truly sensitive has been revealed. Organizations such as Reporters Without Borders (RSF) have condemned the “blocking, cyber-attacks and political pressure” directed at the cables’ website from all over the world, and expressed concern at comments made by American authorities “concerning Wikileaks and its founder Julian Assange.” “This is the first time we have seen an attempt at the international community level to censor a website dedicated to the principle of transparency,” RSF said.

Indeed, Assange’s work has met with some international acclaim, as the Wikileaks founder has received a number of awards and nominations, including the 2009 Amnesty International Media Award for publishing material about extrajudicial killings in Kenya and Readers’ Choice for TIME magazine’s 2010 Person of the Year.

Or a Criminal?

However, despite the seemingly good intentions behind it, Assange’s work has had serious repercussions. Some of the information his organization has published includes confidential military documents that reveal a great deal of US strategy and policy. By making this information publicly accessible, Assange’s work has potentially compromised US national security, endangering not only the lives of soldiers who rely on the secrecy of these documents, but also the lives of citizens at home who are now more vulnerable to attack.

Claiming that his information compromises national security, the US Justice Department has attempted to prosecute Assange under the Espionage Act, which makes it broadly illegal to compromise national security by interfering with the US military. In 2011, an unknown person in Cambridge, Massachusetts, received a subpoena regarding the Espionage Act’s “conspiracy” clause, 18 U.S.C. § 793(g), as well as the federal embezzlement law, 18 U.S.C. § 641, a statute used in some other Espionage Act-related cases. A grand jury has begun meeting in Alexandria, Virginia, to hear evidence and decide whether an indictment should be brought.

However, critics of the legal approach of charging Assange under the Espionage Act argue that the broad language of the Act could make news organizations and anyone who reported, printed, or disseminated information from Wikileaks subject to prosecution as well. This slippery-slope argument might ultimately undermine the attempt to prosecute Assange, as further spinoffs of this type of reasoning might be interpreted as unacceptably limiting freedom of expression (if a magazine publishes an article from a magazine that publishes an article from a magazine that publishes an article from Wikileaks – WHERE DOES IT END!!?!?!).

Despite the steel-clad safe haven these concerns afford him, Assange has faced a growing number of other problems, including rape charges in Sweden (Update: Good news for Assange! The rape charges have been dropped… but replaced with…?) and having his assets frozen by a number of banks. He does not operate out of an office, but rather remains on the move for extended periods of time in order to avoid extradition to countries that would be eager to repay him for his “noble work.”

My Take

Has the US acted correctly in its response to Assange and Wikileaks? Should our censorship laws be altered to prevent this type of unwanted freedom of expression? Does their inability to prosecute (as of yet) mean that Assange is without blame?

Yes, no, and probably not. The fact remains that our First Amendment technically protects his right to freedom of expression, and, just like protecting the right to protest military funerals, if we want to stay true to our tradition of maintaining a society of free speech, Assange should not be prosecuted for his Wikileaks-related work. Our censorship laws should not be fundamentally changed in order to close a loophole that Assange is seemingly exploiting. Thus, the fact that the US (at best) is proceeding cautiously with charging Assange is the correct response if we wish to remain true to our traditions.

This does not, however, mean that I believe Assange to be a noble champion of our First Amendment rights. I believe his actions to be wrong, plain and simple. The fact remains that for a government to function properly, not all information can or should be transparent. As a citizen, I willingly abdicate my right to know this information, trusting my government to make certain determinations for me. Voting with my feet, I can choose what country to live in and what government to trust (granted, this is not possible for everyone, but the concept is clear). Thus, Assange’s actions don’t increase the global level of democracy through transparency of government operations, in my opinion, but rather make the world a more dangerous place for me to live in, as a result of the increased knowledge of the US military’s vulnerabilities.

Defame Monster – by “Jake E”

Defamation

Situation: someone, anonymously, begins posting things about you on the Internet. Bad things, things that make you look like a scumbag. None of it is true, of course, but when a potential employer Googles you or grandma learns how to use Spokeo (tagline: “Not your grandma’s white pages.”) or one of the other dozens of data aggregators, you could be in very real, very big trouble for something you didn’t do.

What can you do? Well, obviously, your first concern is getting the website to take down the material. So you should just talk to the website, right? Unfortunately, it’s not always that simple.

It’s true that many websites have mechanisms built in for reporting defamation. YouTube allows users to “flag” things like videos containing hate speech and user harassment. Facebook has similar flagging capabilities for hate speech and bullying. But what actually happens when you use these mechanisms to report your defamation?

Before we dive into this, let’s look at the legal basis for all of this.

The Law

What is defamation? The Restatement (Second) of Torts, § 559, defines defamatory communication like so: “A communication is defamatory if it tends so to harm the reputation of another as to lower him in the estimation of the community or to deter third persons from associating or dealing with him.” Simple enough.

But how is liability for defamation determined? § 558 of this Restatement states:

To create liability for defamation there must be:

  1. a false and defamatory statement concerning another;
  2. an unprivileged publication to a third party;
  3. fault amounting at least to negligence on the part of the publisher [with respect to the act of publication]; and
  4. either actionability of the statement irrespective of special harm or the existence of special harm caused by the publication.

Okay, well that seems fair. I can haz lawsuit now?

Not so fast. The Communications Decency Act (CDA) of 1996 adds a twist, with § 230 (c), which says: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” What does that mean?

Traditionally, publishers are held liable for content they publish—they have a responsibility to make sure nothing in it is defamatory, among other things. Distributors, on the other hand, are held to a much lesser standard of liability because they’re merely “passive conduits,” as Jonathan Zittrain calls them, though some liability is still maintained. CDA § 230(c) thereby says that no service provider (here: website) can be treated as either the publisher or original writer of content that someone else posted on that site. This is to encourage websites to screen themselves for content without suddenly jumping from weak distributor liability to strong publisher liability—thus, websites can filter their own user-submitted content without fearing that if they miss something they’ll have a lawsuit on their hands. This in turn encourages free speech on the Internet, because if sites instead had to filter all potentially objectionable content, everything would be heavily censored. (The reason filtering of any sort is encouraged by Congress is the fear that a lack of filtering will lead to children stumbling upon pornography.)

So a website can’t be held to publisher liability standards for things its users post. But it still must adhere to weak distributor liability standards, right? Well, no. In Zeran v. America Online, Inc. it was found that the failure to mention distributors in § 230 (c) was a Congressional oversight and, in fact, such a website can’t be held liable even as a distributor. Something posted is entirely the poster’s fault.

So if you’ve been defamed, you can at least go after the person who’s been spreading lies about you on message boards, right? Well, maybe not. Barrett v. Rosenthal expanded on the Zeran decision, noting in addition that because the CDA makes no distinction between an “active” and a “passive” user, anyone posting anything online isn’t liable for it so long as they’re not the original poster. (The court notes that at some point a line will have to be drawn for when reposting rises to the level of its own original act of content generation, but given the facts of the case, “we need not consider when that line is crossed.” To date, it still has not been defined.)

So the person spreading lies about you isn’t liable as long as someone else said it first, and the websites on which all of these people are posting the lies can’t be held liable either. You can only sue the original poster for defamation, and that’s that.

But surely websites are cooperative if you’ve got a real claim that someone is defaming you? Well, perhaps. Before we answer this, let’s look at the other reason a user-generated-content website would have its content filtered or removed: copyright.

The DMCA

The Digital Millennium Copyright Act (DMCA) of 1998 paved the way for copyright holders, in particular music and movie distribution companies, to strongly protect their copyright claims, even in an age when the spreading of such files on the Internet has become commonplace. If YaleLawTech Records holds the copyright for “Don’t Mess with that CSS,” a popular song that they’ve found has been illegally used as a catchy background to a YouTube video, YLT Records simply has to send a takedown notice, as specified in the DMCA, to YouTube. YouTube even has a standard webform to fill out if you don’t want to go to the effort of hiring a lawyer.

When it receives such a takedown notice, it is in the best interests of YouTube (and similar sites) to simply remove the allegedly infringing content, without any examination of the validity of the copyright claim. By removing the video, YouTube becomes immune to all copyright infringement liability under the DMCA, without which it would be facing thousands of copyright infringement suits. If, on the other hand, YouTube chooses not to remove the content immediately, even if such inaction is justified, YouTube would be inviting lawsuits and legal fees that it could live without.

In short, if a website receives a DMCA takedown notice, it’ll take down the questionable material immediately, no questions asked, without worrying whether or not the material was actually infringing upon a copyright and the takedown notice was actually valid.
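
To illustrate that incentive structure, here is a hypothetical sketch in Python (the notice fields, names, and logic are my own invention for illustration, not any site’s actual system) of the “remove first, ask nothing” policy the safe harbor encourages:

```python
# A hypothetical sketch of the incentive the DMCA safe harbor creates:
# on receiving a notice, the cheapest safe response is to remove the
# content immediately, checking only that the notice is formally
# complete -- never whether its claim is actually true.
from dataclasses import dataclass

@dataclass
class TakedownNotice:
    content_id: str        # which video/post the notice targets
    claimant: str          # who claims to hold the copyright
    sworn_statement: bool  # DMCA notices must include a good-faith statement

def handle_notice(notice: TakedownNotice, content_store: dict) -> str:
    if not notice.sworn_statement:
        # Reject only formally defective notices.
        return "rejected: notice is missing required elements"
    # Removing promptly preserves the safe harbor; investigating the merits
    # would invite lawsuits and legal fees, so the merits go unexamined.
    content_store.pop(notice.content_id, None)
    return f"removed {notice.content_id} on {notice.claimant}'s say-so"

if __name__ == "__main__":
    videos = {"vid42": "catchy video backed by 'Don't Mess with that CSS'"}
    notice = TakedownNotice("vid42", "YLT Records", sworn_statement=True)
    print(handle_notice(notice, videos))  # the video is gone, validity unchecked
```

Note that nothing in this flow ever asks whether the claimant actually holds the copyright; that is exactly the asymmetry the rest of this post explores.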

Comparing Copyright to Defamation

So a website will take down allegedly copyrighted materials immediately because of the safe harbor the DMCA grants it for doing so. Shouldn’t it do the same for defamation? Unfortunately, because of the clear protections of the CDA, most websites have very little incentive to do so. If an item defames you, the website hosting it is immune from all defamation liability, and as long as keeping the material up isn’t actually harming business (as hate speech might), the site has little reason to remove it. So what do different websites actually do?

Facebook

Copyright Infringement

On filing DMCA takedown notices, Facebook says: “We will make every effort to review your report as quickly as we can. So long as everything appears to be in order, we will promptly remove or disable access to the content. We will also notify the user and, if requested, provide your report to the user. We will terminate repeat infringers when appropriate.”

Facebook has an automated DMCA form for filing DMCA takedown notices, and additionally lists the mailing address of its DMCA Designated Agent and has other info on its copyright help page.

Defamation

Facebook’s Terms of Use state: “You will not bully, intimidate, or harass any user.” How closely this ties to defamation is unclear. Further, if the person being defamed is not him/herself a Facebook user, the only relevant part of the Terms is: “You will not use Facebook to do anything unlawful, misleading, malicious, or discriminatory,” which is weak and indirect in this case.

To report “abuse” (violations of the Terms of Use), Facebook has a “Report _____” link below every image, message, video, event, etc. Nowhere does Facebook mention how long they will take to respond to reported content, and many times they state that “reporting … content doesn’t guarantee that they or it will be removed.”

YouTube

Copyright Infringement

YouTube is famous for its quick responses to DMCA takedown notices. It has, as mentioned above, a copyright complaint webform that can be filled out in less than a minute. For mass claims, YouTube created a Content Verification Program, whereby copyright holders (read: record labels) can submit an application verifying that they are a copyright holder (which implicitly acknowledges that the standard copyright complaint webform doesn’t really check one’s identity) and then use YouTube’s “industry-leading Content Identification and Verification Tools.” These tools essentially let copyright holders have YouTube automatically scan videos for copyrighted content and then automatically implement predetermined actions to monetize, record tracking data on, or block the videos infringing their copyrights. With this, copyright holders no longer even need to find or alert YouTube to copyrighted content; they can simply sit back and have search spiders make them money.
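
YouTube hasn’t published how these tools work internally, but the workflow described above can be sketched like this (the fingerprinting stand-in, the catalog, and the policy names are all illustrative inventions, not YouTube’s actual internals):

```python
# A toy sketch of the automated scan-and-act workflow described above.
import hashlib

# Hypothetical reference catalog: fingerprint -> (rights holder, chosen policy).
# Each rights holder pre-selects what happens when its content is matched:
# "monetize", "track", or "block", per the post above.
REFERENCE_DB: dict[str, tuple[str, str]] = {}

def fingerprint(audio: bytes) -> str:
    # Stand-in for a real acoustic fingerprint; a plain hash only matches
    # exact copies, whereas real fingerprints survive re-encoding and edits.
    return hashlib.sha256(audio).hexdigest()[:12]

def register(audio: bytes, holder: str, policy: str) -> None:
    """A rights holder submits reference content plus a predetermined action."""
    REFERENCE_DB[fingerprint(audio)] = (holder, policy)

def scan_upload(audio: bytes) -> str:
    """Scan an upload; if it matches, apply the holder's chosen policy."""
    match = REFERENCE_DB.get(fingerprint(audio))
    if match is None:
        return "clear: no match found"
    holder, policy = match
    return f"{policy}: matched content claimed by {holder}"

if __name__ == "__main__":
    song = b"stand-in audio bytes for 'Don't Mess with that CSS'"
    register(song, "YaleLawTech Records", "monetize")
    print(scan_upload(song))  # monetize: matched content claimed by YaleLawTech Records
```

The key design point is that the rights holder’s policy is applied automatically at scan time, with no human review on either side, which is exactly the “sit back and let the spiders work” dynamic described above.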

Defamation

YouTube’s Community Guidelines (themselves a section of the Terms of Service) say: “Things like … harassment … are taken very seriously. Anyone caught doing these things may be permanently banned from YouTube.” No timeframe, no standards, nothing.

YouTube, like Facebook, has a system of having users “flag” content as “inappropriate” and says only that its staff reviews flagged videos (no mention of comments, etc.) “24 hours a day, seven days a week.” Great.

MySpace

Copyright Infringement

MySpace seems to be behind the times in not having any automated webform for submitting DMCA takedown notices (its fastest method is writing your own notice and emailing it to them). For further evidence that MySpace is behind the times, note that its second method of communication is “facsimile.” MySpace does not give a timeframe for how quickly it will respond to such notices.

Defamation

Harassment and cyberbullying are against MySpace’s Terms of Use, and such activities can be reported by using MySpace’s built-in all-purpose “Contact MySpace” form and including the offending user’s “friend ID” and a screenshot of the offending behavior. Just today, MySpace uploaded a new document in its help center on harassment, directing users to either block the harassing user or, if in danger, to call 911—no longer mentioning reporting this behavior to MySpace. MySpace says, “We promise to check it out and do our best to get back to you within 48 hours,” which is the closest thing to a timeframe any of these sites have stated.

Twitter

Copyright Infringement

Twitter also has an automated DMCA takedown notice webform. It also lists an email address that can be used for such complaints in the event the webform does not work. They don’t mention a timeframe for responses.

Defamation

Twitter only responds to serious, violent threats and violations of personal privacy, period. Everything else, you should just block. No mention of timeframes for Twitter responses.

Flickr

Copyright Infringement

Flickr has, surprisingly, no clear DMCA takedown notice procedures. Instead, it has a link to a page on filing DMCA notices with Yahoo!, which lists a mailing address, phone number, fax number, and email address (in that order) to contact. No mention of a timeframe for a response.

Defamation

Flickr uses Yahoo!’s Terms of Service, whose closest thing to a defamation clause is that users agree not to “‘stalk’ or otherwise harass another.” Flickr has a “report abuse” button to use for reporting this behavior, with no clear timeframe for a response.

Google+

Copyright Infringement

Google+ has no real procedure for this (as it’s not really an issue yet); it seems DMCA notices must be sent directly to another Google address. Copyright isn’t mentioned anywhere in its User Content and Conduct Policy. Based on anecdotal evidence, Google has removed offending content in six days, after at least something of a conversation with the complainant (no auto-removal systems in place).

Defamation

Google+ has “Report Abuse” links for what its User Content and Conduct Policy calls “violent or bullying behavior.” Libel, per se, isn’t mentioned. No idea of a timeframe for responses.

Summary

As shown by the disparities between services’ treatment of DMCA takedown notices and claims of libel/defamation, websites are much more interested in protecting copyright claims, especially those of big companies, than in protecting the average person from libel. But we can’t blame these websites; their actions are a direct reaction to the differences in liability granted by the DMCA and the CDA. If we want to see change, it needs to be reflected in our laws. Should websites more strictly police alleged libel, or does that run the risk of encouraging censorship of free speech? Should websites put more effort into discerning the validity of copyright infringement claims, or does that disincentivize progress? Wherever we stand, if we don’t agree with the system as it is now, we must reevaluate our legal policies, because the implementations reflect the laws, and the laws are clear: copyright is protected at all costs from infringement, but the reputation of the individual is not protected from defamation.

The difference between hate and stupidity – by “Jennifer F”

Hate speech seems to be the hot-button issue of the moment. From the anti-gay bullying that led to several highly publicized suicides of gay teens last month to the current DKE fiasco, this may be the one topic that everyone has an opinion on. At the end of the day most of us won’t take up the cause of net neutrality or copyright law, but as members of society we have a vested interest in creating a hospitable environment for ourselves and others. However, the use of the “hate speech” discourse hinders meaningful conversation.

As the recent DKE controversy has illustrated, misogyny and inappropriate conduct are still widespread, even among us “enlightened” Yalies. Rape isn’t funny. Necrophilia is pretty messed up. But labeling the actions of the DKE pledges as “hate speech” and a “call to violence” is completely counterproductive. It prevents dialogue surrounding the specific misconduct and instead turns to character judgments of the individuals involved. The guys of DKE said stupid things. Really, really stupid, offensive things. But calling it hate speech implies that these men are violent, woman-hating sexual deviants. To them, their chants were funny. To the rest of us, not so much. Rather than demonizing them as misogynists, Yale could benefit more from a dialogue and culture that makes it clear that such behavior is not amusing to the vast majority of us. “No means yes, yes means anal!” is an immature and offensive slogan, but it does not translate directly to “I hate women and advocate rape and other forms of sexual violence.” It just means that some people have really bad taste and a messed-up sense of humor. Likewise, “that movie was so gay” is intended to mean it was lame, not that it had a subversive homosexual agenda and homoerotic themes which offended the homophobic sensibilities of the viewer. Is the phrasing inappropriate? Yes. Offensive? Absolutely. But painting this conduct as hate speech turns it into an all-or-nothing debate in which neither side benefits.

The derailing potential of such black and white thinking is evident in the campus response to the DKE pledge incident. DKE looked like a bunch of jerks. But then the Women’s Center declared that the behavior was violent and imminently threatening. To most Yalies (us women included), that’s just not the case. This polarization has actually hurt the dialogue by reinforcing the stereotype of the Women’s Center as somewhat radical and just a tad overly sensitive, and making us sympathize with the plight of the beleaguered frat boys. Not what the WC was going for. (Not to knock the Women’s Center, their sponsorship of a forum on the sexual climate was a positive step forward.)

The debates around hate speech tend to take the focus off of the misconduct and instead focus on the values of the individuals involved. In D.C. v. R.R., the legal proceedings focused entirely too much on the personalities of those involved. D.C. and his parents insisted he wasn’t gay. R.R. tried to defend himself by pointing out his acceptance of a gay family member. While R.R. was clearly kind of a sick kid and a total jerk to boot, as a pluralistic nation we don’t have the right to regulate his views on homosexuality. Libel, sure. Threats of physical violence, of course. But whether or not he actually has a problem with homosexuals is not and should not be relevant to the case. There will always be homophobes and misogynists and racists, no matter how much we may wish otherwise. If we want freedom of speech and religion, we’re going to have to take the flip side of the coin as well and swallow the freedom to hate. But let’s not conflate insensitivity and immaturity with hate and the incitement of violence. Some people say really stupid stuff they don’t mean. And as for the ones that do mean it, well, haters gonna hate. Walk it off, little man.

GLAAD with Facebook – by “Malory W”

Last week Facebook announced new efforts to respond more quickly to hate speech that violates its terms of service. In the past couple of weeks Facebook has worked with members of GLAAD, the Gay and Lesbian Alliance Against Defamation, and other LGBT organizations to remove harmful and offensive posts that have plagued many of Facebook’s public pages. This effort comes on the heels of recent tragedies involving forms of cyberbullying that resulted in suicides, such as that of Rutgers freshman Tyler Clementi, who ended his life after his roommate used the Internet to shame and harass him over his sexual preference.

GLAAD initially got involved in cleaning up Facebook after receiving complaints from various members of the gay community about a Facebook page, established in memory of victims of anti-gay bullying, that became covered with derogatory hate speech and images. The organization responded by reaching out to Facebook and starting a dialogue on effective measures to control offensive and hateful posts. Facebook responded quickly, partnering with GLAAD to clean the site of anti-gay hate speech. Supporters of Facebook’s initial steps have ranged from Jarrett Barrios, President of GLAAD, down to Perez Hilton, the controversial celebrity blogger.

While this memorial page may have been a pretty clear-cut case of hate speech, other cases can be, and probably have been, more difficult to classify. Facebook has consistently said that it prohibits any kind of hateful content and that it has mechanisms in place to remove harmful posts as quickly as it can. However, it also emphasizes that its users are allowed to express unpopular opinions and that there must be a careful balance between free speech and removing hateful content. It seems likely that these two goals would occasionally, if not frequently, come into conflict.

While these are steps in the right direction in an attempt to curb cyberbullying and harassment, I can’t help but wonder how long it would have taken Facebook’s own “mechanisms” to target this page had a large and widely respected group such as GLAAD not stepped in. With Facebook users generating millions of new pieces of content every hour, it seems highly unlikely that Facebook can effectively monitor its content without massive help from Facebook users themselves. Thus, is there real hope for strict control over cyberbullying and Internet hate speech? Possibly, as long as Internet users themselves take an active role in reporting offensive actions. However, the conflict between defining something as free speech or hate speech will always leave some content unresolved.

Get a life, Shirvell. – by “Daniel C”

UMich recently inaugurated its first openly gay student body president, Chris Armstrong.  For the past month, Michigan Assistant Attorney General Andrew Shirvell—with no previous relation to Armstrong—has been running a highly offensive smear campaign against the student body president.  The front page of Shirvell’s Blogger site has “RESIGN” written across Armstrong’s face, accompanied by a rainbow Nazi swastika.  Additional comments refer to Armstrong as a “radical homosexual activist” and “Satan’s representative on the student council.”

Shirvell claims he has a problem with the “radical” gender-neutral housing policies that Armstrong is pushing for, while ignoring Armstrong’s other platforms to extend dining hall hours and financial aid.  He also attempts to justify his words by calling himself a rightful Christian citizen and a concerned alumnus of UMich.  Yet he does not know Armstrong personally and seems to be taking the agenda of the Michigan student government far too seriously.  He is a state official, and Armstrong’s policies have no bearing on him, as he is no longer a student at UMich.  This obnoxious example of cyberbullying shows that more needs to be done to punish perpetrators like Shirvell, who overstep the boundaries of free speech online.

Other than public humiliation, little has been done to punish Shirvell.  Attorney General Mike Cox has not fired his assistant.  When asked why, Cox says that he unfortunately cannot do anything because Shirvell “writes his blog during after-hours and is protected by the First Amendment.”  If what Shirvell says is true—that he only republishes some of the information from other postings online, such as from his Christian community—Shirvell is unfortunately provided some immunity by Section 230 of the Communications Decency Act.  However, Section 230 also states that it is the policy of the United States to “punish trafficking in obscenity, stalking, and harassment by means of computer.”  Shirvell is guilty of all three.  He has been seen obnoxiously screaming “Nazi!” at Armstrong’s political campaign events and has stood outside Armstrong’s house to videotape.  Even Armstrong’s family and friends have received abusive emails and Facebook messages.

Common law also provides a precedent in McEvoy v. Spencer, in which McEvoy was demoted from Deputy Chief to Captain because of speech activities that were harmful to the public workplace.  In order to mitigate the negative effects on the workplace, his employers, Mayor Spencer and Deputy Chief Christopher, were justified in demoting McEvoy to Captain.  Andrew Shirvell holds one of the most public positions in the state of Michigan.  If he can’t be fired, he should at least be removed from his position.

Please watch the following video to get an idea of what Andrew Shirvell is like.  An entertaining, but frustrating, video interview on AC360˚:

http://www.youtube.com/watch?v=PwObjKZg9Jw&feature=player_embedded#!

Shirvell is acting like an immature teenage blogger with an advanced vocabulary.  Thankfully his blog is now private.  He has taken personal leave from office and faces only a disciplinary hearing upon his return.  But he should be fired completely.  Otherwise, maybe the government should give him some more work to do; clearly the Assistant Attorney General has too much free time on his hands.

A Bitter Juice from JuicyCampus – by “Jeanne S”

On February 4, 2009, Matt Ivester, the founder and CEO of JuicyCampus.com, announced that JuicyCampus would be shutting down due to plummeting online ad revenue. In an ensuing Q&A, he insisted that a shortage of funds was the sole reason for the shutdown; no charges were ever brought against JuicyCampus for defamation, and the site did not lose any significant revenue from being banned at a few campuses.

While JuicyCampus may never have been brought to court, it was not for lack of trying. A 3/25/08 YDN article detailed efforts by the New Jersey and Connecticut attorneys general to submit JuicyCampus to investigation under consumer fraud laws, claiming that the site misled its users by failing to enforce its own terms and conditions. According to the attorneys general, the site claims that users may not post defamatory comments, but provides no mechanism for preventing them. Indeed, many hateful threads were created and sustained by Yale students about other Yale students, referencing looks, weight, sexual experience, etc. And while Dean Salovey remarked in the YDN article that he did not believe “censoring a web site [was] consistent with Yale’s free expression policies,” some of the comments were on a level of defamation similar to those detailed in Doe v. Ciolli, and could possibly have fallen under Connecticut statutes against defamation if the individuals targeted on JuicyCampus had brought their cases to court. However, the process for bringing such a case to court in the first place would have been painfully difficult, if not impossible, given JuicyCampus’ protection against liability and refusal to give up the identities of anonymous posters without a subpoena.

I will admit that in my hours of procrastination, I occasionally clicked around on JuicyCampus. At its best, it was entertaining, with threads like “hottest guy on campus,” “worst hookup story,” etc. But at worst, I came across a thread with the name of one of my close friends in the subject line, and some of the comments weren’t pretty (or remotely true). I didn’t tell her about the thread, and I’m not sure she ever knew. But this friend of mine is applying for jobs right now, and if that thread were still in existence on JuicyCampus, she might suffer irreparable damage in her job hunt because of some bored kid who had nothing better to do than post hateful, anonymous comments about her — comments which JuicyCampus made no effort to monitor, despite its promises in its terms of service.

JuicyCampus has since segued into CollegeACB, which claims to host a “higher level of discourse” than JuicyCampus. It employs a user-moderation button, allowing users themselves to report inappropriate posts to the webmaster rather than forcing the webmaster to serve as gatekeeper himself. It also requires you to create an account with your .edu email address. Perhaps luckily, CollegeACB never caught on at Yale the way JuicyCampus did. But on the flip side, the lack of Yale users means that the user-moderation button is effectively useless. As I scrolled down the list of threads just now, I found a post initiated on 9/1/10, titled “N***ers,” that read “Stop coming to Yale. You are ruining this school.” I tried to report the thread, but I was asked to create an account first, which I have zero interest in doing. Has CollegeACB risen above its predecessor? I think not.

Who’s Responsible for Harassment? – by “Shirley B”

Harassment is no new phenomenon on the Internet.  As astutely stated by Penny Arcade, normal person + anonymity + audience = Total ****wad. (Don’t click if cursing offends!)  But for websites that provide a platform for user-generated content, harassment has presented interesting new problems.  Are these websites liable for user content?  Are they simply providers of the platform on which users place content, or are they themselves content providers?

Two years ago a Twitter user was harassing a woman named Ariel Waldman over the social networking website.  After Waldman submitted a complaint to Twitter, an administrator removed the tweets from the public timeline.  However, when Waldman asked that the user be banned, she received a reply that the comments were not illegal, simply malicious, despite the fact that Twitter explicitly lists harassment as a violation of its Terms of Service and reserves the right to remove content and users in violation of those Terms (though it is not obligated to).  Twitter promised to revise its Terms of Service to clarify its enforcement policies, but would do nothing further, for fear of a lawsuit.  Jason Goldman, the product manager for Twitter, stated in a post at getsatisfaction.com that “As a communication utility, Twitter does not get involved in disputes between users over issues of content except in very specific situations. Twitter is a provider of information, not a mediator.”  In the same post, Goldman also stated that the company was working on improving its blocking feature, which could solve the problem of harassment.  Problem solved.  The issue was only with Twitter’s ToS, its enforcement of it, and its insufficient blocking tool.  Not quite.

John Dozier Jr., an expert in Internet law, says that Twitter may have surrendered its safe harbor under the Communications Decency Act when it edited the user’s page, which it did when it removed the tweets from the public feed.  “If they’ve edited content based on their subjective perspective, they put their immunity at risk and virtually their entire online business, because then they’d be liable to defamation claims or anything else that a publisher would,” Dozier says.  What if a website had knowledge of the harassment – had seen it, had watched it continue, and was fully aware of the problems it was causing – but did not take any action?  Could this lack of action, this implicit approval (or at least lack of disapproval) of the content, entangle it in the same liabilities that publishers face?  Madeline Rodriguez takes this a step further in her article “Reexamining Section 230 of the CDA and Online Anonymous Speech: Defamation on the Internet and the Websites That Facilitate It.”  She says, “if an ICP [internet content provider], and certainly an ICF [internet content facilitator], has reason to know, or anticipate, that at least some of the postings on their website are defamatory, false, anonymous, annoying, or harassing in nature, then they will be considered a co-author of the defamatory, false, anonymous, annoying, or harassing postings, and just as liable as the original poster, losing their CDA 230 immunity completely, and be subject to full monetary, legal, and equitable damages by an aggrieved party.”  This seems to suggest that Twitter is a co-author of the harassing posts and, as a creator, could be at risk of a lawsuit.

Malcolm Coles, a UK blogger who believes that Italy made the right decision in ruling against Google, argues that Google’s executives are absolutely responsible for the content published on their website, because they are responsible for the systems that police that content.  While the problem Madeline Rodriguez presents only seems relevant to cases in which the website owners have actual knowledge of illegal activity, Coles suggests that Google indeed should be able to have actual knowledge of every incidence of illegal activity on its website.  He blames Google for not having an adequate policing system, since the video in question was taken down two months after it was posted (though only two hours after an official request by the Italian police).  Coles claims that Google facilitated the posting of the content through its negligence, and that it “has ongoing control over that platform,” which involves it in content publishing in a way it does not involve a “camera-maker or tissue-box manufacturer.”

Harassment presents a new problem for user-driven content websites.  Unlike copyright infringement, it is difficult to find, often ambiguous, and obviously a sensitive subject.  It also involves publishing original opinions and user-generated content in a way that infringement does not.  While I certainly don’t think that websites such as Google, YouTube, Twitter, or the New York Times (for its comments sections) should be held liable for what others post, it is interesting to look at the fine line between facilitator and provider that these sites are walking in the case of harassment.