As a girl who some may call “ginger,” with hair that looks occasionally red in certain lighting, I vividly remember the origins of “Kick a Ginger Day.” Thankfully no one kicked me, likely because of what I like to call the Gendered Ginger Effect (see Exhibit 1).
The anti-ginger sentiment can be traced to the 136th episode of my favorite television show of all time, South Park. Aired on November 9, 2005, the episode, titled simply “Ginger Kids,” depicts Eric Cartman leading a Nazi-like campaign against pale, freckly redheaded kids, arguing that gingers are inhuman and soulless.
South Park’s devoted fan base typically ranges from high-minded intellectuals laughing at social commentary (I like to think this is me) to mindless high schoolers laughing at fart jokes. Interestingly, as a precocious 14-year-old, I thought the episode was about how a charismatic leader with violent ambitions can influence the masses. The more juvenile fan base, however, seriously latched onto the anti-ginger frenzy. Over the years this spawned a series of Facebook event pages, leading to the naming of November 20, 2008, as “National Kick A Ginger Day.” By that date, more than 5,000 people internationally had joined “Kick a Ginger Day.”
The participation in “Kick a Ginger Day” didn’t end on a Facebook wall. On November 20, 2008, redheaded kids in the United States, Canada, and the United Kingdom faced taunts, torments, and violence from their classmates at school. One high school in Vancouver suspended 20 kids for beating up their ginger classmates. School administrators were baffled: how could an online group spur such violence? Did assaults on gingers, driven by “Kick a Ginger Day,” amount to hate crimes?
The “Kick a Ginger Day” event was created by a 14-year-old Canadian boy who faced an investigation for provoking what officials believed to be a rash of hate crimes across national boundaries. The boy has since apologized, and the page is no longer on Facebook. Ginger kicking did not end in 2008, however, and assaults have continued annually on November 20th. An entire website, www.kick-a-ginger-day.com, is devoted to the ongoing “celebration” of kicking gingers.
A now-infamous 2010 YouTube video, posted by a frustrated redheaded boy, further fueled online ginger hatred.
Ginger-bashing online escalated into quite literal ginger-bashing in the real world. What started as a comical commentary on racism in the fictional world of South Park evolved into the beating and bullying of real high school kids through the dissemination of hatred via social media. This is not quite cyber-bullying: these ginger kids were bearing the brunt of the beatings in school hallways, not in online messages. It would also be difficult to bring to trial as defamation, since no particular redheaded individuals were targeted in the online speech against gingers. It is, however, certainly hate speech: it legitimizes violence against a group of people. Free speech online does not equate to the free battery of ginger kids.
The border-blurring brought on by the Internet must just be driving oppressive regimes nuts. How are you supposed to control what information people get their hands on when it’s coming from the other side of the globe at the speed of light from people beyond the reach of your thugs and laws? Well, many such regimes have adopted the tactics of similarly minded paranoid conservative parents who don’t know what to make of the Internet. If the source is beyond their control, they can at least attempt to block it at the point of entry.
This puts the foreign companies providing the content in a bit of a pickle. They don’t want to lose their market share in the country in question, but they (hopefully) don’t want to facilitate oppression either. Or, they don’t want to look like they’re facilitating oppression. In fact, foreign companies are in a better position than citizens of the country in question, since they’re able to use their economic clout to influence policies without the same risks and restrictions that domestic actors face. So, striking a balance between these concerns is of great importance to the success and reputation of the company as well as the human rights situation in the oppressive country. Here are some of the options foreign companies have:
Cooperate & Facilitate
Do whatever the oppressive government wants you to. Stop doing things they want you to stop, and give them the information they demand.
Pros: You get to continue operating in the country. Market share and profit and stuff.
Cons: You’re doing evil, and everyone will hate you for it. You could also get in legal trouble in the US.
Example: Yahoo!, China, 2004. Pretty much the worst possible way to handle this sort of situation. In 2004, the Chinese government released a document warning journalists against reporting on sensitive topics because of the looming 15th anniversary of the 1989 Tiananmen Square protests. Journalist Shi Tao sent a brief of this document to the Asia Democracy Foundation via his Yahoo! e-mail account. The Chinese government found out and demanded Yahoo! hand over information about the sender. Yahoo! complied without even asking what it was for. As a result, Shi Tao was sentenced to ten years in prison. Yahoo! was criticized by every human rights organization in the book. Congress investigated the incident and later reprimanded Yahoo! for not giving it full details about what happened. Rep. Tom Lantos (D-CA) told Yahoo! founder Jerry Yang, “While technologically and financially you are giants, morally you are pygmies.” Yahoo! was sued in the US on behalf of Shi Tao and another journalist, and it settled out of court for an undisclosed sum. There is still a campaign to boycott Yahoo! because of this, and I still refrain from using Yahoo! services. Oh, did I mention they did the same thing two years earlier, resulting in another ten-year prison sentence for journalist Wang Xiaoning? And were complicit in helping to convict Li Zhi and Jiang Lijun, two other government critics?
Example: SmartFilter, Middle East. McAfee’s SmartFilter software has been used by governments in Tunisia, Sudan, Oman, the UAE, Kuwait, Bahrain, and Saudi Arabia to block certain Internet content from reaching users. McAfee makes no effort to prevent or prohibit governments from using this software, which is allegedly aimed at homes and schools. The software includes a database of more than 25 million blockable websites in various categories. Such filtering databases, as well as selective algorithms, have been shown time and again to be massively flawed in the categories they attribute to various websites. But instead of simply inconveniencing a student who wants to research safe sex, AIDS, or religious tolerance (God forbid), the software alters the information that can reach an entire country of Internet users. The OpenNet Initiative also accused Iran of using SmartFilter, though the US’s embargo against Iran would prohibit the sale or licensing of this software to Iran. The company has said that Iran pirated its software. Some say Iran now has its own censorship software. While McAfee doesn’t market its software to oppressive regimes or for the purpose of mass censorship, some selectivity in whom it licenses the software to, or the scale at which it allows it to be implemented, wouldn’t be a bad idea. It wouldn’t stop governments from pirating it, but at least it would keep McAfee from appearing complicit in censorship.
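Mechanically, this kind of filtering is simple, which is part of the problem. Here is a minimal sketch of category-based URL blocking, with invented sites, categories, and block lists; it illustrates the general technique, not McAfee’s actual database or code:

```python
# Toy sketch of category-based URL filtering, loosely modeled on how
# products like SmartFilter are described as working. All sites,
# categories, and policies below are invented for illustration.

# A filtering database maps hostnames to content categories.
FILTER_DB = {
    "example-health-info.org": "health",   # a legitimate sex-ed resource, miscategorized broadly
    "example-casino.com": "gambling",
    "example-news.com": "news",
}

# The operator -- a school, or an entire national ISP -- picks categories to block.
BLOCKED_CATEGORIES = {"gambling", "health"}  # note: "health" swept in wholesale

def is_blocked(host: str) -> bool:
    """Return True if the host's assigned category is on the block list."""
    category = FILTER_DB.get(host, "uncategorized")
    return category in BLOCKED_CATEGORIES

print(is_blocked("example-casino.com"))       # True
print(is_blocked("example-health-info.org"))  # True -- the miscategorization problem
print(is_blocked("example-news.com"))         # False
```

The flaw the critics point to lives in the database: once a site is miscategorized, or a broad category gets swept onto the block list, the filter suppresses it for every user behind the gateway, whether that gateway serves a classroom or a country.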
Unfortunately, there are way more examples of this response than any of the responses below.
Capitulate, But Only So Far
Set a limit to your capitulation while acknowledging the authority of the host government as set out by its laws.
Pros: You might get to continue operating in the country without giving in entirely. You would also help make it clear that there is a limit to what governments can force foreign Internet companies to do.
Cons: The government might still prevent you from operating there. You might not get the benefit of being seen as standing up to oppression.
Example: YouTube, Turkey, 2007. The Turkish government mandated that Turkish telecom providers block access to YouTube because it hosted some videos that were said to insult Mustafa Kemal Atatürk. Nicole Wong, deputy general counsel of Google, which owns YouTube, decided that Google would block Turkish IP addresses from accessing videos that clearly violated Turkish law. Later, though, a Turkish prosecutor demanded that Google block users anywhere in the world from accessing such videos. This is where Google drew the line, and they refused to capitulate to the unreasonable request. YouTube remained blocked in Turkey until 2010, when Turkey’s Transport Minister, in charge of Internet issues, lifted the ban, proclaiming that “common sense prevailed”. So, to the dismay of the conservative elements that had demanded the ban, internal pressure and the recognition of YouTube’s importance ultimately won out.
Move Services Out of the Offending Country
The more of a company’s operations that physically take place within the offending country, the more power the government can assert over the company. Partnering with local firms presents similar problems. Locating data storage in particular outside of the country allows in-country users to move their data farther from the reach of their government. There are few examples of companies making this kind of drastic business change, but the choices companies make before starting business in other countries affect their relationship to freedom of speech controversies in the future. For example, Google and Microsoft don’t partner with Chinese companies (though they have their own workers in China), whereas Skype and Yahoo do, and the latter companies have lost much more face in controversies surrounding censorship in China.
Pros: It’s likely that the offending country’s government will block your services anyway, but at least the option is there should they choose to unblock them in the future. There’s also the advantage of preserving your reputation and being seen as not doing evil.
Cons: Your services might very well get blocked. Your local workers or former local workers could face trouble.
Example: Google, China, 2010. When Google discovered hacking attempts targeting the Gmail accounts of Chinese human rights activists, which would put those activists in great danger, they reacted harshly. They announced that they would stop censoring search results on Google.cn, which they had previously agreed to do in order to be allowed to start operations in China. They even went so far as to say that they would shut down their operations in China entirely if the government continued causing problems. Ultimately, Google began redirecting Google.cn users to its uncensored Hong Kong site, Google.com.hk. While Hong Kong is technically part of the People’s Republic of China, it operates under radically different laws regarding freedom of speech. As is often the case with China’s Internet blocking, the accessibility of Google.cn varies by time and location.
Shut Down Services
No longer offer your services to the offending country and its Internet users.
Pros: You stand your ground, and the offending government will (well…might) think twice before they try to muscle a foreign company again.
Cons: You’re no longer in that country’s market. Whatever limited information or services you were able to provide or would be able to provide are no longer available to users in that country. Your local workers or former local workers could face trouble.
Example: Websense, Yemen, 2009. Websense, like SmartFilter, is web filtering software, and like SmartFilter, it is not intended or marketed as a tool for government censorship. Actually, it was what my high school used to ban naughty (and not so naughty) things. But unlike SmartFilter, Websense has an explicit anti-censorship policy under which it “does not sell to governments or Internet Service Providers (ISPs) that are engaged in government-imposed censorship”. When Websense discovered that Yemeni ISPs were using its software to implement government-imposed mass censorship, it prohibited them from accessing updates to the software.
Ignore the Government
There are plenty of services that presumably carry content oppressive governments wish to block and have probably requested be taken down, but controversy rarely arises when companies simply ignore those requests. It may even be useful to be linked to free speech and democracy movements, as is the case with Twitter. Some users will undoubtedly find a way to access your website, and it will be much more valuable to them if, when they get there, there is freedom of speech.
Pros: Like the previous several options, you get some good karma by not giving in to an oppressive government. You remain in control of your content. By not engaging the government, the issue may not go any further, and the government may not end up enraged and looking for a way to get revenge or assert its power.
Cons: You may get blocked. You may get in legal trouble if you ignore government requests.
Example: Twitter. Twitter’s strategy is to not even engage with oppressive governments about getting its website unblocked. The company focuses instead on developing ways to circumvent censorship. As Twitter CEO Evan Williams put it, “The most productive way to fight that is not by trying to engage China and other governments whose very being is against what we are about.” By continuing to host politically controversial content, Twitter has become central to many opposition movements. Even though it is at least partially blocked in Iran, many Iranian dissidents communicate using Twitter, and a lot of information makes it out of Iran via Twitter.
I shouldn’t need to explain why it’s bad to help governments oppress their citizens. So I won’t. But all too often, the moral repercussions of business decisions like these get overlooked because they don’t have overt monetary value. Morality, though, is inextricably linked to reputation, which is inextricably linked to success. Part of Google’s success is that it is seen as not doing evil. In a world where people are increasingly wary of big corporations (see: all those “Occupy” movements right now), it’s important that a company be seen as a friend, not an enemy.
The main issue with the myriad user-generated content sites (YouTube, Vimeo, etc.) is that they offer a highly efficient network for the use and spread of copyright-infringing content, obviously. The entire Viacom v. YouTube lawsuit is based upon Viacom’s belief that YouTube, as a business, benefits from the availability and use of infringing content on the website. Viacom is pursuing YouTube on the basis that it fails to meet the criteria for protection under DMCA § 512, complaining that YouTube is neither “act[ing] expeditiously to remove, or disable access to, the material” nor “not receiving a financial benefit directly attributable to the infringing activity”. Viacom claims that YouTube has not put enough effort and technology into making sure that unlicensed copyrighted content is removed from the site. That is why, in the midst of a legal battle, the timing of YouTube’s unveiling of the beta version of its new Video and Audio Identification software is so beautifully bratty.
Like a stubborn middle schooler who only wants to do things their way, YouTube, after being sued, is taking a step toward efficiently handling infringing content. However, it wants to make it known that it doesn’t have to. In a recent blog post, YouTube product manager David King stated that the new Video and Audio Identification “goes above and beyond our legal responsibilities”. But like a stubborn middle schooler, YouTube’s creation may prove to further the reach of the copyright “take-down” abusers it is currently fighting in court.
YouTube’s Video Identification uses copyright-owner-submitted material as a template for an automated system that matches the audio and/or video of user-uploaded content. When matches are found, copyright owners have the option to block the content, track it, or monetize it. YouTube has essentially given greedy copyright owners an easy system for enacting frivolous takedowns on any video that bears any semblance to their copyrighted content. The idea behind the software is justifiable, and the monetary motivation is obviously at the heart of the system: Video Identification allows copyright owners to, in effect, license their content to YouTube for monetary return. The system also lets YouTube take a backseat on takedown claims, putting the full onus on the system and the copyright owner. However, this passive stance provides the opportunity for unbridled abuse. With what seems like little room for appeal from the user, the software deals a serious blow to any claims of fair use. At the heart of Lenz v. Universal was the claim that Universal did not act in “good faith” and did not consider the application of fair use when it issued a takedown notice over Lenz’s use of Prince’s “Let’s Go Crazy.” This new software may take takedowns even further. Imagine its potential: what if someone posts a video of a car-ride conversation that happens to include the car radio playing a famous song in the background? The software potentially has the power to match that background song, which has nothing to do with the video itself, and trigger a takedown at the will of the copyright holder. Takedowns could be issued even more frivolously than ever before, all while the full burden of “good faith” and fair-use consideration rests on the shoulders of YouTube and its new software.
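To make the concern concrete, here is a hypothetical sketch of how such an identification system might work at a very high level: rights holders register reference material and a policy, and any upload whose fingerprint sufficiently overlaps a reference gets that policy applied automatically. The hash-based fingerprinting, the threshold, and the registry structure are all assumptions for illustration, not YouTube’s actual implementation:

```python
# Hypothetical sketch of an automated content-identification pipeline.
# Fingerprinting here is just sliding-window hashes over raw samples;
# real systems use far more robust perceptual fingerprints.
from hashlib import sha256

def fingerprint(samples, window=4):
    """Reduce a stream of sample values to a set of window hashes."""
    return {
        sha256(bytes(samples[i:i + window])).hexdigest()
        for i in range(len(samples) - window + 1)
    }

def match_score(upload, reference):
    """Fraction of the reference's windows found in the upload."""
    ref = fingerprint(reference)
    return len(fingerprint(upload) & ref) / len(ref)

# Each rights holder registers a reference track plus a chosen policy:
# "block", "track", or "monetize". (Invented registry for illustration.)
REGISTRY = [
    {"owner": "LabelCo", "reference": [1, 2, 3, 4, 5, 6, 7, 8], "policy": "monetize"},
]

def scan_upload(upload, threshold=0.5):
    """Apply each owner's pre-selected policy to any upload that matches."""
    return [
        (entry["owner"], entry["policy"])
        for entry in REGISTRY
        if match_score(upload, entry["reference"]) >= threshold
    ]

# A video whose background audio happens to contain the registered track:
print(scan_upload([9, 9, 1, 2, 3, 4, 5, 6, 7, 8]))  # [('LabelCo', 'monetize')]
```

Note what is absent: nothing in that loop asks whether the match is a fair use, a background radio, or a licensed clip. The owner’s pre-selected policy fires on any sufficiently large overlap.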
It will be interesting to see how YouTube chooses to implement this new technology. According to its website, the new software will make money, increase fan interaction, reduce infringement, and provide market data. With the rapid increase in ads, iTunes plugs, and competition, and a deal with Universal resulting in the creation of Vevo (YouTube’s music video channel), hopefully money-making is still second to users on YouTube’s priority list. If not, this may be the end of YouTube’s reputation as a place to “Broadcast Yourself,” hurting it in the long run.
Although the public has been denied access to the negotiations (note: the RIAA and MPAA don’t count as ‘the public’), drafts of ACTA’s internet enforcement section leaked online last week, providing a chilling glimpse into the covert negotiations among world leaders and the possible future of global internet policing. Introduced by the US administration in 2007, the Anti-Counterfeiting Trade Agreement (ACTA) proposes making ISPs liable for content that subscribers transfer using their networks, forcing ISPs to operate “automatic technical processes” to detect copyright-infringing activities. While section 512 of the DMCA already establishes third-party liability in the US, ACTA extends the liability of intermediaries beyond notice-and-takedown to possible Deep Packet Inspection (bye bye net neutrality?) and the contentious three-strikes rule (which France passed in 2009, banning three-time accused file-sharers from the internet), thus exacerbating the DMCA’s shortcomings and spreading them to the other negotiating parties: the E.U., Canada, Mexico, Australia, New Zealand, South Korea, Singapore, Jordan, Morocco, and the United Arab Emirates.
The current digital gatekeeping model established by the DMCA promotes a ‘shoot now, ask questions later’ approach to handling allegations of copyright infringement. While the DMCA imposes the burden of proof on copyright holders and outlines the necessary elements of a notification of copyright infringement, the court found in ALS Scan, Inc. v. RemarQ Communities, Inc. that copyright owners do not have to identify all infringing material (“imperfect notice”), thus shifting this responsibility to service providers. Although a complaint of infringement does not prove that infringement took place, the DMCA allows ISPs to take down content without investigating whether the material was truly infringing, thereby shifting the burden of proof onto subscribers.
The only recourse subscribers have is filing a counter-notice of a “good faith belief that the material was removed or disabled as a result of mistake or misidentification…” Intermediaries risk losing their safe harbor if they do not take down allegedly infringing works, whereas the consequences of wrongful takedowns amount to little more than disgruntled bloggers. In practice, there is no enforcement of fair use considerations prior to takedowns; section 512 of the DMCA explicitly imposes the burden of proof on copyright holders, but its third-party provisions ultimately shift the burden to subscribers.
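The sequence described above can be sketched in a few lines of code, which makes the burden-shifting easy to see. This is a simplified model of the § 512 process with invented class and method names, not legal advice:

```python
# Simplified model of DMCA § 512 notice-and-takedown, for illustration only.
class HostedWork:
    def __init__(self, title):
        self.title = title
        self.online = True
        self.counter_noticed = False

    def receive_notice(self):
        # The ISP takes the material down on receipt of a compliant notice,
        # WITHOUT adjudicating whether it actually infringes -- that is how
        # the ISP keeps its safe harbor.
        self.online = False

    def receive_counter_notice(self):
        # The subscriber asserts a good-faith belief of mistake or
        # misidentification.
        self.counter_noticed = True

    def restore_after_waiting_period(self, claimant_filed_suit=False):
        # Under § 512(g), the ISP restores the material after 10-14
        # business days, unless the claimant has filed suit.
        if self.counter_noticed and not claimant_filed_suit:
            self.online = True

video = HostedWork("dancing baby clip")
video.receive_notice()          # content comes down first...
assert not video.online         # ...questions are asked later
video.receive_counter_notice()
video.restore_after_waiting_period()
assert video.online
```

The material comes down immediately on notice; it goes back up only if the subscriber affirmatively counter-notices and the claimant declines to sue, and in the meantime the subscriber bears the burden of proving non-infringement.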
Need to brush up on copyright law? Check out this video before it’s taken down…
As the amount of information online far exceeds copyright holders’ ability to monitor the dissemination of their works, they are becoming increasingly reliant on targeting internet hubs managed by intermediaries. In turn, intermediaries like YouTube are going “well above and beyond our legal responsibilities” by turning to automated technologies to keep up with this inundation of user-generated and uploaded content; YouTube’s ContentID system enables copyright owners to automatically identify their works in YouTube-hosted videos and subsequently monetize, track, or block the content. Technology that automatically screens for the presence of copyrighted material inevitably steamrolls over fair uses in favor of ease for copyright holders and intermediaries. As a standard rather than a rule, fair use cannot be identified by an algorithm (real lawyers have enough trouble accomplishing that as it is), and technologies such as ContentID will further contribute to mass takedowns, leaving subscribers guilty until they prove themselves innocent.