Wikipedia and Network Effects – by “Michael H – YLT2012”

Who among us hasn’t seen a teacher sneer at the thought of a student citing Wikipedia instead of traditional, non-digital sources? These teachers laud the immutability and consistency of books over time; however, this great strength of books can also be a significant drawback: they lack the generativity and adaptability of a more flexible medium, which can incorporate changing historiographical opinions, cutting-edge scientific research, and new discoveries. Indeed, while a biology textbook from the 1940s or an NBA records almanac from the 1980s is certainly “consistent,” each fails to incorporate new information as it becomes available.

Generativity and informational accuracy don’t have to be mutually exclusive, though. Indeed, a 2005 study by Nature found that, in a representative set of 42 scientific articles, Wikipedia contained 162 factual errors or misleading remarks while Encyclopedia Britannica contained 123. (1) To appreciate just how remarkable it is that a website relying on a decentralized, peer-production process could rival an information source with 100 paid, full-time editors and 4,400 contributors, it is necessary to look at Wikipedia’s underlying framework. (2)
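To put those raw counts on a per-article footing, simple arithmetic on the figures above suffices (the division is mine, not Nature’s):

```python
# Per-article error rates implied by the Nature study's raw counts:
# 162 flaws across 42 Wikipedia articles vs. 123 in Britannica's versions.
wikipedia_rate = 162 / 42
britannica_rate = 123 / 42
print(round(wikipedia_rate, 2))   # 3.86 errors per article
print(round(britannica_rate, 2))  # 2.93 errors per article
```

The 3.86 figure is the same per-article rate cited later in this post.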

Background

Using money earned from his humble beginnings in “erotic photography,” Jimbo Wales sought to create a free, online encyclopedia. In 2000 he conceived of Nupedia, which, in the vein of traditional encyclopedias, hired experts to write articles. Over the course of three years, Nupedia managed to churn out 25 articles. At this juncture, Jimbo Wales sought relief in Postel’s Law (“Be conservative in what you do; be liberal in what you accept from others”) and created a revamped version of Nupedia called Wikipedia, which allowed the general public to create and edit articles using wiki software. The rest is history. Today, Wikipedia contains 23 million articles, spans 285 languages, and appeals to 365 million readers around the globe. Currently, Wikipedia is the most widely used general reference website, with 2.7 billion page views monthly. (3) The triumph of Wikipedia over traditional, pay-for-use encyclopedias can be partly attributed to Gresham’s law, which can be summarized colloquially as: cheap and convenient drives out expensive and high-quality.

Wikipedia Model

Encouragement that the Wikipedia model—a model that relies on the collective wisdom of a large number of unpaid volunteers—could be viable was provided by the NASA ClickWorkers experiment, which ran from November 2000 to September 2001. In the experiment, unpaid volunteers visited NASA’s website to mark and classify craters and “honeycomb” terrain on Mars. (4) The study produced two surprising and interesting results. First, people are willing to engage in an unpaid, novel, and productive experience merely for the fun of it. Second, an amalgamation of data contributed by many unskilled volunteers can be virtually indistinguishable from the output of a trained worker. Thus, large groups of people are capable of producing high-quality work for free.
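The second result can be sketched in a few lines of Python. This is a hypothetical simulation, not NASA’s actual pipeline: the crater coordinates, error model, and volunteer count are all invented for illustration.

```python
import random
import statistics

random.seed(42)

# Hypothetical crater location (invented for this sketch).
true_center = (120.0, 80.0)

# 500 volunteers each mark the crater with some individual error.
clicks = [
    (random.gauss(true_center[0], 5.0), random.gauss(true_center[1], 5.0))
    for _ in range(500)
]

# The crowd's estimate: a simple average of all the noisy markings.
estimate = (
    statistics.fmean(x for x, _ in clicks),
    statistics.fmean(y for _, y in clicks),
)
print(estimate)  # lands very close to (120.0, 80.0)
```

Individually, each click is off by several pixels; averaged together, the independent errors largely cancel, which is the statistical intuition behind the “virtually indistinguishable from a trained worker” finding.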


A Counterintuitive Proposition

It seems hard to fathom that a website that allows users cloaked in a veil of anonymity to edit the content of articles could rival the quality of Encyclopedia Britannica. In an attempt to understand the success of Wikipedia, it is instructive to observe Drachten, a city in the Netherlands that has chosen to forgo basic traffic regulations in an attempt to increase safety on the roads. The experiment in Drachten has shown early promise. Some attribute this to the difference between the effects of rules and standards. While a rule is a regulation that stipulates precise boundaries and is either followed or broken, a standard is more ambiguous and open to interpretation, calling for people to exercise sound judgment. While people might try to circumvent rules that they perceive to be imposed by arbitrary, external forces, they can become more considerate of others when their personal judgment is called upon. As a result, relaxing rules can have the paradoxical effect of causing people to adhere to the desired behavior more closely. (5)

Putting It All Together

So what do NASA and traffic regulations in the Netherlands have to do with Wikipedia, you might ask? These two anecdotes lend credence to the basic assumption of the Wikipedia model—that the general public is capable of producing near-scholarly work with minimal regulation. While the notion of many small contributions forming a remarkable finished product seems strange with respect to encyclopedia articles, consider the analogy of evolution: slight genetic mutations over time in individual agents within a population lead to the betterment of the species as a whole. A similar model is used in scientific research: major breakthroughs rest on the small contributions of many scientists. While this model may seem strange for information compilation, it is certainly not novel.

The Good, the Bad, and the Ugly

It is unsurprising that many of the flaws that arise concerning Wikipedia are quickly ameliorated; indeed, Wikipedia relies on the procrastination principle—rather than trying to forecast potential problems, it waits for a particular problem to arise and then fixes it. For example, immediately following initial reports of Michael Jackson’s death, “edit wars” ensued on Wikipedia regarding the veracity of these claims. Edit wars like these are kept in check by Wikipedia’s three-revert rule, which stipulates that an editor should not revert the same article more than three times in one day. Another example of Wikipedia’s remarkable ability to adapt lies in its response to criticism by a former editor-in-chief of Encyclopedia Britannica, Robert McHenry. When McHenry pointed out that Wikipedia failed to note the ambiguity associated with Alexander Hamilton’s birth year, a mistake of which Columbia and Encarta were also guilty, Wikipedia users corrected the error in under a week, a testament to how dynamic the website can be. These are just a couple of the controversies that Wikipedia has responded to effectively and expediently. (For more, see the Essjay Controversy and the Wikipedia Biography Controversy.)

My Take

When passing judgment on Wikipedia, I think it is important for us to view it in its proper context. Wikipedia is not meant to be a compilation of flawlessly written, perfectly crafted articles. When such a high threshold for quality is set for content, a bottleneck ensues, leading to an inability to cover certain relevant topics of interest. The three pillars that make Wikipedia so desirable—it’s free, convenient, and unparalleled in the breadth of its information—necessarily lead to a softening of stringent requirements for content quality and review. (You can’t have your cake and eat it too…) As an illustration of the incredible amount of interconnected information on Wikipedia, consider a game that I’m sure most people are familiar with: given topic X and topic Y, start at topic X on Wikipedia and get to a page about topic Y in Z clicks or fewer. For example, starting at Harvard Law School I was able to get to Lady Gaga in 4 clicks. (Harvard Law School → United States → American music → American pop music → Lady Gaga. Can you beat me?)
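The click game is, in effect, a shortest-path search over Wikipedia’s link graph. A minimal breadth-first-search sketch in Python, using a toy hand-built graph rather than real Wikipedia data (the pages and links below are assumptions for illustration):

```python
from collections import deque

# Toy link graph: each page maps to the pages it links to.
# (Invented for illustration; real articles link to thousands of pages.)
links = {
    "Harvard Law School": ["United States"],
    "United States": ["American music"],
    "American music": ["American pop music"],
    "American pop music": ["Lady Gaga"],
    "Lady Gaga": [],
}

def min_clicks(start, goal):
    """Breadth-first search: fewest link clicks from start to goal."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        page, clicks = queue.popleft()
        if page == goal:
            return clicks
        for nxt in links.get(page, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, clicks + 1))
    return None  # goal unreachable from start

print(min_clicks("Harvard Law School", "Lady Gaga"))  # 4
```

Breadth-first search explores pages in order of click distance, so the first time it reaches the goal it has found the minimum number of clicks.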

I do not understand Wikipedia haters. I think it is a losing battle to try to argue that, due to a small number of factual errors (3.86 per article, as determined by Nature), (1) Wikipedia is completely without redeeming value. At a bare minimum, I think one must concede that Wikipedia is beneficial for obtaining background information on a topic. To return to my initial anecdote, this rationale should at least preclude a teacher from scoffing at a student who includes Wikipedia in his or her works cited page. (Note that I have almost exclusively cited Wikipedia articles for this blog post.) If you are personally unsatisfied with the content of Wikipedia articles, you can ignore them entirely, contribute towards improving the articles, or pursue litigation against Wikipedia (although you almost certainly will be unsuccessful…).

Personally, one of my favorite qualities of Wikipedia is that it provides a consistent format across articles that are (at least to a degree) targeted towards the general public. As a student interested in technology and the natural sciences, I often have to read about scientific discoveries from the last couple of years, and frequently I have only two sources to turn to: the original research paper and Wikipedia (a testament to Wikipedia’s generativity). Bearing in mind the complexity of the topics, I seek to wrap my brain around the concepts by skimming Wikipedia before delving into the highly esoteric research papers. I believe that using Wikipedia in this manner is an appropriate use of the website. While many people possess a take-it-or-leave-it mentality when it comes to Wikipedia, I believe it is important to apply basic common sense and reasoning when deciding whether to use the website—if you can tolerate 3.86 errors in your reading on the Heisenberg Uncertainty Principle, then have at it; if not, close your laptop and head for the nearest university library.

(1) http://news.cnet.com/2100-1038_3-5997332.html
(2) http://en.wikipedia.org/wiki/Encyclop%C3%A6dia_Britannica
(3) http://en.wikipedia.org/wiki/Wikipedia
(4) http://en.wikipedia.org/wiki/Clickworkers
(5) http://futureoftheinternet.org/static/ZittrainTheFutureoftheInternet.pdf


The Evolution of Wikipedia – by “Anthony F – YLT2012”

At the beginning of my senior year of high school, something amazing happened: San Diego Unified School District (SDUSD) finally lifted its years-long block of Wikipedia from its Internet networks. Ever since I was in sixth grade, teachers had told my classmates and me about the terrors of Wikipedia, saying it was unreliable and characteristic of bad research.

SDUSD’s policy on Wikipedia, 2001-2010

To SDUSD’s credit, Wikipedia seemed like a pretty sketchy idea back then. The concept of a free-to-access encyclopedia that anyone can edit understandably made administrators wary. The risk of students being misinformed by Wikipedia seemed very high, and so we were taught to avoid the site at all costs, and Wikipedia was blocked from school servers. And given some of Wikipedia’s early blunders, like the Seigenthaler incident, in which a journalist was falsely implicated in the Kennedy assassinations, these concerns were not unwarranted.

Of course, this didn’t stop us from using Wikipedia. It was so much easier to just read the extensive and highly informative Wikipedia page on the American Revolution (or any other topic) than it was to peruse hundreds of links on Google. And it was easy to trick your teacher into thinking you didn’t do so thanks to the fantastic “References” and “External links” sections of every article.

Basic strategy for fooling your teacher

It wasn’t until Wikipedia’s 9th year that SDUSD lifted its ban on the website. The response by most students (and teachers too) was “It’s about time.” By 2010, everyone was using Wikipedia for research, whether they admitted it or not. It was easy, informative, and, with time, it was becoming much more reliable. As Wikipedia grew into an increasingly larger community of editors and volunteers, and as its leaders implemented new policies to assure accurate information, incidents of misinformation became exceedingly rare. Vandalism is now practically a non-issue on the site. Try messing around with a Wikipedia article right now. I guarantee it will be fixed within five minutes.

Of course, this doesn’t mean Wikipedia is now flawless, nor will it ever be. No encyclopedia is without errors, and we shouldn’t expect Wikipedia to be either. Temporary issues arise now and again. There are still probably hundreds of Wikipedia articles with inaccuracies that editors won’t pick up on.

The secret to Wikipedia’s popularity

I once met a graduate student who TAs a course in African history at University of California, Berkeley. She was telling me about how she instructs her students never to use Wikipedia as a resource when writing a research paper. She told me that every year, a specific research topic is assigned, and the Wikipedia article on this particular topic happens to have a factual error in it (I believe the error is the date of a battle). And every year, she uses this error to find out which of her students did not heed her advice.

In response, I asked this person why she hasn’t corrected the Wikipedia article. She didn’t know how to reply; clearly, she wanted to continue to exploit the error to uncover the Wikipedia users. I think this example illustrates exactly what is most wrong and what is most amazing about Wikipedia.

In my opinion, this person is as backwards-thinking as SDUSD’s old policy. Wikipedia thrives on its community of users, and it depends on these users to correct inaccuracies. When individuals choose not to do so, the community suffers and that is Wikipedia’s biggest problem.

Yet, at the same time, giving that editing power to the community makes Wikipedia the most powerful and dynamic encyclopedia in human history. When Denis Diderot began compiling the Encyclopédie with Jean le Rond d’Alembert in the 18th century, one of his main goals was to compile the works of many different philosophers and writers in an effort “to change the way people think.” In that regard, Jimmy Wales is the modern-day Diderot, compiling the knowledge of millions of individuals into one of the most extensive projects in recent years. But that project’s success depends entirely on whether we, the community, choose to accept it and allow it to advance.

Odi et Amo: Networked Collaboration – by “Varoon B – YLT2012”

Before the internet was the highly sophisticated, well-structured web of everything that we know it as today, the top ten search results for “Adolf Hitler” returned everything from Hitler’s biography to kitlers, kittens that look like Hitler.  No joke.

As the internet developed, the web—and all the information it contained—was structured.  As the web grew, it became an increasingly attractive resource, so more people began using the internet.  And then more and more followed suit.  And finally, even those people who used to hate on the internet joined the internet bandwagon.  This phenomenon is described by Metcalfe’s Law, named after a brainy hotshot who co-invented the Ethernet (but who also got his PhD at Cambridge Community College).  The idea behind the law is simple.  It basically states that the value of a network increases (really quickly) as the number of users in the network increases.  We can all relate to this trend.  After my friend Florian had to go back to Germany after studying abroad at my high school, he told me to get Skype.  And then my friend George told me that he had Skype, as did my other friend Danielle.  Downloading Skype allowed not only me to contact Florian, but also Florian to contact George and Danielle, and George to contact Florian and me, and Danielle to contact Florian, George, and me, etc.  You get the idea.  The value of the network grows on the order of n log n or n² as the number of users does.
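The quadratic flavor of the law comes from simple counting: with n users, the number of distinct pairs who can contact each other is n(n−1)/2, which grows on the order of n². A small sketch (the n log n variant is the more conservative alternative estimate):

```python
import math

def metcalfe_value(n):
    """Distinct pairs of users who can contact each other: n(n-1)/2."""
    return n * (n - 1) // 2

def odlyzko_value(n):
    """A more conservative estimate of network value: n * log(n)."""
    return n * math.log(n) if n > 1 else 0.0

# Florian, George, and Danielle form 3 possible conversations;
# my joining Skype as the 4th user bumps that to 6.
print(metcalfe_value(3), metcalfe_value(4))  # 3 6
```

One new user adds n−1 new possible connections at once, which is why each additional member makes the network disproportionately more valuable to everyone already in it.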

Before you dismiss this as some esoteric mathematical phenomenon, it might help to remember that this idea is related to a mind-blowing experiment conducted in the Netherlands.  The city of Drachten, with a population of 45,000 people, is verkeersbordvrij—free of all traffic signs.

If you’ve ever been to India and witnessed first-hand the anxiety that drivers there are subjected to in spite of all the road traffic signs, you may wonder what could have possessed anyone to propose something so radical.

But after two years of observing the unusual policy, the city witnessed a considerable decrease in accidents, and other cities in Europe began adopting similar policies. Perhaps surprisingly, the lack of strict, formal laws didn’t result in complete anarchy or dystopia.  The take-home lesson from Drachten is that sometimes, even in unexpected contexts, standards are more effective than rules; given how networks—whether road maps or social networks—grow so quickly in value, this observation is particularly salient when constructing the frameworks upon which we build networks like the internet.  Instead of feeling burdened by tons of laws to abide by, people can respect each other’s welfare more effectively when they are liberated from them. If people feel like they are part of a social group—they’ve got your back, you’ve got their back—the Internet Gods do their magic, and things just click.

These occurrences are particularly pronounced in peer production (think free software), which consists of three basic steps: producing, accrediting, and distributing content.  NASA Clickworkers, a project that distributed and crowd-sourced scientific tasks, demonstrated that the web maintains an altruistic, consensus-driven culture.  So many people are willing to devote their time and energy to things that don’t directly benefit them (or at least, not monetarily) that, in distributed computing projects, their combined computing power has surpassed that of the world’s fastest supercomputer.  Dang.  (Sidenote: Check out more distributed computing projects here.  Some of them, like RPI’s project to construct a 3-D image of the Milky Way galaxy, are really cool.)

NASA's Clickworkers project asked volunteers (instead of graduate students and scientists) with computers to demarcate craters on Mars.

Next, our peers have also seamlessly integrated the process of establishing relevance and accreditation into our virtual worlds.  I have yet to purchase an item from Amazon without having access to plenty of customer reviews (of both the product and the shipper if I’m buying a used book).  Amazon also includes that handy “customers who bought items you recently viewed also bought these items” bit that always tempts me into buying more stuff.  All of these services are ways of establishing relevance and accreditation.  The “related items” pitch by Amazon teases you with stuff that is almost always relevant or related to the thing you’re searching for or interested in, and all the customer reviews help establish the legitimacy of the product you’re thinking about purchasing.   These services have been integrated into the internet in more subtle ways, too.  Google’s PageRank algorithm (named after Larry Page, FYI) does this.  Pages that are linked to more frequently among more popular sites are prioritized in Google searches.  Thus, these links embedded within sites are a form of establishing relevance and accreditation.  Good websites will be linked to by other good websites more often, thus constructing a kind of peer-to-peer relationship among the sites we find on Google.
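The intuition behind PageRank can be sketched with a few lines of power iteration over a toy graph. This is a simplified illustration of the idea, not Google’s actual implementation (which handles dangling pages, spam, and billions of documents); the three-page graph below is invented:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy power-iteration PageRank; every page here has outgoing links."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Each page keeps a base share, then receives rank from its inlinks.
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                new[target] += damping * rank[page] / len(outgoing)
        rank = new
    return rank

# A tiny invented web: "c" is linked from both "a" and "b".
toy_links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(toy_links)
# "c", with the most inlinks, ends up ranked highest.
```

Each page passes its importance along its outgoing links, so a link from an already-important page counts for more, which is exactly the “peer-to-peer accreditation among sites” described above.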

The final step of peer production is distribution, which speaks for itself, though it is worth noting that distribution is cheap online.  Together, they all form a powerful combination.  Slashdot, Reddit, and Yelp all do these things in one form or another.  And so does Wikipedia, the king of online peer production.

Needless to say, Wikipedia is pretty darn awesome.  It’s grounded in a spirit of reporting in a neutral point of view, not conducting original research, using verifiable sources, and assuming good faith.  You don’t need me to praise Wikipedia for you to appreciate it.  We’ve all used it, and we will most likely continue to do so.

As a loyal consumer of Wikipedia, I will defend it to great lengths.  I also religiously consult Yelp every time I eat out.  However, I do think there are some drawbacks to commons peer production—or rather, to its potential consequences.  True, even though peer-produced projects like Wikipedia have been found to be about as accurate as Encyclopedia Britannica, Wikipedia could still be quite a bit more accurate, and the Seigenthaler incident is a reminder of this fact.  And true, the Essjay Controversy is proof that such endeavors are not perfect.  But those are not my objections.

Peer production raises the question of peer consumption.  Is it unreasonable to venture that peers—even if loosely defined—consume the things that their peers produce?  Perhaps this is a bit of a stretch.  Our peer networks do serve great functions, but relinquishing the asymmetrical allocation of power that characterized the institutional foundation of property also has consequences.  That power, traditionally reserved for the owner, itself performed a valuable service, much as the information (Yelp, what place has good food?  Is the service good?) embedded within networks and their collaborative webs does.  The absence of those distributed webs allowed those wielding ownership (power) a sense of authority, validity, and legitimacy.  The centralized information economy served a purpose just as the decentralized one does, but they have different consequences, which are already materializing and are most sinister when we think about where we get our information.

Not to get too meta (as this can apply to Facebook itself, not just to the use of Facebook), but don’t tell me you haven’t ever logged onto Facebook at the end of a long day, only to realize two hours later that you hadn’t read the news that morning and had just spent a ton of time (during which you meant to do homework) reading a random assortment of articles that your Facebook friends happened to upload.  A lot of people joke about getting their news from Facebook, and in many ways, that appears undesirable.

“A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa.”  -Mark Zuckerberg

Wait, what?!

Conservapedia, a conservative spin-off of Wikipedia, was founded in 2006 in response to Wikipedia’s alleged “liberal bias.”  The main page links to other pages including Why does science work at all?, Is science a game?, and The rules of the game.  The website claims that global warming is a liberal hoax and that homosexuality is caused, among other things, by liberal ideology creeping into education and by “psychological tactics used by homosexual activists.”  In all seriousness, propaganda has always existed, and it will always exist.  I just fear that, although peer production confers benefits that enhance all of our lives, it may also facilitate the degradation of a robust and transparent information economy, especially as we consume the products of peer production in an increasingly personalized internet age.  I’d guess that the primary consumers of Conservapedia are “peers” of its producers.  No one else would consult it seriously.  Peer production may beget peer consumption, and to the extent that we allow it to supplant our high-quality sources of information, the results are potentially damaging.

“It will be very hard for people to watch or consume something that has not been tailored to them.”  -Eric Schmidt, Executive Chairman of Google

Lolkatz – by “David C – YLT2012”

I had heard some people talking about some Facebook “chain-letter” type status that was going around, but it took a few more weeks before I saw this on my news feed one morning.

Is this real life?

This post made me feel a lot of things: anger, despair, and a deep sadness for my friend’s lack of understanding of how the world works. I could have just told him how wrong he was, but I applied the Munroe test and decided against it.

This type of posting reveals a sort of paradox in our culture today. Brad always says that in the future, everyone will have 15 minutes of privacy, an inversion of Andy Warhol’s famous line about 15 minutes of fame. The introduction of the Internet and social media sites like Facebook and Twitter makes it easier to publish everything. And when I say everything, I mean everything.

Srsly

TMI

So in a world where we are dishing our lives out on Facebook and Twitter, from vague statements to admissions of crimes, can we really expect any privacy? Apparently my friend thinks so.

This expectation might have a few bones to pick with Facebook’s terms of service, but I think that there’s an interesting question being put forth. Facebook is not necessarily the best option for privacy, but there are alternatives (even to Google now!) that offer a little more privacy. In the future, one of these types of sites may be more appealing or dominant in the social media world (please?), and it might be more reasonable to expect privacy. This could be a way to protect people under the test formed from Katz v. United States, where Katz made a private telephone call in a phone booth (back in 1965) and was recorded by a bug placed on the ceiling of said booth. Naturally, he challenged the recording as an unwarranted search, since Katz should be able to expect privacy when entering a glass booth that blocks people from hearing him.

Based on a true story
The movie interpretation of Katz v. US

As such, we now have a test: if the person expected privacy (subjectively) and society accepts this expectation (objectively), then any search in that situation requires a warrant. So in theory, if there were a massive shift in thinking in our society, we could make the Internet a place where we expect privacy and make it illegal for the government to access my sexy Spring Break photos.

Obviously there are some problems with this idea. The terms of service of these sites are not conducive to privacy, and therefore leave things like our e-mail open to search. However, there is the bigger issue of the Internet being an open space where people can post and host content that is available everywhere. We have to start putting up walls before we can begin to expect privacy online. Whether that’s closed groups and message boards (with a host willing to give you privacy) or some other method, it seems as though if this type of Internet becomes popularized, we can finally expect privacy online.

Except not really.


So even if we construct these walls, it seems as though California v. Ciraolo sets some weird precedents. There are ways to get around walls (or in this case, fences) that might not be obvious to us at first glance but are pretty innocuous and won’t usually invade our privacy (like a freaking airplane passenger); in my opinion, though, these can be manipulated to create what should be considered an unlawful search. While I don’t have an immediate example, it seems as though this could be brought up as a precedent to protect some roundabout way of illegally searching citizens on the Internet.

"Hey, does that look like weed to you?"

So where does that leave us? The Katz test allows society to change and come to accept different norms of what is supposed to be private. That is why the test is so powerful and useful. At the same time, we have very strange precedents on what constitutes a legal search that still have to be tested in the cyber arena. The Internet isn’t a private place now, but there is the potential for it to become one, and this issue of privacy is still being worked out and has yet to come to a head. But when it does, I think that both the Katz test and the Ciraolo ruling will be very important to remember.

4th Amendment v 2.0 – by “Nimisha G – YLT2012”

The Fourth Amendment in all its glory

The fourth amendment guarantees our right to be free from “unreasonable searches and seizures”  in our “houses, papers, and effects.” Basically, if we have a general right of privacy in a particular space, we are entitled to protection under the fourth amendment. So the police can’t just barge in on me dancing in the comfort of my home without a warrant. Thank goodness for that.

Fortunately, the fourth amendment has been around for a while, so courts have been able to iron out a lot of the nitty gritty. In Katz v U.S. (1967), the court defined a search as having occurred if someone has an expectation of privacy regarding the thing being searched and if that expectation of privacy is found to be reasonable. Now what in the world does reasonable mean? In Katz, federal agents recorded a conversation of Charles Katz making illegal gambling wagers from a telephone booth. The court ruled that Katz had a reasonable expectation of privacy in the telephone booth since he closed the door of the telephone booth. By closing the door, Katz made clear his expectation of privacy; thus, even though the recording devices the federal agents used were attached to the outside of the booth, the agents infringed upon Katz’s fourth amendment rights.

Reasonable Expectation of Privacy in the Digital World

Of course, the definition for what is considered a “reasonable expectation of privacy” is changing and depends largely on the circumstances. In Katz, Katz’s expectation of privacy in the telephone booth was reasonable given the physical boundaries Katz set by closing the door. But in the digital world, it is much more difficult to close the door.

Behind the closed door

Fortunately, we have a reasonable expectation of privacy on our computers, cell phones, and other digital devices. For example, as described in this recently published article on Arstechnica.com, police seizure of text messages on a cell phone was unconstitutional since the police infringed upon the cell phone owner’s expectation of privacy by conducting a warrantless search. As the court in State v. Smith (2009) ruled, we are actually entitled to a higher expectation of privacy with cell phones than closed containers or address books due to cell phones’ “ability to store large amounts of private data.” Courts are thus attempting to measure reasonable expectation of privacy on digital devices by comparing them to their physical analogs.

But as we become able to store more and more information in tinier spaces, even searches of limited parts of a particular device have higher probabilities of invading our privacy. Content that was never meant to be included in a search becomes, in essence, easier to ‘discover’ unintentionally.

Luckily, courts have already touched upon this issue. In U.S. v. Walser (2001), the court found that “[b]ecause computers can hold so much information touching on many different areas of a person’s life, there is greater potential for the ‘intermingling’ of documents and a consequent invasion of privacy when police execute a search for evidence on a computer.” So since computers and cell phones, as well as other devices, are able to hold so much information, it is more difficult to search them without invading privacy, even with warrants.

But courts have yet to strike a balance between reasonable expectation of privacy in digital devices and the rights to search and seizure granted to law enforcement through warrants.

Third Party Doctrine

Okay, so we have a reasonable expectation of privacy with physical devices. Woohoo!

The same must apply to whatever we have online, like our emails, Facebook conversations, etc., right?

Wrong.

....but it must be done!

According to the third party doctrine, reinforced by statutes like the Stored Communications Act, we lose our expectation of privacy as soon as we share something with a third party. This includes Gmail, Facebook, Yahoo, and other online service providers. These companies, by law, can turn over your information to the government. Granted, the government can’t go on a fishing expedition in your email account without due reason, but it can ask to have access to something more specific, like a particular email.

And the same applies to Facebook.

the death of online privacy

By exchanging information over the internet, you are essentially choosing to give up your ‘reasonable expectation of privacy,’ as far as the government is concerned. (Private companies, on the other hand, cannot get your information from internet service providers the same way that the government can.)

So the internet is essentially a public forum as far as your fourth amendment rights are concerned.


The Future of the 4th Amendment

As we grow more dependent on technology to store our personal information, the definition of a person’s reasonable expectation of privacy is more and more up in the air. The law hasn’t fully adapted to technological trends, but when it does, will we see the emergence of a 4th Amendment for the digital age?

Hopefully not.

 

 

The Miller Test and the Value of Obscene Speech – by “Zachary B – YLT2012”

The Miller Test for Obscenity, and Strict Scrutiny

Obscene speech holds an unusual place in constitutional law. It is the only category of speech that does not harm individuals yet has been denied First Amendment protection by the Court (1). Typically, restrictions on speech must pass strict scrutiny, which requires the government to prove a compelling governmental interest (such as the protection of national security) in regulating the speech, but there is no such high bar for bans on obscene speech. Rather, the legislature is allowed to ban obscene speech simply because doing so protects “a social interest in order and morality” (2). However, this power may only be used against speech that meets the Miller Test for obscenity established in Miller v. California. As defined in that case, obscene speech is speech that meets a particular three-prong standard. That standard is:

  1. The average person, applying contemporary community standards, would find that such speech, taken as a whole, appeals to the prurient interest (i.e. a morbid or degrading interest in sexual activity, as opposed to simply a curious interest).
  2. The speech depicts or describes, in a patently offensive way, sexual conduct specifically defined by the applicable state law.
  3. The speech, taken as a whole, lacks serious literary, artistic, political, or scientific value.
(WARNING: Con Law Terms Appear!)
I want to question the unique position of obscene speech in American law. My position is not that obscene speech should never be banned, but that it should be entitled to the same protection as most other forms of expression under the First Amendment. Speech that is protected under the First Amendment cannot typically be banned on account of its content unless the government can meet the difficult standard of strict scrutiny, which is the standard usually used when the government is restricting the exercise of a fundamental right. That standard states that the government cannot restrict or ban speech based purely on its content unless:
  1. The restriction or ban is justified by a compelling governmental interest.
  2. The law is narrowly tailored to achieve that interest (i.e. the law is not overbroad).
  3. The law is the least restrictive means capable of achieving that interest (i.e. there is no other law that would achieve the interest while allowing more rights to be protected).
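For readers who think in code, the two standards can be caricatured as toy boolean predicates. This is only a sketch: the function and argument names are my own invention, and real constitutional analysis is nothing like boolean logic.

```python
# Toy contrast between the two standards. All names are invented for
# illustration; judges, not booleans, apply the real doctrine.

def banned_under_miller(appeals_to_prurient_interest: bool,
                        patently_offensive_under_state_law: bool,
                        lacks_serious_value: bool) -> bool:
    """Miller: speech may be banned outright if ALL three prongs are met."""
    return (appeals_to_prurient_interest
            and patently_offensive_under_state_law
            and lacks_serious_value)

def ban_survives_strict_scrutiny(compelling_interest: bool,
                                 narrowly_tailored: bool,
                                 least_restrictive_means: bool) -> bool:
    """Strict scrutiny: a content-based ban stands only if the
    government clears ALL three hurdles."""
    return (compelling_interest
            and narrowly_tailored
            and least_restrictive_means)
```

Note where the questions point: Miller’s three prongs are all about the speech itself, while strict scrutiny’s three prongs are all burdens the government must carry about its own law.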
Without getting into the nitty-gritty details of the comparative applications of each standard, the precise definitions of particular terms, and the history of how each has been used, it isn’t easy to see exactly what the differences between the two are. Fortunately, there’s an intuitive explanation…
How Scalia Looks While Applying the Miller Test           … While Applying Strict Scrutiny

 

 

Strict scrutiny is much harder to pass, which is why bans on hate speech, requirements for the Boy Scouts to include gays, and quota-based affirmative action have all been deemed unconstitutional. Some legal scholars say that strict scrutiny is ‘strict in theory, fatal in fact’ because of how hard it is to pass. However, I think the main reasons for preferring strict scrutiny over the Miller test are grounded not in how much more difficult it is to pass, but in problems with the idea of obscenity as a separate category of speech and with the special rules applied in the Miller test.

Problem 1:  What are Contemporary Community Standards?

Unlike in 1973, it is now quite easy to broadcast speech over large areas using few resources (I’m doing so now!). If I were to post an obscene picture like the one below, what community standards would regulate it?

Just kidding, folks

The community to which I belong and to which I’m addressing my message is quite different from the community of individuals who could possibly view it. This issue became clear in United States v. Extreme Associates (3), when a porn producer in California was charged with violating obscenity laws in Pennsylvania because officers there were able to download his films. The court subsequently applied the community standards it was familiar with. This standard, if upheld, would lead to the law being applied under the community standards of the most conservative areas of the nation: because content providers can’t easily stop their broadcasts at state lines, police departments and prosecutors could selectively apply the community standards of a particular area at a national level.

Problem 2: What is ‘Value’, and Why Are We OK with Courts Deciding It?

The central concept enshrined in the third prong of the Miller test is that speech is not obscene if it has some value. This conception of free speech is quite dangerous, because it contains within it an instrumental view of speech. Rather than seeing speech, even objectionable speech, as inherently valuable, it views speech as merely instrumentally valuable to other goals, like literature, the arts, politics, or the sciences. Speech certainly is valuable to those endeavors in many cases, but even when it is not, it still serves social values. If speech that is considered obscene were entirely valueless, why would people produce it, own it, or view it? The court’s response is that those people only view it because of a prurient interest. Here is the philosophical issue, though: if the courts or the legislature, rather than citizens, have the power to decide what speech is valuable and what speech is not, why aren’t larger bans on ‘value-less’ speech permissible? The Miller test opens the door for the government to deem speech value-less and then ban it because it upsets some people. Finally, this is simply inconsistent with the court’s existing philosophy on speech: What value does hate speech have? Why then, given that it is value-less and upsetting to some, is it protected?

Problem 3: What is Obscenity? Problems with Vagueness

The overarching problem with the Miller test, however, underlies both the debate about value and the debate about community standards: vagueness. When the laws regarding obscenity are unclear, and the courts are unclear about whether or not those laws are constitutional, legitimate speech is chilled. Content producers will be hesitant to publish works of questionable value, because they want to avoid litigation. They will ensure that their works are consistent with the community standards of decency established in the most conservative areas of the country, rather than those of their intended audience or the nation as a whole, because they could be prosecuted, and even jailed, by courts in those conservative districts. The ultimate result of the Miller test is the elimination not only of obscenity, but of much legitimate speech.

The Application of Strict Scrutiny

The application of strict scrutiny to these issues would solve these problems. Strict scrutiny requires that laws be specific and narrowly tailored so that legitimate speech is curtailed as little as possible. It also requires that truly compelling interests be used to justify restrictions on speech. In some cases, such interests may exist to prevent actual harm to others, but a presumption that such an interest is always present with speech that is offensive or disturbing to some is not justified by constitutional theory or even basic reflection. Finally, strict scrutiny would require that laws be least-restrictive, giving content producers enough leeway to continue operating with borderline speech. Ultimately, there is no compelling reason to treat obscene speech differently from any other kind of speech when it comes to the constitutionality of restrictions or bans, so why should it hold such a unique place in constitutional law?

References and Further Readings

  1. Freedom of Speech and Press: Exceptions to the First Amendment, by Henry Cohen
  2. Roth v. United States, Majority Opinion, by Justice William Brennan
  3. United States v. Extreme Associates

 

Please Don’t Kick Gingers – by “Jennifer S – YLT2012”

As a girl who some may call “ginger,” with hair that looks occasionally red in certain lighting, I vividly remember the origins of “Kick a Ginger Day.” Thankfully no one kicked me, likely because of what I like to call the Gendered Ginger Effect (see Exhibit 1).
Exhibit 1: Gendered Ginger

The anti-ginger sentiment can be traced to the 136th episode of my favorite television show of all time, South Park. Aired on November 9, 2005, the episode, titled simply “Ginger Kids,” depicts Eric Cartman leading a Nazi-like campaign against pale, freckly redheaded kids, arguing that gingers are inhuman and soulless.

Exhibit 2: The Gingers of South Park

South Park‘s devoted fan base typically ranges from high-minded intellectuals laughing at social commentary (I like to think this is me) to mindless high schoolers laughing at fart jokes. Interestingly, as a precocious 14-year-old, I thought the episode was about how a charismatic leader with violent ambitions can influence the masses. Instead, the more juvenile fan base seriously latched onto the anti-ginger frenzy. Over the years this spawned a series of Facebook event pages, leading to the naming of November 20, 2008 as “National Kick A Ginger Day.” By that date, more than 5,000 people internationally had joined “Kick a Ginger Day.”

The participation in “Kick a Ginger Day” didn’t end on a Facebook wall. On November 20, 2008, redheaded kids in the United States, Canada, and the United Kingdom faced taunts, torments, and violence from their fellow classmates at school. One high school in Vancouver suspended 20 kids for beating up their ginger classmates. School administrators were baffled–how could an online group spur such violence? Did assaults on gingers, driven by “Kick a Ginger Day,” amount to hate crimes?

The “Kick a Ginger Day” event was created by a 14-year-old Canadian boy who faced an investigation for provoking what officials believed to be a rash of hate crimes across national boundaries. The boy has since apologized, and the page is no longer on Facebook. Ginger kicking did not end in 2008, however, and assaults have continued annually on November 20th. An entire website, www.kick-a-ginger-day.com, is devoted to the ongoing “celebration” of kicking gingers.

A now-infamous 2010 YouTube video, posted by a frustrated redheaded boy, further fueled online ginger hatred.

Exhibit 3: Gingers Do Have Souls

Ginger-bashing online escalated into quite literal ginger-bashing in the real world. What started as a comical commentary on racism in the fictional world of South Park evolved into the beating and bullying of real high school kids through the dissemination of hatred via social media. This is not quite cyber-bullying: these ginger kids were facing the brunt of the beatings in school hallways, not in online messages. This would be difficult to bring to trial as an instance of defamation, since no particular redheaded individuals were targeted in the online speech against gingers. This is, however, certainly hate speech: it legitimizes violence against a group of people. Free speech online does not equate to the free battery of ginger kids.

 

What has been seen, cannot be unseen – by “Christopher Z – YLT2012”

Our society is one that prides itself (and rightly so) on its free speech.  We do, however, apply certain restrictions on that free speech.  Some are quite obvious.  Everyone knows, for example, that they can’t indiscriminately yell “fire” in a crowded theater.  Doing so is obviously dangerous to all those involved unless there actually is a flaming inferno for people to run away from.

in a crowded theater... with no flaming inferno...

On the internet, however, there are no crowded theaters.  I can jump from crowded website to crowded website yelling the metaphorical “fire” without recklessly putting anyone in physical danger of death or injury by stampede.  For example, say there was this new startup website FakeStartupSite.com.  Say I had a grudge against the creator of this website, so I go on to reddit (or digg or wherever you prefer) and make a post saying that FakeStartupSite.com gives any visitor such a nasty computer virus that their whole machine crashes and leaves them stuck watching the nyan cat video on loop.

btw it was zombie themed when this posted.  Didn't know they changed themes...
Go ahead. Try it

 

This is a post I know to be a complete lie but assuming others believe it, this would hurt or even ruin FakeStartupSite.com’s business.  I have claimed something which I know to be factually incorrect with the intent of causing harm.  The legal term for this is libel.

Now this situation isn’t new to the internet.  Libel can take many forms.  There are other forms of unprotected speech that seem to be even more pertinent to the internet than libel.  Police watch for the spread of things like child pornography and classified information across the web.  Hate speech and threats are also unprotected speech.

Back in the day, when these kinds of statements could be easily distributed in newspapers, remedies were straightforward.  A newspaper doesn’t publish its material without reviewing it first.  The New York Times, then, is responsible for the contents of its paper.

But for a website like reddit to review all of its posts before allowing them to go online would stifle the whole appeal of the site.  The internet allows for user interactivity in a way that just wasn’t possible before its invention.  It’s just silly to expect a website to be able to filter out all of the bad material.  And to inhibit the ability of sites to allow user interaction would be to squander one of the greatest capabilities of the internet.

Our justice system, however, decided in Stratton Oakmont, Inc. v. Prodigy Services Co. that if a site performed any sort of editing at all, then it could be held liable for any unprotected material that made it through to the site, because it had in essence become a publisher.  Far from causing sites to implement even tougher censorship, this rule actually encouraged sites to remove all censorship in order to avoid being liable.

As anyone who has ever been to a completely unrestricted site can attest, there can be a real benefit to a censored site.  Facebook would be a much different place if it didn’t keep some of the more disturbing parts of the web off its pages.

Congress then passed the Communications Decency Act, Section 230 of which states that “No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene…”  This put in place a new rule that allowed sites to censor obscene materials in good faith without being liable for anything that falls through the cracks.
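The incentive flip described above, from Stratton Oakmont to Section 230, can be caricatured in a few lines of code. This is a sketch of the rule as the post describes it, with invented names, not legal analysis.

```python
# Caricature of the two liability regimes described above.
# Invented names; a gross simplification of the actual law.

def liable_under_stratton_oakmont(site_moderates: bool,
                                  bad_post_slips_through: bool) -> bool:
    """Pre-CDA rule: any editing made the site a 'publisher',
    liable for whatever slipped through anyway."""
    return site_moderates and bad_post_slips_through

def liable_under_section_230(site_moderates_in_good_faith: bool,
                             bad_post_slips_through: bool) -> bool:
    """Section 230: good-faith moderation no longer creates liability
    for user content that falls through the cracks."""
    del site_moderates_in_good_faith, bad_post_slips_through  # irrelevant
    return False
```

Under the old rule, the cheapest way to guarantee no liability was to never moderate at all; Section 230 removes that perverse incentive.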

And it’s a good thing Section 230 is still around.  Without it, the internet would be quite a different place.

I can call administrators dumb (over the internet) if I want to – by “Paul C – YLT2012”

*Picture used for educational purposes only, no copyright infringement intended.

Regardless of whether or not I actually believe the Dean of Yale College to be an idiot, I am allowed to say this. If Mary Miller or Yale University decided to take punitive action against me because of this image macro, I could sue them into oblivion.

Freedom of speech has been hammered into our heads since elementary school, so we instinctively know that statements like these are considered opinions, and we are entitled to them.


Of course, we also accept that there are some restrictions–we can’t just go around yelling “fire” in a theater, for example. We also can’t publish untrue, damaging statements about people in an attempt to defame them. That’s libel.

All of this is pretty basic. Now let’s put this in the context of the internet: If I used this blog entry to libel Someone Important, could that person sue the owner of this blog, or even the ISP of the server on which it’s hosted, for libel? Common sense tells us of course not. So does Professor Balkin.

The difference between suing a newspaper for libel and not suing a blogging service is that the former necessarily reviews its content before publication. With most online services, the provider isn’t really involved with the content. That’s how net neutrality, which is most often thought of as a market issue, becomes an issue of freedom of expression: The internet becomes a medium through which to exercise your right to freedom of speech, so regulating it wouldn’t please the old hats that wrote the Constitution.

So if you value your right to call whoever you want whatever you want online, do your part to support net neutrality. Business arguments may hold some sway, as ISPs are private, but once we take that first step, it’s a slippery slope down.

Dr. Horrible’s Horrible Blog: Act IV – by “Hunter F – YLT2012”

Ahaaa ha ha-ah ha ha. Ha haaa ah ha! Ha ha haha ha. Ahem.

So, you know, my evil laugh has come a long way now that I’ve made it into the Evil League of Evil, as you can probably hear. I mean, you can’t actually hear it, since I can’t record it. But from the uh, transcription up there you should be able to imagine just how horrible it’s become.

And before people start sending emails saying that I’m just trying to hide my laugh, for whatever reason, let’s be clear: I can’t record it because Moist broke my webcam, not because I’m ashamed of the laugh or because it isn’t horrible. Because it most definitely is horrible. Horrifyingly… horrible. You’ll just have to trust me here.

Ah ha-ah ha ha!

SO I know it’s been a long time since my last video blog (4 years?), and I know I haven’t been able to respond to my fanmail either. But now, the wait is over. Dr. Horrible is back. Tell your friends. Anyway, ever since my famously successful victory over Captain Hammer, I’ve been working hard on a new scheme. Remember the freeze-ray? The one that stops time? With the Evil Genius grant I got from the league, I was able to steal enough Fluffletonium to bring it up to a whole new level. So now it doesn’t just stop time. Now it can change time too – rewind, fast-forward, everything. Which is big. Really big. Figuratively.

That was maybe a year ago. Since then I’ve had to figure out what to do with it. Which, you know, is a lot harder than it sounds. I have a public image now, so, uh, just committing whatever crime I choose doesn’t work anymore. I’ve got to stick to a theme. Which in my case is an evil inventor theme. But I finally did come up with something, a plan that is, and it’s got everything: mayhem, vast sums of money, destroying the status quo… that evil inventor theme, the one I mentioned earlier…. This is it, boys and girls. This is the one to be excited about.  Brace yourselves.

I’m going to become the world’s greatest, most evil inventor, by stealing all the royalties and rights from every major inventor that has ever lived. And I’m going to do it by going back in time and patenting their own inventions before they had the chance. Thomas Edison, Alexander Graham Bell, Samuel Morse… all about to vanish from history. I will become the greatest inventor, ever. Next time you use your phone, or turn on a light bulb, or send someone a, uh, telegraph… you’ll have to pay. Just make those checks out to Dr. Horrible, P.H.D.

So here goes. See you at the aftermath.

PART II: Horrible’s horrible aftermath

Um. Well, the plan was not fully successful. I said I was going to steal inventions, and uh, I had every intention. But when I got there, to the past, things went, well, unsuccessfully. My original plan was to steal all of Benjamin Franklin’s inventions and patent them. Which I did do. Well, tried to do. I went back to the 18th century to steal his designs. I must have gotten the dates wrong, because he was supposed to be in France, not back in the states. Er, colonies. Didn’t realize he’d catch me stealing them, and I didn’t…

Benjamin Franklin threw the liberty bell at my head.

Not to worry though, I’m fine. Only the bell was damaged. Wow he was strong.

RIGHT, anyway I was able to make off with most of his designs and discoveries. So there was that. The US patent office didn’t exist yet, but that’s not a problem for, you know, a time traveler. Benjamin Franklin never patented any of his inventions, so I thought it’d be easy to get the credit myself.  Once I had the patent, anybody who was using a Franklin Stove would have to turn it in, and buy a replacement Horrible Stove from one of my licensed distributors.

I admit the product name could use some work.

But anyway, yeah! ahHHA! Evil mission accomplished…

"Your patent application for the 'Horrible Stove' has been denied..."

Except, it turns out it doesn’t work that way. A statutory bar had been put in place to prevent copycats (or time travelers) from obtaining patents on inventions and discoveries already in use. SO… I’d just have to try patenting one of Ben’s less widely used inventions, right? Well, turns out that doesn’t work either. Even if nobody uses an invention, as long as publicly available information exists about it, it’s not eligible for patent. Prior art, I think was the… uh, word. Term. Long story short, I came away empty-handed.
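Both of Dr. Horrible’s failures here come down to novelty. As a hypothetical sketch (a caricature of the statutory bars he describes, with invented names), the check he keeps failing looks something like:

```python
def eligible_for_patent(in_public_use: bool,
                        described_in_prior_art: bool) -> bool:
    """Toy novelty check: an invention already in public use, or already
    described in publicly available prior art, cannot be patented.
    (A caricature only; the real statutory bars have many conditions.)"""
    return not (in_public_use or described_in_prior_art)

# The Franklin Stove: already in widespread use, so barred.
# The obscure inventions: published descriptions exist, so prior art.
```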

So my first attempt was not 100% successful. Per se. Let’s hope second time’s the charm.

 

PART III: Second time is not the charm

No success this time either. I, uh..

hmm.

There’s good news and bad news. And the bad news is definitely bigger, so …

I tried to steal Thomas Edison’s patents. I thought the light bulb was a good one to start off with, since everybody uses them. Or did, before CFLs. And at first things went well. This time I checked the dates carefully, so I got to the patent office well ahead of Edison with my application. Annnnd it worked. Sort of. Until Edison contested the patent, claiming that his invention of the light bulb predated my own. I couldn’t believe it. I mean, Edison accusing someone else of stealing his invention? Irony, am I right?  Anyway, the earlier filing date of my application, which, you know, was the factor I had been counting on, only gave me a slight advantage. Apparently, all Edison needed to do to steal the patent back was to prove that he had come up with the idea before I had. Alas, I’m not so easily beaten. Edison should have known that I would fight back. And prevail. Because I have a P.H.D. in horribleness.

Plus, I've got a hand-cannon.

Quickly, I jumped back forty years and created private documentation proving that I had first conceived of the lightbulb decades before Edison. Ha! Dr. Horrible 1, Edison 0. ….At least those should have been the scores. Once again, it turns out that the people who make laws have taken precautions against time-travelers. I think it was Benjamin Franklin who must have warned them. Anyway, earlier conception alone isn’t enough – I was told that I would also have to prove that during the time between my conception and Edison’s conception I had worked with “reasonable diligence,” whatever that means, in the preparation of my patent application.

So I was stuck. I had supposedly conceived of the light bulb decades before Edison, and there was no way I could have worked diligently for FORTY YEARS on my application. I had to leave empty-handed. Again.
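The Edison fight turns on the old first-to-invent priority rule. Here is a hedged sketch of the contest as the blog describes it (invented function name; the real pre-AIA rule had far more wrinkles):

```python
def later_filer_prevails(conceived_first: bool,
                         reasonably_diligent: bool) -> bool:
    """Toy pre-AIA first-to-invent rule: the first filer presumptively
    wins, but the later filer prevails by proving earlier conception
    PLUS reasonable diligence from conception through to filing."""
    return conceived_first and reasonably_diligent

# Round 1: Edison, the later filer, conceived first and was diligent,
# so he beat Dr. Horrible's earlier filing date.
# Round 2: Dr. Horrible's backdated conception fails, since forty idle
# years are anything but "reasonable diligence".
```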

That was the bad news.

Good news! Right. So I’ve learned a lot from all this, and I’m going to use it. In just a few short hours I’ll be the most renowned and respected inventor you’ve never heard of. Why never? Because, boys and girls, I’m going to be famous in the future. I’ve identified the most valuable invention of the 21st century: the element Wonderflonium, first created in July, 2015.

DO NOT BOUNCE

I’ve already planted evidence in 2015 proving that I, Dr. Horrible, invented this new element myself in June, and that I worked diligently on preparing my application between June and July of 2015. On the day after the real inventor files his patent, I’ll present my evidence and walk away with the patent. Easy. Like candy from a baby.

 

Right?

 

 

——————————————————————–

(Ripped off of Doctor Horrible’s Sing-Along Blog)