Evolving Standards of Creepery – by “Christopher S – YLT2012”

Imagine...

The Internet seems to care less about privacy than it used to. Sure, there's a minor uproar every time Facebook is called out on releasing some personal data, but the web isn't the idealistic bastion of anonymity John Perry Barlow declared it to be in the '90s. The key phrase here is every time. People make noise for a few days, then go right back to using it.

At first this seems like a lack of interest, but that’s not necessarily what’s going on. It could just as well be a sign that the supposed breach of privacy wasn’t actually a problem, or that it has been fixed. To their credit Facebook has responded well to specific privacy complaints (see this and this). And even if they hadn’t, the information in question wasn’t particularly dangerous – the worst that could happen is that advertisers get your name and some things you’re interested in.

To me the brief uproars show that people do care about privacy, but it hasn’t yet become a real issue. The phrase “if you’re not paying for something, you’re not the customer; you’re the product being sold” contains a grain of truth, but it’s an overly negative way of looking at the situation. It would be more accurate to say that you are paying, just with your personal information rather than cash. To me this is an acceptable arrangement. Advertising is essential to Internet companies, and if they can make more revenue by tailoring ads rather than making them more obnoxious I’m OK with that. As long as people are aware of the transaction, there’s nothing wrong with the Internet working this way.

That people seem to care less than they once did about privacy and other related tech issues isn't a sign of growing complacency as much as of changing attitudes toward technology. In the '90s there was the "Law of the Horse" debate about the Internet: whether tech issues should be treated as entirely new, or whether they can be dealt with using existing laws and social norms. We struggle with this today when we complain about online privacy issues even when the Internet gives us more control than we have in the real world (I don't mean to say that this invalidates the issue; it's not unreasonable to argue that the Internet should be a place with more privacy).

As a matter of personal preference, I think the Internet should be kept more private than the outside world. I like that ideas can be judged on their own merit without reference to a specific speaker. But as long as privacy policies are clear, there's nothing ethically wrong going on here. In my view, what has been framed as a legal or ethical issue comes down to what kind of place you think the Internet should be. This is a difficult question, one that I don't think a lot of people have thought about, but it's extremely important. That debate might never definitively end (and it shouldn't), but if we want to answer the privacy question, that's what we need to talk about.

Wikipedia and Network Effects – by “Michael H – YLT2012”

Who among us hasn't observed a teacher sneer at the thought of a student referencing Wikipedia over traditional, non-digital sources? These teachers laud the immutability and consistency of books over time; however, this great strength of books can also be a significant drawback: they lack the generativity and adaptability to incorporate changing historiographical opinions, cutting-edge scientific research, and innovative discoveries that a more flexible medium provides. Indeed, while a biology textbook from the 1940s or an NBA records almanac from the 1980s is certainly "consistent," each fails to incorporate new information as it becomes available.

Generativity and informational accuracy don’t have to be mutually exclusive though. Indeed, a study by Nature in 2005 found that in a representative set of 42 scientific articles Wikipedia contained 162 factual errors or misleading remarks, while Encyclopedia Britannica contained 123. (1) To realize just how remarkable it is that a website that relies on a decentralized, peer-production process could rival an information source with 100 paid, full-time editors and 4,400 contributors, it is necessary to look at the underlying framework of Wikipedia. (2)
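(A quick back-of-the-envelope check of what those Nature numbers work out to per article, figures that come up again later in this post:)

```python
# Back-of-the-envelope arithmetic for the 2005 Nature comparison cited above:
# 42 science articles reviewed, 162 flaws found in Wikipedia, 123 in Britannica.
articles = 42
wikipedia_errors = 162
britannica_errors = 123

print(f"Wikipedia:  {wikipedia_errors / articles:.2f} errors per article")   # ~3.86
print(f"Britannica: {britannica_errors / articles:.2f} errors per article")  # ~2.93
```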

Background

Using money earned from his humble beginnings in "erotic photography," Jimbo Wales sought to create a free, online encyclopedia. In 2000 he conceived of Nupedia, which in the vein of traditional encyclopedias hired experts to write articles. Over the course of 3 years, Nupedia managed to churn out 25 articles. At this juncture, Jimbo Wales sought relief in Postel's Law ("Be conservative in what you do; be liberal in what you accept from others") and created a revamped version of Nupedia called Wikipedia, which allowed the general public to create and edit articles using wiki software. The rest is history. Today, Wikipedia contains 23 million articles, spans 285 languages, and appeals to 365 million readers around the globe. Currently, Wikipedia is the most widely used general reference website with 2.7 billion page views monthly. (3) The triumph of Wikipedia over traditional, pay-for-use encyclopedias can be partly attributed to Gresham's law, which can be summarized colloquially as "cheap and convenient drives out expensive and high quality."

Wikipedia Model

Encouragement that the Wikipedia model—a model that relies on the collective wisdom of a large number of unpaid volunteers—could be viable was provided by the NASA ClickWorkers experiment, which ran from November 2000 to September 2001. In the experiment by NASA, unpaid volunteers visited NASA’s website to mark and classify craters and “honeycomb” terrain on Mars. (4) The study produced two surprising and interesting results. First, people are willing to engage in an unpaid, novel, and productive experience merely for the fun of it.  Second, an amalgamation of data contributed by many unskilled volunteers can be virtually indistinguishable from the output of a trained worker. Thus, large groups of people are capable of producing high-quality work for free.
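As a rough illustration of why that second result is plausible, here is a toy simulation (entirely made-up numbers, not NASA's actual data) in which the average of many noisy volunteer markings lands about as close to the "true" crater position as a single trained worker's marking:

```python
# Toy illustration of the ClickWorkers result: averaging many noisy, unskilled
# markings can approach a single expert's accuracy. All numbers are invented.
import random

random.seed(42)
true_x, true_y = 120.0, 80.0          # "true" crater position (arbitrary units)

def volunteer_click(noise=8.0):
    """One volunteer's marking: the true position plus a lot of random error."""
    return (random.gauss(true_x, noise), random.gauss(true_y, noise))

def expert_click(noise=1.0):
    """A trained worker's marking: much smaller error."""
    return (random.gauss(true_x, noise), random.gauss(true_y, noise))

clicks = [volunteer_click() for _ in range(500)]
avg_x = sum(x for x, _ in clicks) / len(clicks)
avg_y = sum(y for _, y in clicks) / len(clicks)

print("Average of 500 volunteers:", (round(avg_x, 2), round(avg_y, 2)))
print("One expert's marking:     ", tuple(round(v, 2) for v in expert_click()))
```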


A Counterintuitive Proposition

It seems hard to fathom that a website that allows users cloaked in a veil of anonymity to edit the content of articles could rival the quality of Encyclopedia Britannica. In an attempt to understand the success of Wikipedia, it is interesting to observe a city in the Netherlands, Drachten. The city has chosen to forgo basic traffic regulations in an attempt to increase safety on the roads. The experiment in Drachten has shown initial promise. Some attribute this to the difference between the effects of rules and standards. While a rule is a regulation that stipulates precise boundaries and is either followed or broken, a standard is more ambiguous and open to interpretation, calling for people to exercise sound judgment. While people might try to circumvent rules that they perceive to be imposed by arbitrary, external forces, they can become more considerate of others when their personal judgment is called upon. As a result, relaxing rules can have the paradoxical effect of causing people to adhere to the desired behavior more closely. (5)

Putting It All Together

So what do NASA and traffic regulations in the Netherlands have to do with Wikipedia, you might ask? These two anecdotes lend credence to the basic assumptions of the Wikipedia model—that the general public is capable of yielding nearly scholarly work with minimal regulation. While the notion of many small contributions forming a remarkable finished product seems strange with respect to encyclopedia articles, consider the analogy of evolution: slight genetic mutations over time in individual agents within a population lead to the betterment of the species as a whole.  A similar model is used in scientific research: major breakthroughs rest on the small contributions of many scientists. While this model may seem strange for information compilation, it is certainly not novel.

The Good, the Bad, and the Ugly

It is unsurprising that many of the flaws that arise concerning Wikipedia are quickly ameliorated; indeed, Wikipedia relies on the procrastination principle—rather than trying to forecast potential problems, it waits for a particular problem to arise and then fixes it. For example, immediately following initial reports of Michael Jackson's death, "edit wars" ensued on Wikipedia regarding the veracity of these claims. In response to such edit wars, Wikipedia adopted the three-revert rule, which stipulates that an editor should not revert the same article more than three times in one day. Another example of Wikipedia's remarkable ability to adapt lies in its response to criticism by a former editor-in-chief of Encyclopedia Britannica, Robert McHenry. When McHenry pointed out that Wikipedia failed to note the ambiguity associated with Alexander Hamilton's birth year, a mistake of which Columbia and Encarta were also guilty, users on Wikipedia corrected the error in under a week, a testament to how dynamic the website can be. These are just a couple of the controversies that Wikipedia has responded to effectively and expeditiously. (For more, see the Essjay Controversy and the Wikipedia Biography Controversy.)

My Take

When passing judgment on Wikipedia, I think it is important for us to view it in its proper context. Wikipedia is not meant to be a compilation of flawlessly written, perfectly crafted articles. When such a high threshold for quality is set for content, a bottleneck ensues, leading to an inability to cover certain relevant topics of interest. The three pillars that make Wikipedia so desirable—it’s free, convenient, and unparalleled in the breadth of its information—necessarily lead to a softening of stringent requirements for content quality and review. (You can’t have your cake and eat it too…) As an anecdote in support of the incredible amount of interconnected information on Wikipedia, consider a game that I’m sure most people are familiar with: given topic X and topic Y, start at topic X on Wikipedia and get to a page about topic Y in Z clicks or less. As an example, starting at Harvard Law School I was able to get to Lady Gaga in 4 clicks. (Harvard Law School-> United States->American music->American pop music-> Lady Gaga. Can you beat me?)
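For the curious, the game amounts to a breadth-first search over Wikipedia's link graph. Below is a rough sketch of how one might automate it against the public MediaWiki API (it assumes the `requests` package, fetches only the first batch of links per page with no 'plcontinue' paging or rate limiting, and can fire off a great many requests, so treat it as an outline rather than a polished tool):

```python
# Sketch of the "topic X to topic Y in Z clicks" game as breadth-first search
# over Wikipedia's link graph, using the public MediaWiki API. Simplified and
# request-heavy; an illustration of the idea, not a production crawler.
from collections import deque
import requests

API = "https://en.wikipedia.org/w/api.php"

def links_from(title):
    """Return article titles linked from a page (first batch only)."""
    params = {"action": "query", "format": "json", "prop": "links",
              "titles": title, "plnamespace": 0, "pllimit": "max"}
    data = requests.get(API, params=params, timeout=10).json()
    page = next(iter(data["query"]["pages"].values()))
    return [link["title"] for link in page.get("links", [])]

def clicks_between(start, goal, max_depth=4):
    """BFS from start to goal; returns a path of page titles, or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        if len(path) > max_depth:      # each page beyond the start is one click
            continue
        for nxt in links_from(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Example (slow in practice): print(clicks_between("Harvard Law School", "Lady Gaga"))
```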

I do not understand Wikipedia "hatas." I think it is a losing battle to try to argue that due to a small number of factual errors (3.86 per article, or 162 errors over 42 articles, as determined by Nature), (1) Wikipedia is completely without redeeming value. At a bare minimum, I think one must concede that Wikipedia is beneficial for obtaining background information on a topic. To return to my initial anecdote, this rationale should at least preclude a teacher from scoffing at a student who includes Wikipedia in his or her works cited page. (Note that I have almost exclusively adhered to citing Wikipedia articles for this blog post.) If you are personally unsatisfied with the content of Wikipedia articles, you can ignore them entirely, contribute towards improving the articles, or pursue litigation against Wikipedia (although you almost certainly will be unsuccessful…).

Personally, one of my favorite qualities of Wikipedia is that it provides a consistent format across articles that are (at least to a degree) targeted towards the general public. As a student interested in technology and the natural sciences, I often have to read about scientific discoveries that occurred in the last couple of years; frequently, I have only two sources to turn to: the original research paper and Wikipedia (a testament to Wikipedia's generativity). Bearing in mind the complexity of the topics, I seek to wrap my brain around the concepts by skimming Wikipedia before delving into the highly esoteric research papers. I believe that using Wikipedia in this manner is an appropriate use of the website. While many people possess a take-it-or-leave-it mentality when it comes to Wikipedia, I believe that it is important to apply basic common sense and reasoning when deciding whether to use the website—if you can tolerate 3.86 errors in your reading on the Heisenberg Uncertainty Principle, then have at it; if not, put your laptop away and embark in the direction of the nearest university library.

(1) http://news.cnet.com/2100-1038_3-5997332.html
(2) http://en.wikipedia.org/wiki/Encyclop%C3%A6dia_Britannica
(3) http://en.wikipedia.org/wiki/Wikipedia
(4) http://en.wikipedia.org/wiki/Clickworkers
(5) http://futureoftheinternet.org/static/ZittrainTheFutureoftheInternet.pdf


Odi et Amo: Networked Collaboration – by “Varoon B – YLT2012”

Before the internet was the highly sophisticated, well-structured web of everything that we know it as today, the top ten search results for “Adolf Hitler” returned everything from Hitler’s biography to kitlers, kittens that look like Hitler.  No joke.

As the internet developed, the web—and all the information it contained—was structured.  As the web grew, it became an increasingly attractive resource for people, so they began using the internet.  And then more and more followed suit.  And finally, even those people who used to hate on the internet joined the internet bandwagon.  This phenomenon is described by Metcalfe's Law, named after a brainy hotshot who co-invented the Ethernet (but who also got his PhD at Cambridge Community College).  The idea behind the law is simple.  It basically states that the value of a network increases (really quickly) as the number of users in the network increases.  We can all relate to this trend.  After my friend Florian had to go back to Germany after studying abroad at my high school, he told me to get Skype.  And then my friend George told me that he had Skype, as did my other friend Danielle.  Downloading Skype allowed not only me to contact Florian, but also Florian to contact George and Danielle, and George to contact Florian and me, and Danielle to contact Florian, George, and me, etc.  You get the idea.  The value of the network grows—on the order of n log n or n²—as the number of users does.
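As a quick illustration of those growth rates, here is a small sketch comparing how the number of possible one-to-one connections (the n² flavor of Metcalfe's Law) stacks up against the more conservative n log n estimate as users join:

```python
# How fast a network's possible connections grow as users join: the number of
# distinct pairs is n*(n-1)/2, i.e. on the order of n^2 (Metcalfe's Law); the
# more conservative estimate mentioned above grows like n*log(n).
import math

for n in [2, 4, 10, 100, 1000]:
    pairs = n * (n - 1) // 2
    print(f"n={n:>5}  possible pairs={pairs:>8}  n*log(n)={n * math.log(n):>10.1f}")
```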

Before you dismiss this as some esoteric mathematical phenomenon, it might help to remember that this idea is related to a mind-blowing experiment conducted in the Netherlands.  The city of Drachten, with a population of 45,000 people, is verkeersbordvrij—free of all traffic signs.

If you’ve ever been to India and witnessed first-hand the anxiety that drivers there are subjected to in spite of all the road traffic signs, you may wonder what could have possessed anyone to propose something so radical.

But after two years of observing the unusual policy, the city witnessed a considerable decrease in accidents, and other cities in Europe began adopting similar policies. Perhaps surprisingly, the lack of strict, formal laws didn't result in complete anarchy or dystopia.  The take-home lesson from Drachten is that sometimes, even in unexpected contexts, standards are more effective than rules; given how networks—whether road maps or social networks—grow so quickly in value, this observation is particularly salient when constructing the frameworks upon which we build networks like the internet.  Instead of feeling burdened with tons of laws to abide by, people can respect each other's welfare more effectively if they are liberated from them. If people feel like they are part of a social group—they've got your back, you've got theirs—the Internet Gods do their magic, and things just click.

These occurrences are particularly pronounced in peer production (think free software), which consists of three basic steps: producing, accrediting, and distributing content.  NASA Clickworkers, a project that basically distributed and crowd-sourced scientific tasks, demonstrated that the web maintains an altruistic, consensus-esque culture.  So many people were willing to devote their time and energy to things that didn’t directly benefit them (or at least, not monetarily) that together, their combined computing power surpassed that of the world’s fastest supercomputer.  Dang.  (Sidenote: Check out more distributed computing projects here.  Some of them, like RPI’s project to construct a 3-D image of the Milky Way galaxy, are really cool.)

NASA's Clickworkers project asked volunteers (instead of graduate students and scientists) with computers to demarcate craters on Mars.

Next, our peers have also seamlessly integrated the process of establishing relevance and accreditation into our virtual worlds.  I have yet to purchase an item from Amazon without having access to plenty of customer reviews (of both the product and the shipper if I’m buying a used book).  Amazon also includes that handy “customers who bought items you recently viewed also bought these items” bit that always tempts me into buying more stuff.  All of these services are ways of establishing relevance and accreditation.  The “related items” pitch by Amazon teases you with stuff that is almost always relevant or related to the thing you’re searching for or interested in, and all the customer reviews help establish the legitimacy of the product you’re thinking about purchasing.   These services have been integrated into the internet in more subtle ways, too.  Google’s PageRank algorithm (named after Larry Page, FYI) does this.  Pages that are linked to more frequently among more popular sites are prioritized in Google searches.  Thus, these links embedded within sites are a form of establishing relevance and accreditation.  Good websites will be linked to by other good websites more often, thus constructing a kind of peer-to-peer relationship among the sites we find on Google.
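To make that intuition concrete, here is a minimal power-iteration sketch of the PageRank idea over a made-up four-page web. It captures the "links from well-ranked pages count for more" notion described above; it is an illustration only, not Google's actual production system or parameters:

```python
# Minimal power-iteration sketch of the PageRank idea: a page's score depends
# on the scores of the pages that link to it. Toy graph and parameters only.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                      # dangling page: spread evenly
                share = damping * rank[page] / len(pages)
                for p in pages:
                    new_rank[p] += share
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

toy_web = {
    "A": ["B", "C"],   # A links to B and C
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],        # D links only to C, so C accumulates the most rank
}
print(pagerank(toy_web))
```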

The final step of peer production is distribution, which speaks for itself, though it is worth noting that distribution is cheap online.  Together, they all form a powerful combination.  Slashdot, Reddit, and Yelp all do these things in one form or another.  And so does Wikipedia, the king of online peer production.

Needless to say, Wikipedia is pretty darn awesome.  It's grounded in a spirit of reporting from a neutral point of view, not conducting original research, using verifiable sources, and assuming good faith.  You don't need me to praise Wikipedia for you to appreciate it.  We've all used it, and we will most likely continue to do so.

As a loyal consumer of Wikipedia, I will defend it to great lengths.  I also religiously consult Yelp every time I eat out.  However, I do think there are some drawbacks to commons-based peer production—or rather, to its potential consequences.  True, even though peer-produced projects like Wikipedia have been found to be about as accurate as Encyclopedia Britannica, they could still be quite a bit more accurate, and the Seigenthaler incident is a reminder of this fact.  And true, the Essjay Controversy is proof that such endeavors are not perfect.  Those are not my objections.

Peer production raises the question of peer consumption.  Is it not reasonable to venture that peers—even if loosely defined—are consuming those things that their peers produced?  Perhaps this is a bit of a stretch.  Our peer networks do serve great functions, but relinquishing the asymmetrical allocation of power that characterized the institutional foundation of property also has consequences.  That power, traditionally reserved for the owner, itself performed a valuable service, in the same way that information (Yelp, what place has good food?  Is the service good?) embedded within networks and their collaborative webs does.  The absence of those distributed webs gave those wielding ownership (power) a sense of authority, validity, and legitimacy.  The centralized information economy served a purpose just as the decentralized economy does, but they have different consequences, which are already materializing and are most sinister when we think about our sources of information.

Not to get too meta (as this can apply to Facebook itself, not just to the use of Facebook), but don’t tell me you haven’t ever logged onto Facebook at the end of a long day, only to realize two hours later that you hadn’t read the news that morning and just spent a ton of time (during which you meant to do homework) reading a random assortment of articles that your Facebook friends happened to upload.  A lot of people joke about getting their news from Facebook, and in many ways, that appears undesirable.

“A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa.”  -Mark Zuckerberg

Wait, what?!

Conservapedia, a conservative spin-off of Wikipedia, was founded in 2006 in response to Wikipedia's alleged "liberal bias."  The main page links to other pages including Why does science work at all?, Is science a game?, and The rules of the game.  The website claims that global warming is a liberal hoax and that homosexuality is caused, among other things, by liberal ideology creeping into education and by "psychological tactics used by homosexual activists."  In all seriousness, propaganda has always existed, and it will always exist.  I just fear that, although peer production confers benefits that enhance all of our lives, it may also facilitate the degradation of a robust and transparent information economy, especially as we consume the products of peer production in an increasingly personalized internet age.  I'd guess that the primary consumers of Conservapedia are "peers" of its producers.  No one else would consult it seriously.  Peer production may beget peer consumption, and to the extent that we allow it to supplant our high-quality sources of information, it is potentially damaging.

“It will be very hard for people to watch or consume something that has not been tailored to them.”  -Eric Schmidt, Executive Chairman of Google

Legal Questions in a Cloudy Future – by “Ric B”

We're all headed for the sky


Cloud computing is the future, and it may be here sooner than we think. This past June, Google rolled out the Chromebook, its cloud computing client that ships with ChromeOS pre-installed. The idea is simple: almost everything we can do on our PCs locally, we could also be doing on the internet, on someone else's computer. Why not strip away all of the excess, and let our computers be small, sexy, and sleek while the heavy lifting is done on "the cloud"?


A Google Chromebook: "Nothing but the Web"
...and a whole host of legal uncertainty

We could start with the fact that well-known internet doomsayer Jonathan Zittrain would blow a gasket over the loss of generativity, as outlined in Chapter X of his "The Future of the Internet", where X stands for any chapter number in his book. The minute we start letting someone else tell us what we can and cannot do with our computers, we begin to stifle the very innovation that created the Internet as we know it, a.k.a. the best thing evar. Is he right? Who knows. This topic has been beaten to death in this course anyway. There are other relevant issues at hand, such as privacy, and I'd like to examine some of the relevant laws and legal questions associated with cloud computing before we plunge headfirst into the future.


Privacy

This is the Big Issue. The 4th Amendment protects us from "unreasonable searches and seizures". If we recall from Katz v. United States, one component of what constitutes an unreasonable search is whether or not one has a reasonable expectation of privacy. Should I have a reasonable expectation of privacy with my data on the cloud because a Zoho spreadsheet functions just like the Excel one on my personal hard drive, or, because I'm hosting it on the internet, can I not possibly expect privacy? Enter the Stored Communications Act, part of the 1986 Electronic Communications Privacy Act.


The SCA protects users from warrantless invasions of privacy, or at least it did in 1986. The SCA stems from a time before the cloud, when server space was more expensive and all e-mails were downloaded off of the server and onto your hard drive. As such, the SCA made a distinction between e-mails that were less than 180 days old and e-mails older than this. An e-mail left on the server for more than 180 days, it was thought, had been abandoned, and someone could not reasonably expect privacy in abandoned e-mails. Thus, the government can, under the SCA, freely demand anything off the cloud that is older than 180 days. Makes sense 25 years later with cloud computing, when the cloud has replaced users' local hard drives and people use third-party servers for long-term storage of their data, right? Didn't think so. The good news is, this has been challenged legally, and at least one district court has called the SCA unconstitutional in Warshak v. United States. The bad news is, the SCA isn't the only relevant law at stake…

How the government can do whatever it wants


Enter the PATRIOT Act, a government doctrine which says, in summary, that the government can, with regard to getting information, basically do whatever it wants, whenever it wants, regardless of where the information is stored. That means anything on any cloud is fair game for the government's eyes. In fact, under the PATRIOT Act, somehow, the US government can get information off a server stored in Europe without a warrant or consent. Whoa. It's already stopped one major defense firm in the UK, BAE, from adopting Microsoft's Office 365 cloud service, because they are afraid of the US government stealing state secrets off of the cloud, which is something that could happen under the PATRIOT Act. Privacy being basically a notion of the past with this law, let's move on to other legal issues.


Net Neutrality

The future of cloud computing is dependent on strong network neutrality laws that are not yet in place. If you are relying on the internet to provide functionality for your computer, and the internet becomes restricted, so does the functionality of your computer. For example, imagine that your ISP begins to put out a web productivity suite designed for use on the cloud. Should they choose to prioritize or filter data away from competitors on your Chromebook, not only does your ISP limit what you can do on the internet, they are now limiting the basic functionality of your computer. The idea that you are free to hack a device that you own to make it do whatever you want doesn't really apply when the functionality of your product requires the ongoing participation of your ISP.


Jurisdiction

As we know, jurisdiction already makes things legally thorny on the internet. At any given time, you could be accessing data owned by Australians hosted on Russian servers from your laptop in America, and it wouldn't be uncommon. Right now, however, if a French website gets taken down for violating French laws, it might be upsetting to you if you like to visit that website. However, if your French cloud computing service, where you hold all of your data, gets taken down for violating French laws, it could mean the loss of all of your data. You may be bound by local laws with regards to what data you could be allowed to store on your cloud, effectively limiting what kinds of documents you can have. For instance, while in America the First Amendment gives you every right to deny the Holocaust, you may not be able to store your papers saying so on cloud services in Germany. In fact, a paper you had been writing, editing, and storing on a German cloud could suddenly vanish, and you'd have no way of getting it back. Scary.


In summary…

The Internet is a complicated landscape legally. Cloud computing has many advantages, like making your data more portable and allowing your computers to be more powerful. While Google would have you believe that using Google Docs is just like using Microsoft Word on your computer, and it may feel that way on the surface, legally the two are worlds apart.


...we really, really hope


In an interview two years ago, CEO Eric Schmidt was asked the question “People are treating Google like their most trusted friend. Should they be?”. His response? “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.” Using cloud computing involves not only entering a complicated legal framework, but trusting your 3rd party cloud source, perhaps the way that Hoffa trusted Partin. For the time being, I don’t use GMail, and my programs, e-mail and data are on my personal hard drive. I don’t see that changing any time soon.

Final Project: Defamed (Part 1) – by “John G”

As a suite, we decided to write a rap as an educational piece, lecturing small children about the risks involved in hateful speech and defamatory claims against an individual/others. The introduction begins with a terse explanation of defamation in U.S. law and common defenses in court. Transitioning into the topic of defamation per se, the rap speaks about the difference of defamation per se as compared to regular defamation, specifically, that damages are assumed for defamation per se.

Utilizing celebrity cameos, the rap introduces the four specific instances of defamation per se and continues to provide detailed circumstances under which each could be found applicable or a notable exception. Explicitly, the four categories are allegations or imputations injurious to one’s profession, of criminal activity, of loathsome disease, and of unchastity, which is duly noted in the rap’s chorus.

In addition to the four instances of defamation per se, Internet libel laws are also discussed as a means of exhibiting the relevance of defamation laws in modern culture and technology.

We aptly decided to construct this project as a rap song in order to cast the subject matter of defamation into the medium of aggressive hip-hop, a genre which is often plagued with defamation within its context, thus creating a parody of the genre and of defamation itself – allowing us to discuss and commit speech acts that might otherwise be construed as defamatory.

With much serendipity, we invited many famous artists from the hip-hop industry to spit their game on this track. In a surprising turnout, we were able to have featured performances by The Ying Yang Twins, Chris Ludacris Bridges, Nicki Minaj, Rick Ross, Eminem, T-Pain, Dr. Dre, Jamarius Brahamz, Gangreeeeeeen, and Notorious B.I.G. (posthumously). Unfortunately we could not produce a promotional video due to scheduling conflicts and the fact that one individual is currently deceased. Much to our surprise, our producers have signed a contract for another track to be released in the near future. Follow us on twitter @twitter.com/FratCity.

Here is the link to the song: http://soundcloud.com/defamed-part-1/defamed-part-1-final

Jamar Bromley
Matthew Prewitt
John Greenawalt

Final Project: Search and Seizure Cases Illustrated – by “Will P”

Some of the readings for the search and seizure cases were rather dense, so I made some animations to get across the major points quickly and memorably.  I’ve completed “overviews” for two cases so far and hope to get more in before the final deadline.

Link to YouTube channel:

http://www.youtube.com/user/searchplusseizure?feature=watch

Cases:

Kyllo v. United States

Katz v. United States


-Will Pitler


The Internet User: More Than Just a Troll – by “Leticia”

The Power of the User

In the past there was a huge disconnect between the average person on the street and their sources of information. When that gap began to close and people began producing information on the internet, everyone was immediately cautioned not to believe anything they read unless it was said or written by a verifiable source (read: professionals). How could a random, unnamed person compete with Dr. X, who received their PhD after Y number of years of studying and doing research at University of Y?

In November of 2000, NASA set out to see if this divide was appropriate. Clickworkers was a project that had the public identify and classify the age of craters in Mars images from the Viking Orbiter. These images had already been analyzed by NASA scientists, but the agency decided to run this small experiment to test two things: 1) can the public handle this level of science and, 2) does the public want to get involved? Their findings would revolutionize the user's role on the internet as more than just a recipient of knowledge. Just six months after launch, with over 85,000 visitors, NASA analyzed the public's data and concluded that the task performed by Clickworkers "is virtually indistinguishable from the inputs of a geologist with years of experience in identifying Mars craters" (Benkler).

Wait, wait, wait…did NASA just prove that internet users aren't just out there looking to troll and that the internet is more than just a medium for porn?!! Sure, the average user is clearly not smarter than the space geologists at NASA, but there is clearly knowledge in numbers. Internet users, when provided with a platform and easy-to-use tools, are a force to be reckoned with. This small project set the wheels in motion for one of the most controversial yet most used tools of our generation.

The Rise of Wikipedia

Jimmy Wales's lifelong dream was to create an online encyclopedia. He initially set out to make Nupedia the old-fashioned way: by having hired experts write and review the articles.

In an attempt to lessen the burden on the experts, Wales launched a Nupedia wiki which was opened to the public. Just like in NASA's Clickworkers, what happened next completely shocked everyone involved. Within just a few days of its launch, the Nupedia wiki, or Wikipedia as it was dubbed, outgrew Nupedia. Wales, though initially worried about the validity of an encyclopedia created by the people, saw the potential and ran with it. And rightfully so…

The Five Pillars of Wikipedia

In order for any egalitarian community to work effectively, there has to be some common ground. Though the members of the Wikipedia community are essentially strangers to one another, it still functions because everyone agrees to the terms set out by the Five Pillars of Wikipedia:

1. Wikipedia is an online encyclopedia

2. Wikipedia is written from a neutral point of view

3. Wikipedia is free content that anyone can edit, use, modify, and distribute

4. Editors should interact with each other in a respectful and civil manner

5. Wikipedia does not have firm rules

The first three principles aim to ensure that users do not stray from the original intent of allowing Wikipedia to be a source of information comparable to professionally created encyclopedias like Britannica, while the fourth is there to make sure that these strangers do not sink into chaos and the extreme cruelty that normally results from internet anonymity. The last principle is a beautiful reminder that although there is an original creator of Wikipedia, this is essentially YOUR project as much as the next editor's. There are no rules because the people who are editing have good intentions. This is information for the people, by the people.

Wikipedia has changed the way in which people interact with information. For better or for worse, the general public has subconsciously processed these principles and judges what it reads based on the expectation that Wikipedia editors will not allow vandalism and faulty information to stay up for long. There is now a standard that one must adhere to when writing and editing Wikipedia articles. If this standard is ignored, Wikipedia users will catch the error and self-correct within minutes, hours at most. The general public no longer takes in information as written and demands that, at the very least, this standard of credibility and accuracy be attempted.


Is Academia a Thing of the Past?

Time and money on education or minutes on Wikipedia at no cost?

Before giving up hopes and dreams of entering this exclusive ranking, think of the importance of having true professionals. True, millions of users contributing small amounts of time is cool for the layman, but we still need the professionals to provide the primary and secondary sources that are necessary for the accuracy of Wikipedia. Projects like Wikipedia and NASA's Clickworkers still need people who know what they are doing behind the scenes. Rather than putting professionals in opposition to users, we could start a great collaboration — free and motivated "interns" working alongside professionals to make the world a more knowledgeable place. In doing so, the spread of knowledge is no longer a one-way street controlled by the elite few.

But regardless of this beautiful image, the fear of the potential doom of academia and of the professional markets that depended on information being privately owned has created much criticism of this open-source encyclopedia. As Robert McHenry, a former editor-in-chief of Encyclopedia Britannica, claims, "Wikipedia is unreliable because it is not professionally produced." Professors are also against the growing use of Wikipedia because of the threat it poses:

“Why do professors hate Wikipedia so much?”

Many have spread the notion that since Wikipedia is user-created, it absolutely cannot be accurate. But NASA's Clickworkers project, as well as the self-correcting system held together by the Five Pillars of Wikipedia, has shown after much analysis that user-produced does not mean inaccurate or a "shallow source of information." We have yet to move into the era in which Wikipedia is an acceptable source in academic papers, but I have a feeling we are not far from it now that it has become much better at regulating and expanding itself.

The Dangers of Wikipedia?

Dangers of the distribution of knowledge for the people by the people? You must be crazy!!! As wonderful as it is that we now can instantly look up information that is fairly accurate, have we created a generation of people unable to retain information? Are we now so dependent on Wikipedia that we no longer feel the need to commit anything to memory? As this XKCD comic suggests, has it all gotten out of hand? It is still too soon to even begin to look at the effect of Wikipedia on society, but these are definitely dangerous scenarios that are not too far out of the question. A little support is good, but complete dependency on any one source of information can lead to disastrous outcomes.


An extension or a crutch?