The Coming Obsolescence of Your $200,000 Degree – by “Anthony T”

The tweet pictured above was posted earlier this year by Stanford professor and Google engineer Sebastian Thrun. When not busy heading up development of Google’s self-driving car, Thrun has spent the past few months doing something equally intriguing. Along with several of his colleagues at Stanford, he has been part of an experimental program in e-learning that could one day bring Stanford-quality education to the masses, and may even give your $200,000 degree a run for its money.

A brief history of e-learning

In September 2002, after years of planning, MIT made 32 of its courses available for free online in a pilot of the OpenCourseWare project. Today, OpenCourseWare has grown to include 2,080 courses, 42 of which include full video lectures. In the years following its launch, other institutions, including Yale, were quick to follow OpenCourseWare’s lead, launching similar e-learning initiatives. Apple’s iTunes U now acts as a central discovery and distribution platform for over 350,000 higher education lectures from around the world. The effects of this deluge of higher education material fall into two categories: the potential it holds to improve traditional education, and the promise it holds for independent learners.

To get an idea of how the availability of open courseware could improve traditional education, consider the launch of Floating University earlier this year. Charged with sharing the expertise of great minds across institutional boundaries, the debut of Floating University was notable in that three schools, Yale, Harvard, and Bard, allowed students to enroll in the online course for credit, albeit with supplementary work at their home institutions. With this as a model, it is easy to imagine universities in the near future cross-licensing their courses, allowing new or improved courses to be introduced in video form at schools for a fraction of the cost of developing a course from scratch. Of course, Floating University’s courses are not exclusive to the three institutions named above; for $500 anyone with an Internet connection can purchase access to the video lectures. This raises an important question, however: beyond the joy of learning, what exactly are customers receiving for their money? Or more aptly, what are customers not getting?

The problem with independent learners

While the open courseware movement certainly holds potential for expanding and improving traditional education, a quick glance at its usage statistics reveals that its greatest contribution has been, and will continue to be, the encouragement of independent learning. As of 2010, 43 percent of OpenCourseWare’s users were independent learners. For Open Yale Courses that number was nearly 70 percent. But what exactly are these students receiving, and what should they be receiving? The success of these initiatives makes clear the immense unmet hunger for education in our country and around the world. These independent learners take it upon themselves to expand their education, but at the end of the day, though they may be intellectually enriched, have they really been provided with any tools to better their standing in life? Why should we be rewarded for our intellectual pursuits and not the users of Open Yale Courses when, statistically speaking, it is likely that some of them have mastered the material just as well as, if not better than, many of us have? For as hard as independent learners work, at the end of the day they are left with no proof of their growth. As a society, it is in our best interest to expand our limited view of what constitutes educational achievement. We need to be open to the development and acceptance of new signals and certifications of education. This is no easy task, however, and the problem is even greater for new educational players, such as Khan Academy, that lack ties to the world of traditional education. The educational experiments taking place today in Palo Alto, and across the world via the web, are providing a path towards breaking traditional education’s stranglehold on educational certification.

Palo Alto, Everywhere

Earlier this year, Stanford made an announcement that took the independent learning community by storm. As part of its Stanford Engineering Everywhere initiative, three computer science courses would be made available for free online, with a twist that made OpenCourseWare appear primitive. Led by Professor Sebastian Thrun and Google’s Director of Research, Peter Norvig (who literally wrote the book on artificial intelligence), the goal of the project was to “change the world by bringing education to places that can’t be reached today.” Inspired by the work of Khan Academy, the pair wanted to take e-learning beyond simple video lectures. In partnership with Know Labs (a start-up co-founded by Thrun), the pair developed video lectures peppered with interactive quizzes that are then worked through by the lecturers. In addition, the courses feature weekly homework assignments, a midterm, and a final, all of which are actually graded and returned to students. Registration immediately shot through the roof, with over 160,000 students registering for Thrun and Norvig’s “Introduction to Artificial Intelligence” course, and tens of thousands of additional students registering for database and machine learning classes run on the same model.

Stanford’s risky little experiment has already been declared a success by many, including Professor Hal Abelson of MIT, who helped develop the original OpenCourseWare nearly a decade ago. Stanford has since announced the availability of 16 new courses for Spring 2012, including several that expand beyond computer science to fields such as medicine, engineering, and entrepreneurship. Still, there is one element of the experiment far more important than the large registration numbers and impressive technological innovations: students enrolled in the courses receive grades. Not Stanford credit, mind you, but a certificate of achievement showing how the student fared against actual Stanford students enrolled in the course in Palo Alto. This changes everything.

The road ahead

While Thrun’s tweet seems to imply that Stanford may soon package the courses and sell them for actual degrees, a development that would itself be revolutionary, the larger implications of Stanford’s experiment have already been set in motion. By issuing students certificates of achievement that clearly document their relative performance, Stanford has pried open the iron grip that university degrees have held as the sole indicator of successful higher education. In doing so, it has paved the way not only for other universities to offer similar certifications, but also for players not tied to traditional education to develop certifications of achievement that hold some level of legitimacy in the public eye. That’s not to say that come graduation, we will need to fear online certificates one-upping the ‘YALE’ at the top of our transcripts. Still, it’s not difficult to imagine a near future in which high evaluations on a suite of Stanford online courses and a handful of interesting side-projects could allow a degree-less individual to land a technical interview at Google.

Peter Thiel claims that we’re in a higher education bubble, and that the disruption of education is in our best interest. I’m inclined to agree. The expansion of high-quality e-learning will bolster our economy. It will help to spur innovation that will keep us competitive on the world stage. It has the potential to finally level the playing field and to genuinely improve lives. E-learning is not going to significantly devalue our $200,000 educations anytime soon. But I look forward to the day that it does.

Generativity and Freedom – by “bashcraft”

Asimov’s short story, “Profession,” gives a unique take on the effectiveness of education systems and the intrinsic human right of freedom; specifically, the freedom of choice.

His story describes a world where children are given the ability to read at age 8, and 10 years later they are assigned a set of tapes that will determine their profession, removing any freedom people have regarding their livelihood. In “Profession,” the government champions this system because it protects citizens from being overwhelmed by society or destroyed by themselves, and it offers a smooth transition to adulthood. On his Education Day, the day the government gives you the knowledge of your profession, a doctor tells George, “You could be devoured by a subject and if the physical make-up of your brain makes it more efficient for you to be something else, something else you will be.” In essence, the problems the protagonist, George Platen, has with this system offer commentary similar to that of the readings on appliancization and generative technologies: with no freedom of choice, will the ability to create, think, and advance as a society slowly whittle away into nothing? Will the “smarter” members of society continue to maintain a monopoly on the preeminent technologies, forcing the rest of the world into a stagnant purgatory?

To further the comparison between the government in “Profession” and the dangers of non-generative technologies produced by vanguard companies like Apple, I point to Tread Lightly When Embracing the Mac App Store by Nicholas Deleon. Deleon warns of Apple’s vendetta against generativity, the capacity of a system to allow room for future changes, on its App Store platform; Asimov warns against the lack of generativity in an educational system that leaves children with a defined skill set rather than a higher intellectual capacity. Deleon asserts, “My concern, I suppose, is that an over-reliance on, and an over-veneration of, the Mac App Store could lead to an unnecessary, if not dangerous, homogenization of Mac software.” Before I continue identifying parallels, it is important to highlight that there are those who feel Mac-produced software is simply the best and safest out there, and that there is nothing wrong with this homogenization. In fact, Steve Jobs celebrated it at the release of the iPhone: “We define everything that is on the phone… You don’t want your phone to be like a PC. The last thing you want is to have loaded three apps on your phone and then you go to make a call and it doesn’t work anymore.”

This statement struck me as eerily similar to the one the doctor made to George on his Education Day: he claimed the government was protecting its citizens and making life easier by assigning professions. While there may be some validity to both Jobs’ and the doctor’s comments, it is by no means worth sacrificing the freedom generativity enables.


The Broader Implications of Appliancization – by “Benjamin H”

Appliancization obviously has implications for how we interact with the internet; assuming we use tethered devices, we necessarily become consumers without the ability to generate changes to the technologies we use, and it’s hard not to think that some innovation will be lost without the tinkering that has been so typical of the internet thus far. Some think that anxiety is overblown or irrelevant, though. Is it really a problem if we lose the tinkering culture that was in part responsible for the growth of the internet, if losing it means greater security and, for many, more intuitive functionality? Perhaps the debate about generativity extends beyond technology to more philosophical issues like our economy, and texts like Asimov’s “Profession” are also bound up in even greater anxieties about our government that are no longer relevant. Tim Wu, a media historian, pushed back against Zittrain’s arguments about appliancization by highlighting the broader implications of the debate. Although Zittrain is concerned about technology, what really underlies the debate are questions of how we want to envision our economy:

The [internet and the media industry] are in the midst of an epic struggle between two competing economic systems that are almost as different as communism and capitalism. The first, old-school scale economics, is behind most of the media empires of the last century, from NBC to Paramount, and it still powers firms such as AT&T and News Corp. It is a model in which bigger is better, products are standardized, and integration is massive. It is the model that ruled American media–and indeed American industry–for most of the twentieth century. Its rival is a model premised on the economics of decentralized innovation and content creation that we associate with the Internet–loosely, the generativity that Zittrain is talking about. (http://www.nationalreview.com/agenda/244145/chris-anderson-appliancization-internet-reihan-salam)

At some level, then, appliancization is a debate about how our economy should function, and it seems like tech companies like Apple are trending more towards traditional media models. Many industries run on a model closer to that of the traditional media industry, and part of the anxiety surrounding appliancization may simply be a desire to preserve the internet’s distinctive culture. But perhaps there’s no reason to think these two systems are mutually exclusive. HTML5 and other technologies might allow people to make iPads and other tethered devices generative.

Asimov’s “Profession” is an interesting example of more general concerns about the loss of our freedom of choice and the ability to educate ourselves. If we can’t choose our paths and learn creatively from scratch, perhaps education will become commoditized like any other product. Asimov’s specific concerns aren’t relevant since we can’t, unfortunately, feed ourselves information tapes, but he seems to be concerned more generally with the loss of our freedom of choice and the notion that some higher authority can construct our choices for us. “Profession” was written in 1957, well into the Cold War, and part of his concern seems to stem from the then-rational fear of Soviet-type domination, as some reviewers have noted (http://www.helium.com/items/1994997-profession-by-asimov). As the doctor tells George, “Surely you know that being interested means nothing. You could be devoured by a subject and if the physical make-up of your brain makes it more efficient for you to be something else, something else you will be. You know that, don’t you?” This kind of concern for the “efficiency” of society destroying the individual is exactly the kind of Cold War stereotype of the Soviet Union that would have been repeated in 1957. Perhaps it’s just me, but the fact that the evil doctor has an Italian name (Antonelli) and the historian’s name, Ingescu, sounds like the famous Romanian dictator Antonescu indicates at least some kind of post-World War II anxiety about the loss of freedom under undemocratic regimes.

Even though Zittrain’s argument against appliancization is obviously not a Cold War argument, concerns about loss of freedom reflect a similar aversion to any authority impinging on personal choice and action, just like Asimov’s concern. I’m skeptical that these concerns are legitimate today. Even if we can’t maintain generativity with HTML5 and other technologies, it would seem to me that security concerns are more of a threat to our freedom (through undue invasion of privacy, perhaps) than the threat to our ability to innovate. Moreover, since a new generation of programmers will have to take over the app-making process at some point, the industry has to maintain some way for people to learn how to innovate, even if it’s only in their established fashion. Perhaps programming will simply become a more specialized field like medicine and will lose its amateurs, but security concerns may currently outweigh the benefits of amateur innovation.

What’s all the fuss about? – by “Dan T”

I am often impatient with arguments that assume we live in a free-market world, because we don’t – people don’t always think or behave rationally, and competition isn’t always perfectly fair.  But in some cases, imagining that the way things are is just a reflection of market forces can be helpful in understanding a situation.  What if, rather than being marketing victims duped by Apple, Google, or any other software company promoting software-as-service, tethered appliances, or locked-down devices (which you might believe to be the case if you read the comments on tech blogs), people use these services because they want them?  Call me crazy, but maybe the shift toward non-generativity stems from the fact that non-generative devices suit people’s needs, or at least do so better than generative ones.

Apparently, the good guys are now the bad guys

I’ll suggest that for the vast majority of the population, that is true.  Grandma doesn’t need a generative PC.  Grandma needs a device that will help her get the job done, even though she has no idea how a computer works.  It needs to be intuitive and safe, so that even when she clicks the wrong button or performs the wrong procedure, nothing truly bad happens.  She doesn’t want to deal with software that is potentially ineffective or malicious.  The fact that both the number of programs she can run and the number of in-program adjustments she can make are greatly limited is probably of no concern to her.  She just wants secure software that gets the basic job done.

I’ll also suggest that where we are now is a fairly normal and expected place on the timeline of a new technology, rather than the splitting-train-tracks situation depicted on the front of Zittrain’s book.

Dramatic, much?

New technology often starts out generative and ends up non-generative.  The automobile, for instance, started out much more generative than it is today – around the turn of the twentieth century, there were hundreds of manufacturers tinkering around with their machines.  While it wasn’t quite as generative as the PC or the Internet, the engine was fairly accessible and the automobile owner could make his or her own changes as desired.  That’s still true today, but to a much lesser extent.  Auto technology has improved to such an extent that most car owners would rather take their car to the mechanic than look under the hood themselves, and manufacturers are increasingly finding ways to monopolize repairs and prevent owners from making their own adjustments.

So what happened?  Around the turn of the century, everybody and their mom was trying their hand at making cars.  This led to a lot of innovation, and some problems: breakdowns were frequent and car safety was a novel concept.  The industry started cleaning itself up around 1930 as people became concerned about reliability and safety.  The number of manufacturers greatly decreased and names like Ford and Chrysler came to the forefront.  These days, most people would not even consider buying a car from an amateur.  But there are still isolated tinkerers out there.

Am I crazy for thinking that what’s happening to software is essentially the same thing?  When a new type of technology shows up, it’s often basic enough that anyone can get involved.  Lots of people do, and some of what results is awesome and pushes the frontiers of the technology.  And some of it sucks.  This vanguard is not particularly perturbed by the suckiness, but that all changes once the technology is spread to people who aren’t interested in the technology itself, but rather the benefits it might carry for them.  These people don’t want potential.  They want current value.  They are interested in what the technology can do, not what it could do.  Obviously we’ve seen this with the PC and with the Internet.

To cater to this population, hackers started making software and hardware for non-hackers.  It was user-friendly and reliable, and not as generative.  In fact, the more user-friendly and reliable the software, the less generative it was.  This is essentially unavoidable.  Most software is only useful if its capabilities are spelled out and finite.  Make it any more generative than that, and it’s easy for the user to get overwhelmed.

Seriously, this is the population we're dealing with here.

But the reason that we shouldn’t get concerned about this shift is that it represents the expansion of software (or any technology) into a new segment of the population rather than a transformation of the technology itself.  The hackers still want generative software and hardware, and they’re still getting it (more on this in a second).  The people who want a reliable experience are getting that.  So everyone wins.  It’s not a zero-sum game, because the software industry is not monolithic.  What is happening now is essentially consumer discrimination – Apple and other giants are writing software for n00bs, and smaller developers (but also to some extent the giants) are writing software for hackers.

Of course, Zittrain gets this (he quotes critic David Post: “a grid of 400 million open PCs is not less generative than a grid of 400 million open PCs and 500 million locked-down TiVos”).  He has two main concerns: first, that through market dominance non-generative software will eventually swallow generative software; and second, that a generative software world is more advantageous than one that is primarily non-generative.

I don’t think the first consideration is as threatening as Zittrain makes it out to be.  The fact is that the software market is very, very large, and there are many, many niches.  Since hackers write software, and hackers are the kinds of people who will generally want generative and unlimited software, you have to construct a pretty convoluted hypothetical in order to get a world where literally all software comes through corporate giants.

The second concern should be taken very seriously though.  How important is creativity?  Is it more important than efficiency?  Is it more important than security?  Is it more important than giving people what they, on some level, want?  These are obviously big questions.

Come on, Ben... tell us how you REALLY feel.

Ultimately, I just can’t side with Zittrain here.  The primary argument for bolstering the generativity of software is that the advantages it provides in innovation outweigh the negatives.  That argument has a lot of merit, but only for some segments of the population.  How much innovation is Grandma realistically going to produce?  Innovations are always driven by a very small portion of a given population: the curious, creative types, the inventors.  Most people just consume and don’t innovate, and that’s okay.  The 400 million open PCs referred to by Post are probably just about as generative as their most innovative 25-million-PC subset.  Most of those PCs are, generatively speaking, dead weight.

1% of the computers have 99% of the generativity... sorry Grandma.

So what’s important is that these different groups have different environments.  The inventors want and need a sandbox to play around in, and if given one, they’ll continue to push the frontiers, develop new products, and make cool new toys.  The consumers want and need the toys.  They don’t want the sandbox; they’re not in it for the mucking around.  If you give them one, they’ll get frustrated, they won’t do anything, and they might even get hurt.

Given that, we don’t want a purely generative software environment.  We want a segmented environment in which the software that is generative is incredibly generative – it is essentially unbounded in its capabilities (with all of the associated problems and complications) – and the software not in that category is safe and efficient first, and generative second.  In this environment, people can choose where they belong, depending on the task, and adjust as necessary.  In other words, we want an environment that is roughly the result of market forces.  Most purchasers will look for safe and non-generative software, so most software will be safe and non-generative.  And enough people are interested in generative software to keep that niche healthy and the innovations flowing.

In order to bring about and maintain this kind of environment, generative software needs to be absolutely sacred.  We’re putting all of our innovation eggs in one basket, so the generative environment needs to allow for as much creativity as possible.  Jailbreaking most devices should be legally protected.  Of course, the developers of those devices can void their warranties and withdraw support, but if users want to experiment with their hardware or software, they should have the right to.  You can purchase a tethered device if you want to, but you ought to be able to “de-tether” it at any time (manufacturers can determine whatever non-legal consequences that will entail from their end).

Future law will require humane treatment of anthropomorphic electronics.

While these changes would definitely help, I think we’re doing just fine now, all things considered.  Yes, software, the PC, and the Internet are all becoming less generative for most of the population, but until they become less generative for everyone, I won’t be sweating bullets, and on the other hand, the security that comes with non-generativity may be just what Grandma is looking for.

Sandboxing Generativity – by “Misbah U”

Earlier this month, Apple sent an email out to its developers delaying the implementation of sandboxing for Mac App Store submissions, stating:

The vast majority of Mac users have been free from malware and we’re working on technologies to help keep it that way. As of March 1, 2012 all apps submitted to the Mac App Store must implement sandboxing. Sandboxing your app is a great way to protect systems and users by limiting the resources apps can access and making it more difficult for malicious software to compromise users’ systems.

Simply put, sandboxing serves as a security enhancement. It’s a container in which an application is allowed to function, unable to perform any tasks or access any resources that would mean transcending the container’s boundaries. Essentially, one application is prevented from affecting another in any malicious way (e.g. it’s prevented from potentially using UNIX commands to delete files on your hard drive without your knowledge, or from attempting to extract passwords and share them). Conceptually, this sounds fantastic and a great deal for the end user, but there are other consequences that should also be taken into account when it comes to sandboxing and its implementation. Often, applications need a certain level of outside access in order to do whatever they have advertised they do. For example, a photo editing app wouldn’t be of much use if it couldn’t even access your iPhoto library. To address this, Apple allows for “entitlements” that app developers can request for the applications they submit for Mac App Store approval. A quick glance shows that the complete list is fairly straightforward (there are a couple of temporary exemptions), although if you happen to need access to hardware over something other than USB (e.g. Thunderbolt, FireWire, or Bluetooth), you’ll be hard-pressed to find a way to do so.
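
To make the entitlement model a bit more concrete, here is a minimal sketch in Swift of what the sandbox boundary looks like from inside an app. It assumes the code runs in a Mac app signed with the com.apple.security.app-sandbox entitlement, and the two file paths are purely illustrative; this is a sketch of the general idea, not Apple’s implementation.

```swift
import Foundation

// Assumption: this code runs inside a sandboxed, signed Mac app.
// In a sandboxed app, NSHomeDirectory() points at the app's private
// container rather than the real user home directory.
let insidePath = NSHomeDirectory() + "/Documents/allowed.txt"   // inside the container
let outsidePath = "/Users/Shared/not-allowed.txt"               // outside the container (illustrative path)

// Writing inside the container is permitted.
do {
    try "ok".write(toFile: insidePath, atomically: true, encoding: .utf8)
    print("Wrote inside the container:", insidePath)
} catch {
    print("Unexpected failure inside the container:", error)
}

// Writing outside the container should be denied unless an entitlement
// covers it (for example, com.apple.security.files.user-selected.read-write
// after the user picks the file in an open/save panel).
do {
    try "blocked?".write(toFile: outsidePath, atomically: true, encoding: .utf8)
    print("Wrote outside the container; some entitlement or exception allowed it.")
} catch {
    print("Denied outside the container, which is what sandboxing intends:", error)
}
```

The point mirrors the photo-editing example above: the app only regains access to resources outside its container through an explicit entitlement or a user-granted exception, never by simply reaching across the boundary.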

The mobile world has generally always had sandboxing. Apple’s move to apply it to the Mac world seems to imply that engineers have thus far found it such a great idea that they wish to see Mac users benefit from it as well. Yes, as I said earlier, at the outset sandboxing presents a method for furthering security. Unfortunately, as we’ve seen in the jailbreaking community time and time again, code isn’t perfect, and it’s only a matter of time before someone finds a loophole and continues the constant game of cat and mouse. Here are just a few areas where sandboxing may fall through:

  1. Apple seems to be relying on its ability to implement the code that enforces entitlements perfectly. If Apple could write perfect code, then I would think it would have been able to immediately fix the PDF and TIFF submodules so that they were exploit- and bug-free. Unfortunately, code is imperfect.
  2. For apps not sold in the Mac App Store, there’s nothing requiring them to use entitlements. And even if there were such a requirement, malware could just be distributed in applications whose entitlements cover basically everything the system can do.
  3. This brings us to the following: Apple has mentioned that it will look closely at apps requesting a lot of entitlements. Again, I can see how that might make sense, since it might look suspicious if an app requests sixteen entitlements, but at the same time, looking closely at such applications can be seen as an admission that entitlements do not work on their own, since it leads to what many would consider code auditing.

To further emphasize the idea that sandboxing is far from foolproof: earlier this year, security researcher Charlie Miller submitted an application to the iOS App Store, which was audited and approved. The application exploited a flaw in Apple’s iOS software to run arbitrary code, allowing it to download new, unapproved commands onto the device and execute them at will. For example, he demonstrated making the phone vibrate or play sounds, stealing user photos, reading contacts, and so on. In response to this news, Apple revoked Miller’s developer account.

At this point, I think the real question comes down to whether Apple’s latest move is primarily about security or about Apple’s intent to exert further control over what’s installed on users’ desktops. To many, sandboxing is seen as the beginning of locked-down desktop machines similar to iPhones and iPads, where any development not sanctioned by Apple is effectively diminished.

After Apple sent out the updated timeline for sandbox implementation on applications, Jonathan Zittrain, the author of The Future of the Internet and How to Stop It, tweeted:

Zittrain: Apple one step closer to locking down PCs as predicted in The Future of the Internet at http://t.co/yYhvdrEE ; see http://t.co/e4kmgQWi

What Zittrain is referring to here are the predictions found in The Future of the Internet, where he begins by discussing the concept of generativity, or the openness of a system to new and unplanned changes from its users. Today we have an open internet and open computers, where you can run any programs and protocols you want and connect to anyone you want. This has been tremendous in establishing what he describes as a generative system. It is important to note, however, that what may seem like a strength can also serve as a weakness for the very same system. Computers can get viruses and crash. They can start to slow down. Thus, the very same force that attracted so many users in the first place can now lead users to go for the safer options available, systems they can be assured will work reliably. Hence Zittrain’s tweet.

One can argue that Apple seems to be closing off the Mac App Store and forcing users to choose between the “open” and “closed” worlds that Zittrain mentions. Apple justifies sandboxing by appealing to users wanting their Macs to run more smoothly. From Apple’s perspective, any users who want to run unsandboxed applications found outside of the Mac App Store can still do so. Thus, Macs can function as computers that support safe and unsafe modes, one focused on increased security, the other on increased generativity. Because of this, there have been mixed reviews when it comes to Apple’s choice to implement sandboxing: some support Apple 100 percent, while others fear the dumbing down of the Mac App Store.

Furthermore, if you look at this situation of sandboxed applications in the Mac App Store vs. unsandboxed applications found outside of the Store from the developer’s perspective, I think it’s highly unlikely that, over time, developers who are at all concerned with market share will choose to create two different versions of an app, with one including extra features that the sandboxed version could not have. This brings us back to the concern that sandboxing may be dumbing down the set of features available even to those who choose not to purchase through Apple. Hence, at this point, there seems to be a lot of uncertainty surrounding what a sandboxed future means for both the distribution of applications and the applications themselves.

Overall, I think Apple’s decision to postpone until March is reason to be somewhat hopeful that developers may play a greater role between now and then in the final implementation of sandboxing; Apple seems to recognize that the concept isn’t exactly ready yet. At the end of the day, however sandboxing is executed, I don’t think there is any doubt among either developers or users that this is just the beginning of Apple’s overall plan to “iOS-ify” OS X.

Accountability and the Internet – by “William K”

A Slanderous Thing Happened on the Way to the Forum

As we all know, Section 230(c) of the Communications Decency Act of 1996 made it so that websites couldn’t be sued for hosting user comments and submissions in the same way they could be if they were hosting their own original material. This made sense at the time for a variety of reasons. A website couldn’t be expected to monitor and filter every incoming post. Additionally, holding a website accountable for material posted by a third party because the website made some effort to filter out objectionable material would discourage websites from making any effort to filter at all. However, what this legislation and the thinking around it failed to predict was the emergence of websites that exist solely to spread gossip and other malicious postings. While there are sites and forums that host a combination of useful and derogatory messages, sites have emerged since the original legislation that prosper solely by providing an unfettered forum for such messages. Sites like AutoAdmit, CollegeACB, and others are able to thrive simply because they promote gossip and unsubstantiated rumors.

In Stratton Oakmont, Inc. v. Prodigy Services Co., the court ruled that Prodigy was liable for postings made on its website because it acted in an editorial role by attempting to remove some messages. The controversy caused by this decision led to the passage of Section 230(c), granting the provider of an internet service immunity if the information was “provided by another information content provider.” This seemed logical because companies like Prodigy weren’t dependent on a reputation as a destination for defamatory information or postings. In contrast, sites like AutoAdmit, CollegeACB, and JuicyCampus essentially advertised themselves as places to post and read defamation. They essentially take Section 230(c) as carte blanche to provide the atmosphere for harmful behavior without providing any of it themselves, thus rendering themselves immune.


Reputable Journalism

To me, these sites have a strong parallel to Napster, Kazaa, etc — their entire appeal is derived from the fact that they let users submit and share content that is essentially illegal. Had either Napster or AutoAdmit originally committed the copyright infringement or written the slander instead of merely providing an avenue to do so, it would have been obvious that they should be found legally responsible. However, because they’re simply the conduit by which illegal material is disseminated, a much grayer area emerges. Much like Kazaa, where a vast majority of files were found to be copyrighted and thus illegal to share, gossip sites rely on slanderous rumor and gossip to be successful. They advertise themselves as places where illegal activity is encouraged, but gossip sites still hold immunity because of the provisions of Section 230(c).

An interesting parallel between the two types of sites can be found in the way they evolved. After Napster was shut down, Kazaa sought to fill the void. Kazaa bought ads so that when someone searched for “Napster Replacement” or similar terms, they were brought to the Kazaa download site. In this case, these actions were considered “Inducement to Copyright Infringement.” Somewhat similarly, after JuicyCampus closed, the founders of College ACB made a deal with the founders of JuicyCampus to redirect traffic from Juicy to College. Why is this not considered “Inducement to Defamation” or “Inducement to Libel?” Peter Frank, CollegeACB’s founder, was clearly trying to mimic the success of Matt Ivester, JuicyCampus’s founder.

They Even Pose Kinda Similarly

Obviously, there is a distinction between gossip sites and P2P sharing sites. P2P sites are much easier to prosecute because either something is copyrighted or it’s not; there’s not the same gray area over whether the law has been broken as there is with the harder-to-define crime of libel. However, the way that gossip sites solicit, encourage, and depend on defamation draws inarguable parallels with P2P sites and copyright infringement. Yet unlike the artists whose music was being pirated, those slandered on gossip sites have no huge corporations looking out for their interests. There’s no Brittan Heller & Heide Iravani Industry Association of America to sue these websites and push for legislation when someone gets called a whore.


If This Were Truly Representative of College ACB, Every Figure Would Have a Gun or Their Genitalia Out


Adapting to an Anonymous World

One of the things that makes the internet so susceptible to slander and defamatory remarks is anonymity. It’s obvious that people feel emboldened when they’re online to post things to a public forum that they’d never go around shouting in the dining hall. The end result is an outpouring of hateful and spiteful messages that don’t appear with the same frequency or regularity outside of cyberspace. This breeds a more malicious, more harmful culture.

However, with this heightened tendency to post derogatory and embellished things comes a heightened degree of skepticism on the part of readers. Just because I read about a girl’s lascivious nature online doesn’t mean I’m going to believe it, just as hearing your school’s big gossip, who has a tendency towards making things up, say something about someone doesn’t mean you’d believe them. You learn to take everything on the internet, but especially things posted by anonymous sources, with a Roman legion’s annual salary worth of salt.

Like This, But You Know, Symbolically

 

Attributing Quotes to Anonymous and Unintended Consequences

While posters may be emboldened by their online anonymity, they generally overlook a simple fact: they’re not actually anonymous. Steps can be taken to secure one’s identity when posting online; a combination of technologies such as proxy servers and browsers like Tor makes online posting significantly more anonymous. But the thing is, even ignoring the fact that these technologies usually won’t make users completely anonymous, most users don’t bother with these precautions anyway.

There’s something unsettling about the internet. Without directly interacting with another human being, without speaking a single syllable or putting a single word on a tangible piece of paper, from the comfort and solitude of one’s own home, we can still spread our opinions or rants to the entire audience of the internet. But people don’t take the time to think about how clicking submit on a website will immediately and permanently put their thoughts or slander out into the world.

Anonymous, Just Ignore the William K in the Top Right

Posters are emboldened to post things they wouldn’t say because posting online doesn’t seem like it could have the same ramifications as saying or writing something in the real world. However, as the litany of litigation bringing libel charges against “anonymous” users in the past few years has demonstrated, there can still be serious consequences to online postings. These lawsuits make sense; people are committing acts of defamation quite maliciously. If someone makes claims that you cheated on every test or that your accolades were undeserved, potential employers may question your character. This brings up an interesting distinction between real-world gossip and that which appears online. Whereas college gossip used to be confined to the campus, it is now out there for anyone to see. Someone may post something malicious as part of a petty fight, meant solely to be read by the poster’s classmates. But these posts are accessible to anyone: parents, school administrators, and, perhaps most worrisome (to Yale kids anyway), future employers. However, as I pointed out in the previous section, people generally don’t take things posted anonymously on the internet as seriously as they would claims from a credible source. In this way, it would seem that online libel poses a threat, but not to the same extent as defamation in the physical world.

However, the fact that people can be prosecuted for things they post “anonymously” online does raise worries about the extent to which the government can uncover information about a poster. Something posted online doesn’t necessarily reflect a person’s mindset, beliefs, or intentions. In the case of Heller and Iravani, AutoAdmit posts about raping one of the women probably didn’t indicate any actual intent to commit rape. These posts were disgusting, vile, and clearly unacceptable, but they shouldn’t be taken at face value. They are certainly threatening, but they don’t constitute a threat in the sense that the poster had the determination or intention of committing the act. Should they be punishable? Probably. Do they deserve the same scrutiny as other threats? Probably not.

It is in the previous example that we see a problem with the government being able to know exactly who posted everything. In Watts v. United States, the Court addressed hyperbolic threats made against President Johnson. In that case, the defendant jokingly made a remark insinuating he would kill the President, after which he looked down the barrel of an imaginary rifle. The comment was met with applause and laughter and didn’t constitute a serious plan to assassinate LBJ. It was within this context that the Court was able to decide that Watts didn’t actually intend to kill the President. The Court explicitly said, “taken in context, and regarding the expressly conditional nature of the statement and the reaction of the listeners, we do not see how it could be interpreted otherwise.” The Court ruled that hyperbolic threats against the President needed to be differentiated from legitimate threats. Online, it can be extremely difficult to understand the context of comments, and things can easily be taken more or less seriously than they should be. The reactions of an online audience will also differ drastically from the reactions of a physical audience. Whereas in the physical world a comment is made to a predetermined audience with a certain, somewhat known mentality, postings online are generally made to an unknown audience that can actually change and interact with the post after it has been made. Whereas a bunch of nutjobs won’t be able to bust into a student discussion after someone makes a joke about an assassination and then elaborate on a real plan, radical internet users could come across a joking post and then take it in an unintended direction. While federal investigations would hopefully be able to determine the nature of such comments, it’s an uncharted area. There are a lot more unknown factors at play on the internet.

Hey Guys. Sorry, You Weren't My Intended Audience.

Let’s Wrap It Up

Essentially, what all of these issues come back to is accountability. For whatever reason, we tend to think of the internet as a place separate from the physical world. It’s the same ridiculous mistake John Perry Barlow made in his A Declaration of the Independence of Cyberspace, and it affects us every day. You can claim that there’s no government to stop us or to censor us, but let’s face it: as long as you still live on this planet, there’s someone who can punish you for what you do. While many of the things that happen online may stay confined to cyberspace, there’s certainly no guarantee of such a thing. Because so much of the stuff that’s said on the internet is so inconsequential, some people might start to think there are no consequences online, but that’s obviously not the case. When you post something, it can be read, it can be discussed, and you, the physical you, not some avatar or username, may have to pay the consequences.

Legal Consciousness – by “Preben A”

One of the strange things about the Internet in 2011 is that it merges the wild-west, free expression culture idealized by John Perry Barlow with the professional world, where public image is important.

Moot, the founder of 4chan, has expressed the importance of anonymity and of a safe space to express “wrong” views without causing harm to society. In contrast, Facebook has increasingly attempted to merge other websites with Facebook in order to create an online identity for each user. Furthermore, employers often look at a job applicant’s online identity as part of a background check. These websites represent opposite ends of how an individual can interact with others on the Internet and how the individual will be held accountable for those actions. There are many websites that have less anonymity than 4chan yet do not link to an individual’s real name. While these Internet cultures seem to be opposed, their users are not separate. Many Facebook users also use 4chan, and there are many sites similar to these iconic ones. These worlds are not in intense conflict with one another, and users often change their behavior based on the website they are on. However, when these cultural norms are broken, and people deviate from the accepted behavior on a given website, the law is often invoked by an individual who feels that his or her rights have been violated.

Most of the anonymity/privacy issues everybody’s talking about boil down to a conflict between the culture of free expression and the professional world. The things we say online and the things people say about us online can both tarnish our reputations in meat-space. As such, this affects anybody who uses the Internet.

However, I think the notion that the Internet is suddenly becoming “srs business,” that real-world laws are increasingly enforceable on the Internet, is highly sensationalized. We are told “Lawsuits against Web Trolls are on the rise,” but what does this mean?

Each individual has a different relationship with the law. Most people (with the exception of legal professionals) rarely come into direct contact with the formal legal system, and when they do, it means something is deeply wrong. Yet individuals interact with the law on a daily basis, and those interactions are based on their conception of the law, not the actual written law. A person’s legal consciousness is based on their conception of what the law can do for them and when it is applicable. This varies immensely among individuals. When someone experiences hateful comments on the Internet, the law may be the last thing that comes to their mind. But there are some people who view the law differently: as a tool for control, a way to squeeze personal benefit out of any situation.

These are the people that sue YouTube commenters. Therefore, these widely publicized cases don’t necessarily reflect anything about the Internet itself changing. The cases do set a precedent for future cases but they do not mean that everybody on the Internet is going to start lawsuits. Many people do not have the financial means, time, desire or investment to sue. All that this means is that certain types of people have discovered new opportunities to exercise their power. This is especially true in the case of CyberSLAPPs.

Efforts to “civilize the Internet” through litigation seem impotent for another reason: anonymous comments online carry little weight. If a newspaper ran articles containing the kind of defamatory language blogs and forums habitually use, the potential damage to the victim’s reputation would be much higher. People trust newspapers more. As a result, the chaotic nature of the Internet is to an extent self-contained: if spewing violent and hateful speech is the norm in an Internet community, defamatory comments made in that community have little power to damage one’s real reputation.

Therefore, it seems that these cases have perhaps over-sensationalized an interesting phenomenon, one that does indicate a change occurring: the Internet is in the process of becoming more regulated. However, the extent to which regulation will succeed, with regard to privacy and anonymity as well as piracy and other issues, remains unclear.