The Coming Obsolescence of Your $200,000 Degree – by “Anthony T”

The tweet pictured above was posted earlier this year by Stanford professor and Google engineer Sebastian Thrun. When not busy heading up development of Google’s self-driving car, Thrun has spent the past few months doing something equally intriguing. Thrun, along with several of his colleagues at Stanford, has been part of an experimental program in e-learning that could one day bring Stanford-quality education to the masses, and may even give your $200,000 degree a run for its money.

A brief history of e-learning

In September 2002, after years of planning, MIT made 32 of its courses available for free online in a pilot of the OpenCourseWare project. Today, OpenCourseWare has grown to include 2,080 courses, 42 of which include full video lectures. In the years following its launch, other institutions, including Yale, were quick to follow OpenCourseWare’s lead, launching similar e-learning initiatives. Apple’s iTunesU now acts as a central discovery and distribution platform for over 350,000 higher education lectures from around the world. The effects of this deluge of higher education material can be divided into two categories: the potential it holds to improve traditional education, and the promise it holds for independent learners.

To get an idea of how the availability of open courseware could improve traditional education, consider the launch of Floating University earlier this year. Charged with sharing the expertise of great minds across institutional boundaries, the debut of Floating University was notable in that three schools, Yale, Harvard, and Bard, allowed students to enroll in the online course for credit, albeit with supplementary work at their home institutions. With this as a model, it is easy to imagine universities in the near future cross-licensing their courses, allowing new or improved courses to be introduced in video form at schools for a fraction of the cost of developing a course from scratch. Of course, Floating University’s courses are not exclusive to the three institutions named above; for $500 anyone with an Internet connection can purchase access to the video lectures. This raises an important question, however: beyond the joy of learning, what exactly are customers receiving for their money? Or more aptly, what are customers not getting?

The problem with independent learners

While the open courseware movement certainly holds potential for expanding and improving traditional education, a quick glance at its usage statistics reveals that its greatest contribution has been, and will continue to be, the encouragement of independent learning. As of 2010, 43 percent of OpenCourseWare’s users were independent learners. For Open Yale Courses that number was nearly 70 percent. But what exactly are these students receiving, and what should they be receiving? The success of these initiatives makes clear the immense unmet hunger for education in our country and around the world. These independent learners take it upon themselves to expand their education, but at the end of the day, though they may be intellectually enriched, have they really been provided with any tools to better their standing in life? Why should we be rewarded for our intellectual pursuits and not the users of Open Yale Courses when, statistically speaking, it is likely that some of them have mastered the material just as well as, if not better than, many of us have? For as hard as independent learners work, at the end of the day they are left with no proof of their growth. As a society, it is in our best interest to expand our limited view of what constitutes educational achievement. We need to be open to the development and acceptance of new signals and certifications of education. This is no easy task, however, and the problem is even greater for new educational players, such as Khan Academy, that lack ties to the world of traditional education. The educational experiments taking place today in Palo Alto, and across the world via the web, are providing a path towards breaking traditional education’s stranglehold on educational certification.

Palo Alto, Everywhere

Earlier this year, Stanford made an announcement that took the independent learning community by storm. As part of its Stanford Engineering Everywhere initiative, three computer science courses would be made available for free online, with a twist that made OpenCourseWare appear primitive. The project, led by Professor Sebastian Thrun and Google’s Director of Research, Peter Norvig (who literally wrote the book on artificial intelligence), aimed to “change the world by bringing education to places that can’t be reached today.” Inspired by the work of Khan Academy, the pair wanted to take e-learning beyond simple video lectures. In partnership with Know Labs (a start-up founded in part by Thrun), they developed video lectures peppered with interactive quizzes that are then worked through by the lecturers. In addition, the courses feature weekly homework assignments, a midterm, and a final, all of which are actually graded and given back to students. Registration immediately shot through the roof, with over 160,000 students registering for Thrun and Norvig’s “Introduction to Artificial Intelligence” course, and tens of thousands of additional students registering for database and machine learning classes run on the same model.

Stanford’s risky little experiment has already been declared a success by many, including Professor Hal Abelson of MIT, who helped to develop the original OpenCourseWare nearly a decade ago. Stanford has already announced the availability of 16 new courses for Spring 2012, including several that expand beyond computer science to fields such as medicine, engineering, and entrepreneurship. Still, there is one element of the experiment far more important than the large registration numbers and impressive technological innovations: students enrolled in the courses receive grades. Not Stanford credit, mind you, but a certificate of achievement showing how the student fared against actual Stanford students enrolled in the course in Palo Alto. This changes everything.

The road ahead

While Thrun’s tweet seems to imply that Stanford may soon package the courses and sell them for actual degrees, a development which would itself be revolutionary, the larger implications of Stanford’s experiment have already been set in motion. By issuing students certificates of achievement that clearly document the students’ relative performance, Stanford has pried open the iron grip that university degrees have held as the sole indicator of successful higher education. In doing so, they have not only paved the way for other universities to offer similar certifications, but for players not tied to traditional education to develop certifications of achievement that hold some level of legitimacy in the public eye. That’s not to say that come graduation, we will need to fear online certificates one-upping the ‘YALE’ at the top of our transcripts. Still, it’s not difficult to imagine a near future in which high evaluations on a suite of Stanford online courses and a handful of interesting side-projects could allow a degree-less individual to gain a technical interview at Google.

Peter Thiel claims that we’re in a higher education bubble, and that the disruption of education is in our best interest. I’m inclined to agree. The expansion of high-quality e-learning will bolster our economy. It will help to spur innovation that will keep us competitive on the world stage. It has the potential to finally level the playing field and to genuinely improve lives. E-learning is not going to significantly devalue our $200,000 educations anytime soon. But I look forward to the day that it does.

Generativity and Freedom – by “bashcraft”

Asimov’s short story, “Profession,” gives a unique take on the effectiveness of education systems and the intrinsic human right of freedom; specifically, the freedom of choice.



His story describes a world where, at age eight, children are given the ability to read, and ten years later they are assigned a set of tapes that will determine their profession—removing any freedom people have regarding their livelihood. In “Profession”, the government champions this system because it protects citizens from being overwhelmed by society or destroyed by themselves, and offers a smooth transition to adulthood. On his education day, the day the government gives a citizen the knowledge of his profession, a doctor tells George, “You could be devoured by a subject and if the physical make-up of your brain makes it more efficient for you to be something else, something else you will be.” In essence, the problems the protagonist, George Platen, has with this system offer commentary similar to that of the readings regarding appliancization and generative technologies: with no freedom of choice, will the ability to create, think, and advance as a society slowly whittle away into nothing? Will the “smarter” members of society continue to maintain a monopoly on the preeminent technologies, forcing the rest of the world into a stagnant purgatory?

To further the comparison between the government in “Profession” and the dangers of non-generative technologies produced by vanguard companies like Apple, I point to Tread Lightly When Embracing the Mac App Store by Nicholas Deleon. Deleon warns against Apple’s vendetta against generativity, the capacity of a system to allow room for future changes, on its App Store platform; Asimov warns against the lack of generativity in an educational system that leaves children with a defined skill set and not a higher intellectual capacity. Deleon asserts, “My concern, I suppose, is that an over-reliance on, and an over-veneration of, the Mac App Store could lead to an unnecessary, if not dangerous, homogenization of Mac software.” Before I continue identifying parallels, it is important to highlight that there are those who feel Mac-produced software is simply the best and safest out there and that there is nothing wrong with this homogenization. In fact, Steve Jobs celebrated it at the release of the iPhone: “We define everything that is on the phone… You don’t want your phone to be like a PC. The last thing you want is to have loaded three apps on your phone and then you go to make a call and it doesn’t work anymore.”

This statement struck me as eerily similar to the statement the doctor made to George on his education day— he claimed the government was protecting its citizens and making life easier by assigning professions. While there may be some validity to both Jobs’ and the doctor’s comments, it is by no means worth sacrificing the freedom generativity enables.


The Broader Implications of Appliancization – by “Benjamin H”

Appliancization obviously has implications for how we interact with the internet; assuming we use tethered devices, we necessarily become consumers without the ability to generate changes to the technologies we use, and it’s hard to imagine that some innovation won’t be lost without the tinkering that has been so typical of the internet thus far. Some think that anxiety is overblown or irrelevant, though. Is it really a problem if we lose the tinkering culture that was in part responsible for the growth of the internet, if it means greater security and, for many, more intuitive functionality? Perhaps the debate about generativity extends beyond technology to more philosophical issues like our economy, and texts like Asimov’s “Profession” are also bound up in even greater anxieties about our government that are no longer relevant. Tim Wu, a media historian, pushed back against Zittrain’s arguments about appliancization by highlighting the broader implications of the debate. Although Zittrain is concerned about technology, what really underlies the debate is the question of how we want to envision our economy:

The [internet and the media industry] are in the midst of an epic struggle between two competing economic systems that are almost as different as communism and capitalism. The first, old-school scale economics, is behind most of the media empires of the last century, from NBC to Paramount, and it still powers firms such as AT&T and News Corp. It is a model in which bigger is better, products are standardized, and integration is massive. It is the model that ruled American media–and indeed American industry–for most of the twentieth century. Its rival is a model premised on the economics of decentralized innovation and content creation that we associate with the Internet–loosely, the generativity that Zittrain is talking about.

At some level, then, appliancization is a debate about how our economy should function, and it seems like tech companies like Apple are trending more towards traditional media models. Many industries run on a model closer to that of the traditional media industry, and part of the anxiety surrounding appliancization may be simply a desire to preserve the internet’s distinctive culture. But perhaps there’s no reason to think these two systems are mutually exclusive. HTML5 and other technologies might allow people to make iPads and other tethered devices generative.

Asimov’s “Profession” is an interesting example of more general concerns about the loss of our freedom of choice and the ability to educate ourselves. If we can’t choose our paths and learn creatively from scratch, perhaps education will become commoditized like any other product. Asimov’s specific concerns aren’t relevant since we can’t, unfortunately, feed ourselves information tapes, but he seems to be concerned more generally with the loss of our freedom of choice and the notion that some higher authority can construct our choices for us. “Profession” was written in 1957, well into the Cold War, and part of his concern seems to stem from the then-rational fear of Soviet-type domination, as some reviewers have noted. As the doctor tells George, “Surely you know that being interested means nothing. You could be devoured by a subject and if the physical make-up of your brain makes it more efficient for you to be something else, something else you will be. You know that, don’t you?” This kind of concern for the “efficiency” of society destroying the individual is exactly the kind of Cold War stereotype of the Soviet Union that would have been repeated in 1957. Perhaps it’s just me, but the fact that the evil doctor has an Italian name (Antonelli) and the historian’s name, Ingescu, sounds like the famous Romanian dictator Antonescu indicates at least some kind of post-World War II anxiety about the loss of freedom under undemocratic regimes. Even though Zittrain’s argument against appliancization is obviously not a Cold War argument, concerns about loss of freedom still reflect a similar aversion to any authority interfering with personal choice and action, just like Asimov’s concern. I’m skeptical that these concerns are legitimate today. Even if we can’t maintain generativity with HTML5 and other languages, it would seem to me that security concerns are more of a threat to our freedom—from undue invasion of privacy, perhaps—than the threat to our ability to innovate.
Moreover, since a new generation of programmers will have to take over the app-making process at some point, the industry has to maintain some way for people to learn how to innovate, even if it’s only in their established fashion. Perhaps programming will simply become a more specialized field like medicine and will lose amateurs, but security concerns may currently outweigh the benefits of amateur innovation.

What’s all the fuss about? – by “Dan T”

I am often impatient with arguments that assume we live in a free-market world, because we don’t – people don’t always think or behave rationally, and competition isn’t always perfectly fair.  But in some cases, imagining that the way things are is just a reflection of market forces can be helpful in understanding a situation.  What if, rather than being marketing victims duped by Apple, Google, or any other software company promoting software-as-service, tethered appliances, or locked-down devices (which you might believe to be the case if you read the comments on tech blogs), people use these services because they want them?  Call me crazy, but maybe the shift toward non-generativity stems from the fact that non-generative devices suit people’s needs, or at least do so better than generative ones.

Apparently, the good guys are now the bad guys

I’ll suggest that for the vast majority of the population, that is true.  Grandma doesn’t need a generative PC.  Grandma needs a device that will help her get the job done, even though she has no idea how a computer works.  It needs to be intuitive and safe, so that even when she clicks the wrong button or performs the wrong procedure, nothing truly bad happens.  She doesn’t want to deal with software that is potentially ineffective or malicious.  The fact that both the number of programs she can run and the number of in-program adjustments she can make is greatly limited is probably of no concern to her.  She just wants secure software that gets the basic job done.

I’ll also suggest that where we are now is a fairly normal and expected place on the timeline of a new technology, rather than the splitting-train-tracks situation depicted on the front of Zittrain’s book.

Dramatic, much?

New technology often starts out generative and ends up non-generative.  The automobile, for instance, started out much more generative than it is today – around the turn of the twentieth century, there were hundreds of manufacturers tinkering around with their machines.  While it wasn’t quite as generative as the PC or the Internet, the engine was fairly accessible and the automobile owner could make his or her own changes as desired.  That’s still true today, but to a much lesser extent.  Auto technology has improved to such an extent that most car owners would rather take their car to the mechanic than look under the hood themselves, and manufacturers are increasingly finding ways to monopolize repairs and prevent owners from making their own adjustments.

So what happened?  Around the turn of the century, everybody and their mom was trying their hand at making cars.  This led to a lot of innovation, and some problems: breakdowns were frequent and car safety was a novel concept.  The industry started cleaning itself up around 1930 as people became concerned about reliability and safety.  The number of manufacturers greatly decreased and names like Ford and Chrysler came to the forefront.  These days, most people would not even consider buying a car from an amateur.  But there are still isolated tinkerers out there.

Am I crazy for thinking that what’s happening to software is essentially the same thing?  When a new type of technology shows up, it’s often basic enough that anyone can get involved.  Lots of people do, and some of what results is awesome and pushes the frontiers of the technology.  And some of it sucks. This vanguard is not particularly perturbed with the suckiness, but that all changes once the technology is spread to people who aren’t interested in the technology itself, but rather the benefits it might carry for them.  These people don’t want potential.  They want current value.  They are interested in what the technology can do, not what it could do. Obviously we’ve seen this with the PC and with the Internet.

To cater to this population, hackers started making software and hardware for non-hackers.  It was user-friendly and reliable, and not as generative.  In fact, the more user-friendly and reliable the software, the less generative it was.  This is essentially unavoidable.  Most software is only useful if its capabilities are spelled out and finite.  Make it any more generative than that, and it’s easy for the user to get overwhelmed.

Seriously, this is the population we're dealing with here.

But the reason that we shouldn’t get concerned about this shift is that it represents the expansion of software (or any technology) into a new segment of the population rather than a transformation of the technology itself.  The hackers still want generative software and hardware, and they’re still getting it (more on this in a second).  The people who want a reliable experience are getting that.  So everyone wins.  It’s not a zero-sum game, because the software industry is not monolithic.  What is happening now is essentially consumer discrimination – Apple and other giants are writing software for n00bs, and smaller developers (but also to some extent the giants) are writing software for hackers.

Of course, Zittrain gets this (he quotes critic David Post: “a grid of 400 million open PCs is not less generative than a grid of 400 million open PCs and 500 million locked-down TiVos”).  He has two main concerns: first, that through market dominance non-generative software will eventually swallow generative software; and second, that a generative software world is more advantageous than one that is primarily non-generative.

I don’t think the first consideration is as threatening as Zittrain makes it out to be.  The fact is that the software market is very, very large, and there are many, many niches.  Since hackers write software, and hackers are the kinds of people who will generally want generative and unlimited software, you have to construct a pretty convoluted hypothetical in order to get a world where literally all software comes through corporate giants.

The second concern should be taken very seriously though.  How important is creativity?  Is it more important than efficiency?  Is it more important than security?  Is it more important than giving people what they, on some level, want?  These are obviously big questions.

Come on, Ben... tell us how you REALLY feel.

Ultimately, I just can’t side with Zittrain here.  The primary argument for bolstering the generativity of software is that the advantages it provides in innovation outweigh the negatives.  That argument has a lot of merit, but only for some segments of the population.  How much innovation is Grandma realistically going to produce?  Innovations are always driven by a very small portion of a given population: the curious, creative types, the inventors.  Most people just consume and don’t innovate, and that’s okay.  The 400 million open PCs referred to by Post are probably just as generative as the most innovative 25 million-PC subset.  Most of those PCs are, generatively speaking, dead weight.

1% of the computers have 99% of the generativity... sorry Grandma.

So what’s important is that these different groups have different environments.  The inventors want and need a sandbox to play around in, and if given one, they’ll continue to push the frontiers, develop new products, and make cool new toys.  The consumers want and need the toys.  They don’t want the sandbox; they’re not in it for the mucking around.  If you give them one, they’ll get frustrated, they won’t do anything, and they might even get hurt.

Given that, we don’t want a purely generative software environment.  We want a segmented environment in which the software that is generative is incredibly generative – it is essentially unbounded in its capabilities (with all of the associated problems and complications) – and the software not in that category is safe and efficient first, and generative second.  In this environment, people can choose where they belong, depending on the task, and adjust as necessary.  In other words, we want an environment that is roughly the result of market forces.  Most purchasers will look for safe and non-generative software, so most software will be safe and non-generative.  And enough people are interested in generative software to keep that niche healthy and the innovations flowing.

In order to bring about and maintain this kind of environment, generative software needs to be absolutely sacred.  We’re putting all of our innovation eggs in one basket, so the generative environment needs to allow for as much creativity as possible.  Jailbreaking most devices should be legally protected.  Of course, the developers of those devices can terminate their warranties and withdraw support, but if users want to experiment with their hardware or software, they should have the right to.  You can purchase a tethered device if you want to, but you ought to be able to “de-tether” it at any time (manufacturers can determine whatever non-legal consequences that will entail from their end).

Future law will require humane treatment of anthropomorphic electronics.

While these changes would definitely help, I think we’re doing just fine now, all things considered.  Yes, software, the PC, and the Internet are all becoming less generative for most of the population, but until they become less generative for everyone, I won’t be sweating bullets, and on the other hand, the security that comes with non-generativity may be just what Grandma is looking for.

Sandboxing Generativity – by “Misbah U”

Earlier this month, Apple sent an email out to its developers delaying the implementation of sandboxing to Mac App Store submissions, stating:

The vast majority of Mac users have been free from malware and we’re working on technologies to help keep it that way. As of March 1, 2012 all apps submitted to the Mac App Store must implement sandboxing. Sandboxing your app is a great way to protect systems and users by limiting the resources apps can access and making it more difficult for malicious software to compromise users’ systems.

Simply put, sandboxing serves as a security enhancement. It’s a container in which an application is allowed to function–unable to perform any tasks or access any resources that would mean transcending the container’s boundaries. Essentially, one application is prevented from affecting another in any malicious way (i.e., it’s prevented from potentially/theoretically using UNIX commands to delete files on your hard drive without your knowledge, or attempting to extract passwords and share them, etc.). I think, conceptually, this sounds fantastic and a great deal for the end user, but there are other consequences that should also be taken into account when it comes to sandboxing and its implementation. Often, applications need a certain level of outside access in order to perform whatever they have advertised to do. For example, a photo editing app wouldn’t be of much use if it wasn’t even able to access your iPhoto library. To address this, Apple allows for “entitlements” that app developers can request for the application they submit for Mac App Store approval. A quick glance shows that the complete list is fairly straightforward (there are a couple of temporary exemptions)…although if you happen to need access to hardware through something other than USB (e.g. Thunderbolt, FireWire, or Bluetooth), you’ll be hard-pressed to find a way to do so.
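The entitlement model described above boils down to a deny-by-default policy check: an operation succeeds only if the app declared a matching entitlement when it was submitted. A minimal conceptual sketch in Python (the entitlement names here are invented for illustration and are not Apple’s actual identifiers, nor is this Apple’s actual mechanism):

```python
# Toy model of entitlement-based sandboxing: deny by default,
# allow only the capabilities the app declared up front.

class Sandbox:
    def __init__(self, entitlements):
        # The fixed set of capabilities granted at submission time.
        self.entitlements = set(entitlements)

    def request(self, resource):
        # Any access not covered by a declared entitlement is refused,
        # regardless of what the application tries to do at runtime.
        return resource in self.entitlements

# A photo editor that declared read access to the picture library:
photo_app = Sandbox({"files.pictures.read", "network.client"})

print(photo_app.request("files.pictures.read"))  # allowed: declared entitlement
print(photo_app.request("files.home.write"))     # denied: outside the container
```

The point of the model is that the decision is made against a static, reviewable list rather than against the application’s runtime behavior–which is also why an app granted overly broad entitlements is little safer than an unsandboxed one.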

The mobile world has generally always had sandboxing. Apple’s move to apply it to the Mac world seems to imply that engineers thus far have found it such a great idea that they wish to see Mac users benefit from it as well. Yes, as I said earlier, at the outset sandboxing presents a method for furthering security–unfortunately, as we’ve seen in the jailbreaking community time and time again, code isn’t perfect, and it’s only a matter of time before someone finds a loophole and continues the constant game of cat and mouse. Just to think of a few areas where sandboxing may fall through:

  1. Apple seems to be relying on its ability to implement the entitlement-limited code perfectly. If Apple could write perfect code, then it would, I would think, have been able to immediately fix the PDF & TIFF submodules so that they were exploit- and bug-free. Unfortunately, code is imperfect.
  2. For apps not sold in the Mac App Store, there’s nothing requiring them to use entitlements. And even if there were such a requirement, malware could just be distributed in applications whose entitlements included basically everything the system could do.
  3. This brings us to the following: Apple has mentioned that it will look closely at apps requiring a lot of entitlements. And again, I can see how that might make sense, as it might look suspicious if an app requests sixteen entitlements…but at the same time, looking closely at such applications can in some ways be seen as an admission that entitlements do not work, since it leads to what many may consider code auditing.

To further emphasize the idea that sandboxing is far from foolproof: earlier this year, security researcher Charlie Miller submitted an application to the iOS App Store, which was audited and approved. The application allowed Apple’s iOS software to run arbitrary code and therefore download new, unapproved commands onto the device and execute them at will. For example, he demoed the phone being made to vibrate or produce sounds, steal user photos, read contacts, etc. In response to this news, Apple revoked Miller’s developer account.

At this point, I think the real question comes down to whether Apple’s latest move has to do primarily with security or with Apple’s intent to exert further control over what’s installed on users’ desktops. To many, sandboxing is seen as the beginning of locked-down desktop machines similar to iPhones and iPads, where any development not sanctioned by Apple is effectively diminished.

After Apple sent out the updated timeline for sandbox implementation on applications, Jonathan Zittrain, the author of The Future of the Internet and How to Stop It, tweeted:

Zittrain: Apple one step closer to locking down PCs as predicted in The Future of the Internet

What Zittrain is referring to here are the predictions found in The Future of the Internet, where he begins by discussing the concept of generativity–or the openness of a system to new and unplanned changes from its users. Today we have an open internet and open computers, where you can run any programs and protocols you want and connect to anyone you want. This has been tremendous in establishing what he describes as a generative system. It is important to note, however, that what may seem a strength can also serve as a weakness for the very same system. Computers can get viruses and crash. They can start to slow down. Thus, the very same force that attracted so many users in the first place can now lead users to go for the safer options available, systems they can be assured will work reliably. Hence Zittrain’s tweet.

One can argue that Apple seems to be closing off the Mac App Store and forcing users to choose between the “open” and “closed” worlds that Zittrain mentions. Apple justifies sandboxing by appealing to users wanting their Macs to run more smoothly. From Apple’s perspective, any users who want to run unsandboxed applications found outside of the Mac App Store can still do so. Thus, Macs can function as computers that support safe and unsafe modes–one focusing on increased security, the other on increased generativity. Because of this, there have been mixed reviews when it comes to Apple’s choice to implement sandboxing: some support Apple 100%, while others fear the dumbing down of the Mac App Store.

Furthermore, if you look at this situation of sandboxed applications in the Mac App Store vs. unsandboxed applications found outside of the Store from the developer’s perspective, I think it’s highly unlikely that, over time, those developers who are at all concerned with market share will choose to maintain two different versions of an app, with one including extra features that the sandboxed version could not have. This brings us back to the concern that sandboxing may be dumbing down the set of features available even to those who choose not to purchase through Apple. Hence, at this point, there seems to be a lot of uncertainty surrounding what a sandboxed future means for both the distribution of applications and the applications themselves.

Overall, I think Apple’s decision to postpone sandboxing until March is reason to be somewhat hopeful that developers may play a greater role between now and then in its final implementation; Apple seems to recognize that the concept isn’t exactly ready yet. At the end of the day, however sandboxing is executed, I don’t think there is any doubt among developers or users that this is just the beginning of Apple’s overall plan to “iOS-ify” OS X.

Accountability and the Internet – by “William K”

A Slanderous Thing Happened on the Way to the Forum

As we all know, Section 230(c) of the Communications Decency Act of 1996 made it so that websites couldn’t be sued for hosting user comments and submissions the way they could be for hosting their own original material. This made sense at the time for a variety of reasons. A website couldn’t be expected to monitor and filter every incoming post. Additionally, holding a website accountable for material posted by a third party because the website made some effort to filter out objectionable material would discourage websites from making any effort to filter at all. However, what this legislation and the thinking around it failed to predict was the emergence of websites that exist solely to spread gossip and other malicious postings. While there are sites and forums that host a combination of useful and derogatory messages, since the original legislation there have emerged sites that prosper solely by providing an unfettered forum for such messages. Sites like AutoAdmit, CollegeACB, and others are able to thrive simply because they promote gossip and unsubstantiated rumors.

In Stratton Oakmont, Inc. v. Prodigy Services Co., the court ruled that Prodigy was liable for postings made on its website because it acted in an editorial role by attempting to remove some messages. The controversy caused by this decision led to the passage of Section 230(c), granting the provider of an internet service immunity if the information was “provided by another information content provider.” This seemed logical because companies like Prodigy didn’t build their reputations on hosting defamatory information or postings. In contrast, sites like AutoAdmit, CollegeACB, and JuicyCampus essentially advertised themselves as places to post and read defamation. They take Section 230(c) as carte blanche to provide the atmosphere for harmful behavior without providing any of it themselves, thus rendering themselves immune.


Reputable Journalism

To me, these sites have a strong parallel to Napster, Kazaa, and the like: their entire appeal is derived from the fact that they let users submit and share content that is essentially illegal. Had either Napster or AutoAdmit committed the copyright infringement or written the slander itself instead of merely providing an avenue to do so, it would have been obvious that it should be found legally responsible. However, because they’re simply the conduit by which illegal material is disseminated, a much grayer area emerges. Much like on Kazaa, where the vast majority of files were found to be copyrighted and thus illegal to share, gossip sites rely on slanderous rumor and gossip to be successful. They advertise themselves as places where illegal activity is encouraged, yet gossip sites still hold immunity because of the provisions of Section 230(c).

An interesting parallel between the two types of sites can be found in the way they evolved. After Napster was shut down, Kazaa sought to fill the void. Kazaa bought ads so that when someone searched for “Napster Replacement” or similar terms, they were brought to the Kazaa download site. In this case, these actions were considered “Inducement to Copyright Infringement.” Somewhat similarly, after JuicyCampus closed, the founders of College ACB made a deal with the founders of JuicyCampus to redirect traffic from Juicy to College. Why is this not considered “Inducement to Defamation” or “Inducement to Libel?” Peter Frank, CollegeACB’s founder, was clearly trying to mimic the success of Matt Ivester, JuicyCampus’s founder.

They Even Pose Kinda Similarly

Obviously, there is a distinction between gossip sites and P2P sharing sites. P2P sites are much easier to prosecute because either something is copyrighted or it’s not; there isn’t the same gray area about whether the law has been broken as there is with the harder-to-define crime of libel. However, the way that gossip sites solicit, encourage, and depend on defamation draws inarguable parallels with P2P sites and copyright infringement. Unlike the artists whose music was being pirated, though, those slandered on gossip sites have no huge corporations looking out for their interests. There’s no Brittan Heller & Heide Iravani Industry Association of America to sue these websites and push for legislation when someone gets called a whore.


If This Were Truly Representative of College ACB, Every Figure Would Have a Gun or Their Genitalia Out


 Adapting to an Anonymous World

One of the things that makes the internet so susceptible to slander and defamatory remarks is anonymity. It’s obvious that people feel emboldened online to post things to a public forum that they’d never go around shouting in the dining hall. The end result is an outpouring of hateful and spiteful messages that don’t appear with the same frequency or regularity outside of cyberspace. This breeds a more malicious culture.

However, with this heightened tendency to post derogatory and embellished things comes a heightened degree of skepticism on the part of readers. Just because I read about a girl’s lascivious nature online doesn’t mean I’m going to believe it, just as hearing your school’s big gossip, known for making things up, say something about someone doesn’t mean you’d believe them. You learn to take everything on the internet, especially anything posted anonymously, with a Roman legion’s annual salary worth of salt.

Like This, But You Know, Symbolically


Attributing Quotes to Anonymous and Unintended Consequences

While posters may be emboldened by their online anonymity, they generally overlook a simple fact: they’re not actually anonymous. Steps can be taken to secure one’s identity when posting online; a combination of proxy servers and tools like Tor can make online posting significantly more anonymous. But even ignoring the fact that these technologies usually won’t make users completely anonymous, most users don’t bother with such precautions anyway.

There’s something unsettling about the internet. Without directly interacting with another human being, without speaking a single syllable or putting a single word on a tangible piece of paper, from the comfort and solitude of one’s own home, we can still spread our opinions or rants to the entire audience of the internet. But people don’t take the time to think about how clicking submit on a website will immediately and permanently put their thoughts or slander out into the world.

Anonymous, Just Ignore the William K in the Top Right

Posters are emboldened to post things they wouldn’t say because posting online doesn’t seem to carry the same ramifications as saying or writing something in the real world. However, as the litany of libel suits brought against “anonymous” users in the past few years has demonstrated, there can still be serious consequences to online postings. These lawsuits make sense: people are committing acts of defamation quite maliciously. If someone claims that you cheated on every test or that your accolades were undeserved, potential employers may question your character. This brings up an interesting distinction between real-world gossip and gossip that appears online. Whereas campus gossip used to be confined to the campus, it is now out there for anyone to see. Someone may post something malicious as part of a petty fight, meant solely to be read by the poster’s classmates. But these posts are accessible to anyone: parents, school administrators, and, perhaps most worrisome (to Yale kids anyway), future employers. However, as I pointed out in the previous section, people generally don’t take things posted anonymously on the internet as seriously as they would claims from a credible source. In this way, online libel poses a threat, but not to the same extent as defamation in the physical world.

However, the fact that people can be prosecuted for things they post “anonymously” online does raise worries about the extent to which the government can uncover information about a poster. Something posted online doesn’t necessarily reflect a person’s mindset, beliefs, or intentions. In the case of Heller and Iravani, the AutoAdmit posts about raping one of the women probably didn’t indicate any actual intent to commit rape. These posts were disgusting, vile, and clearly unacceptable, but they shouldn’t be taken at face value. They are certainly threatening, but they don’t constitute a threat in the sense that the poster had the determination or intention of committing the act. Should they be punishable? Probably. Do they deserve the same scrutiny as other threats? Probably not.

It is in the previous example that we see a problem with the government being able to know exactly who posted what. In Watts v. United States, the Court addressed hyperbolic threats made against President Johnson. The defendant jokingly made a remark insinuating he would kill the President, after which he looked down the barrel of an imaginary rifle. The comment was met with applause and laughter and didn’t constitute a serious plan to assassinate LBJ. It was within this context that the Court was able to decide that Watts didn’t actually intend to kill the President, saying explicitly: “taken in context, and regarding the expressly conditional nature of the statement and the reaction of the listeners, we do not see how it could be interpreted otherwise.” The Court ruled that hyperbolic threats against the President needed to be differentiated from legitimate ones. Online, it can be extremely difficult to understand the context of comments, and things can easily be taken more or less seriously than they should be. The reactions of an online audience will also differ drastically from those of a physical audience. Unlike in the physical world, where a comment is made to a predetermined audience with a somewhat known mentality, postings online are generally made to an unknown audience that can actually change and interact with the post after it has been made. Whereas a bunch of nutjobs can’t bust into a student discussion after someone jokes about an assassination and elaborate on a real plan, radical internet users could come across a joking post and take it in an unintended direction. While federal investigations would hopefully be able to determine the nature of such comments, it’s uncharted territory. There are a lot more unknown factors at play on the internet.

Hey Guys. Sorry, You Weren't My Intended Audience.

Let’s Wrap It Up

Essentially, what all of these issues come back to is accountability. For whatever reason, we tend to think of the internet as a place separate from the physical world. It’s the same ridiculous mistake John Perry Barlow made in his A Declaration of the Independence of Cyberspace, and it affects us every day. You can claim that there’s no government to stop us or censor us, but let’s face it: as long as you still live on this planet, there’s someone who can punish you for what you do. While many of the things that happen online may stay confined to cyberspace, there’s certainly no guarantee of that. Because so much of what’s said on the internet is so inconsequential, some people might start to think there are no consequences online, but that’s obviously not the case. When you post something, it can be read, it can be discussed, and you, the physical you, not some avatar or username, may have to pay the consequences.

Legal Consciousness – by “Preben A”

One of the strange things about the Internet in 2011 is that it merges the wild-west, free expression culture idealized by John Perry Barlow with the professional world, where public image is important.

Moot, the founder of 4chan, has stressed the importance of anonymity and of a safe space to express “wrong” views without causing harm to society. In contrast, Facebook has attempted to increasingly merge other websites with Facebook in order to create an online identity for each user. Furthermore, employers often look at a job applicant’s online identity as part of a background check. These websites represent opposite ends of how an individual can interact with others on the Internet and how the individual will be held accountable for those actions. There are many websites that have less anonymity than 4chan yet do not link to an individual’s real name. While these Internet cultures seem to be opposed, their users are not separate. Many Facebook users also use 4chan, and there are many sites similar to these two iconic ones. These worlds are not in intense conflict with one another, and users often change their behavior based on the website they are on. However, when these cultural norms are broken, and people deviate from the accepted behavior on a given website, the law is often invoked by an individual who feels that his or her rights have been violated.

Most of the anonymity/privacy issues everybody’s talking about boil down to a conflict between the culture of free expression and the professional world. The things we say online and the things people say about us online can both tarnish our reputations in meat-space. As such, this affects anybody who uses the Internet.

However, I think the notion that the Internet is suddenly becoming “srs business,” that real-world laws are increasingly enforceable on the Internet, is highly sensationalized. We are told “Lawsuits against Web Trolls are on the rise,” but what does this mean?

Each individual has a different relationship with the law. Most people (with the exception of legal professionals) rarely come in direct contact with the law, and when they do, it usually means something is deeply wrong. Individuals interact with the law on a daily basis, but their interactions are based on their conception of the law, not the actual written law. A person’s legal consciousness is based on their conception of what the law can do for them and when it is applicable, and this varies immensely among individuals. When someone experiences hateful comments on the Internet, the law may be the last thing that comes to mind. But some people view the law differently: as a tool for control, a way to squeeze personal benefit out of any situation.

These are the people that sue YouTube commenters. Therefore, these widely publicized cases don’t necessarily reflect anything about the Internet itself changing. The cases do set a precedent for future cases but they do not mean that everybody on the Internet is going to start lawsuits. Many people do not have the financial means, time, desire or investment to sue. All that this means is that certain types of people have discovered new opportunities to exercise their power. This is especially true in the case of CyberSLAPPs.

Efforts to “civilize the Internet” through litigation seem impotent for another reason: anonymous comments online carry little weight. If a newspaper ran articles containing the kind of defamatory language blogs and forums habitually use, the potential damage to the victim’s reputation would be much higher, because people trust newspapers more. As a result, the chaotic nature of the Internet is to an extent self-contained: if spewing violent and hateful speech is the norm in an Internet community, defamatory comments made in that community have little power to damage one’s real reputation.

Therefore, it seems that these cases have over-sensationalized an interesting phenomenon, one that does indicate a real change: the Internet is in the process of becoming more regulated. However, the extent to which regulation will succeed, with regard to privacy and anonymity as well as issues like piracy, is unclear.

Why tweet? – by “Yael Z”

Celebrity Tweeting: Twitter’s Beginnings

“It’s like Facebook status updates…and that’s it…” This is the way my friends described Twitter to me when I was a senior in high school and Twitter was just coming out. Appealing, no? We didn’t get it. We thought Twitter would be another fleeting Internet trend. After all, why use Twitter when it does just a fraction of what Facebook does? Our hypothesis was quickly disproven; it became clear that Twitter was on the Internet to stay. It started with celebrity tweeting. Tweeting became a way for celebrities to interact with their fans as actual human beings. Fans could receive information about celebrities’ day-to-day lives from the celebrities themselves, rather than from the tabloids. For example, followers of Demi Moore found out on July 14th, 2010:

On October 8th, Seacrest fans found out:

It’s a completely guilt-free stalking tool! Fans hear about celebrity actions instantly and directly from the source. Celebrities are the new Facebook friend, the new cyber buddy. Twitter created a connection between fan and celebrity the way no social networking tool had done in the past.

Thus, the rise of Twitter began. Twitter quickly gained user after user: at first just to follow their celebrity crushes, but soon to tweet themselves. The following chart shows the growth rate of Twitter posts between April and May 2007, just a small sample of the exponential growth of Twitter posters.

(Java et al.)

The Mass Appeal of Twitter: Why?

My question is…why? I understood the celebrity appeal of Twitter (perhaps because of my personal guilty-pleasure love of celebrity gossip), but what is it that makes people not only want to tweet their every movement and every thought, but want to read other people’s movements and thoughts? As my grandmother would say (and this must be read in a Minnesotan accent), “it’s just plain creepy the way you kids stalk each other on the Interweb.” She has a point. Why are we entertained by the location of a peripheral friend, or the musings of a second or third cousin?

As it turns out, I’m not the first person to ask this question. Many anthropologists and technology researchers have found Twitter an interesting phenomenon and have conducted empirical studies to try and figure out just what it is that makes Twitter an engaging platform for social interaction. I’ve compiled this information and the following is a list of what I found to be the top 5 reasons people use Twitter:


The number one reason that people use Twitter is to find communities based on personal interests. Twitter’s innovative hashtag (#) system allows people to tweet about certain topics, and other people interested in those topics can easily find those tweets and often retweet or comment on them. Thus, Twitter communities are born. The following graphic shows one section of one particular Twitter community, the gaming community.

 (Java et al)

A community such as this one not only shares its feelings on gaming and recent developments within the gaming world; its members also share personal feelings and experiences from their everyday lives. Thus, Twitter extends beyond the boundaries of something like a gaming blog. Users find each other because of a shared love of gaming, but eventually begin to share other news and personal experiences with each other, creating a virtual community.

Studies have also found that within these communities natural leaders emerge as the most reliable and most up-to-date communicators. For example, Scobleizer, a tech-geek blogger who tweets about the latest technology news, has gathered many followers within the technology Twitter communities. Because he has so many followers, he also connects many different sub-communities within the larger technology Twittersphere, and Twitter users from similar communities can then find one another through a shared Twitter leader. The following chart illustrates the connections that Scobleizer makes between Twitter technology communities.

 (Java et al.)

Twitter communities work like communities in real life – a connection through one person opens up a whole new world of people, ideas, and entertainment.


This topic is slightly controversial, as people have been wary of calling Twitter a news source. However, a recent study showed that over 81% of all tweets refer to some current event. This doesn’t mean that people necessarily get their news from Twitter, but rather that it is a great place to discuss current events with others. The most common retweets (tweets of someone else’s tweet on one’s own Twitter page) all come from news sources. The following chart shows the most common topics on Twitter in 2009, based on how often they were retweeted:

(Zhao et al.)

All but two of these topics are offline news. This statistic shouldn’t be surprising. The obvious reason for the prevalence of news topics on Twitter is that these topics are universal. Americans can follow Iranians’ thoughts on the Iranian election. Fans of Kanye West can gather by the millions to talk about his latest scandal.

Of course, news junkies could do this before Twitter via blogging, but one thing sets Twitter apart from these other informal news sources: BREVITY. The 140-character limit keeps up with our quickly diminishing attention spans. People don’t need to read an article about Osama bin Laden’s death when they can find what they need in a quick 15-word statement. In addition, Twitter is updated in real time. People read about the Northeast earthquake this summer before they felt it. The raid on Osama bin Laden’s compound was tweeted before the news hit the New York Times. Twitter is the perfect way to share news with a generation that can barely sit through Good Morning America.


This type of communication is what I’ll call “water cooler conversation.” Beyond finding communities within Twitter to be a part of, people also use Twitter to maintain a constant flow of communication within real life communities.

The best example of this type of community is the office community. As companies become more and more global, employees find themselves farther and farther from the people they work with. This expansion of the office changes the way people communicate with their colleagues. Twenty years ago, office workers gathered around the water cooler to discuss current events, last night’s episode of Miami Vice, likes, dislikes, and so on. Today, employees working together on one project may be split by the Atlantic Ocean rather than by a few cubicles.

This distance creates a problem in the work environment: team members don’t know each other as human beings. As “Donna,” a participant in a recent study on the effects of Twitter in the workplace, put it: “I think it makes the person more human, than just professional carbon unit.” Many offices have started to use Twitter as a way to remedy that situation. An employee in China may tweet about his favorite foods or sports teams, and a fellow employee in New York City gets to know him better. Thus, Twitter becomes a trans-Atlantic water cooler, allowing people from a specific real-life community to come together via the Internet to get to know one another as human beings (as ironic as that may be).


The fourth reason that people use Twitter is to stay in touch with friends and relatives. These (again) brief snippets of other people’s goings-on are a very easy way for friends and acquaintances alike to know what’s going on in someone else’s life. Facebook created an online environment in which it is not only okay, but encouraged, to broadcast what you did last night or over the weekend. Twitter took this one step further: don’t tell them what you did last night, tell them what you’re doing right now. And people do just that. The following chart shows what people tweet about based on the day of the week.

(Java et al.)

Tweets about school decrease significantly from Friday to Saturday, showing that people are very unlikely to tweet about school over the weekend. While this chart doesn’t prove anything, it suggests that people are more likely to tweet about what is happening presently in their lives, rather than yesterday or tomorrow. This way, people don’t have to talk every day to find out what’s going on in their friends’ lives. Again, it’s a very easy, non-time-consuming way to achieve something that would have taken far greater investment ten to fifteen years ago.


The final reason that people use Twitter is relatively new. Twitter is already a water cooler, a telephone, and a community center; why not make it a bulletin board as well? Two summers ago, Iranians gathered to protest the June 12 election, and their voice was heard not via posters or even blogs, but via Twitter. News articles called Twitter “the medium of the moment,” and it makes perfect sense. Twitter is free, fast-spreading, and brief, and above all it was invented to connect people to one another. The charts above depict just how easy it was for tech nerds or gamers to get together; why would it be any different for protesters of the Iranian government? Hashtags and retweets make it easy for anyone starting a protest movement to rally people together.

It should come as no surprise that the most recent mass protest movement, the Occupy movement in the United States, used Twitter to gather attention and supporters. Anywhere you see comments on the movement (even off the Internet) you will see “#occupy.” Twitter has become a symbol of the movement itself. As my father put it, “If I wanted to protest something when I was your age, I would have had to go door-to-door, put up fliers on every billboard, and phone bank, and that would have been just to gather people in my neighborhood. Now you put a little pound sign next to your cause and millions of people are outside the next day.” Twitter has become the easiest way to quickly and effectively spread information across class, country, and ocean. It is a global billboard urging people to tack up their problems every second of every day.


Charts obtained from:

How and Why People Twitter: The Role that Micro-blogging Plays in Informal Communication at Work by Dejin Zhao and Mary Beth Rosson

Why We Twitter: Understanding Microblogging Usage and Communities by Akshay Java, Tim Finin, Xiaodan Song, and Belle Tsang

We’re still hip! How social media is changing the life cycle of the news. – by “Charles G”

The world of journalism is changing fast. Over the last ten years, reporters have been thrown into a rather intimidating whirlpool of social media and networking developments, faced with the prospect of keeping up with the times or risking their careers. Lightning-fast internet connections, combined with the far-reaching influence of internationally integrated social networks, have made information gathering and processing more comprehensive, and trickier, than ever. Social media has indeed changed the very essence of the news: what is reported and how, what professional journalists do differently, and how all of this affects public perceptions of and preferences concerning the news.

It's funny because it's true

To fully understand the impact social media is having on the life cycle of news, we must look at what news gets reported and gains traction, how reporters act, and how public sentiment influences the general direction of content pursued by reporters. Along with these structural implications, it would also be helpful to examine how the journalism industry’s foray into the field of social media doubles back to affect the industry and its actors.


What Gets Reported and How 

First and foremost, social media immediately influences what we see and when we see it. Information easily outpaces even the most proficient journalist, and thus the industry has been forced to adapt to a system that would otherwise render professional journalists obsolete. The Arab Spring was only the latest in a line of social movements exemplifying a simple idea: news can travel faster than anything else, even the newsgatherers. This significant increase in the speed of information exchange has made social media an invaluable, and perhaps the most valuable, form of news reporting.

The image above illustrates the point perfectly: Twitter waves can outrun seismic waves, and all of a sudden the citizens of Virginia get to learn about an earthquake before they experience it. The reason is social media’s ability to harness the collective eye of an entire Internet population and direct its efforts at exposing the truth, at least to some degree. Although, as seen above, social media has long played a role in the organization of social movements, the mainstream media’s utilization of social media in this respect is new. Instead of fighting prevalent trends in citizen reporting, the mainstream media has embraced them, as seen with the use of Twitter to relay real-time information about the Arab Spring among followers and to the outside world.

Suddenly, an average Joe could serve as a CNN iReporter by sending in video of breaking news events to be broadcast on the 24-hour news network. But what did this mean for the substance of the content the media’s consumers would receive? This topic has generated much debate within the journalistic community, with supporters of citizen journalism arguing that a trend that allows readers more choice and transparency justifies the possible decline in journalistic quality when professionals are no longer at the front of a breaking story. There is little doubt that the people of Egypt would not have seen the sweeping changes they have recently encountered without the presence of large, engaged social media networks whose users were willing to convey real-time information to the outside world. The fact that this information stream was partially cut off during the Arab Spring (below), presumably by some entity that felt threatened, further evidences the extreme influence social media can carry in the context of political movements.

Internet Usage- and non usage- in Egypt

The importance of social media in the life cycle of the news is apparent from the very beginning: it not only redefines what content can possibly be available in real time, but also allows citizen reporters to refocus our societal priorities by influencing what news is reported at all, and what gains traction in the traditional media. This cycle continues once the content reaches a professional, as reporters have had to adapt to the changing information landscape as well.

How this affects reporters

Journalists have survived only with, and not in spite of, social media. The vast improvement in information relay speed that comes with a large, socially engaged online network efficiently utilizes the disparate placement of its discrete members around the globe, acting as a virtual “global news net” that catches any story, good or bad. Herein lies the main argument against the so-called “citizen journalist” trend (i.e., a severe decline in journalistic quality). This is where professional journalists find their still-existent, but possibly eroding, niche, one in which social media will surely play an increasingly important role in the near future, according to the BBC video below.


BBC Social Media Video


Obviously, this emergence of a new aspect of social media will catalyze changes within the journalism industry. First, there are the passive changes, those that are a natural product of social media's arrival on the journalism scene (as opposed to active efforts by media companies to integrate social media campaigns, which I'll talk about later). The primary changes relate to social media's speed. If breaking news delivered so fast as to sacrifice quality of reporting is America's drug, then social media is the enabler. To illustrate the "sacrifice of quality" bit, take the example of Amanda Knox, an American woman convicted of a murder allegedly committed during her study abroad in Italy. The Daily Mail waited anxiously to report the result of her murder appeal, and ran a headline proclaiming Knox's guilty verdict just minutes after the announcement. There was just one problem: Knox had been found innocent. The professional reporters got it wrong. In a rush to publish first, intensified by a need to publish before the collective body of internet users could find the information through social networks, professional journalists had royally mucked up a very basic reporting job.



In past journalistic generations, this phenomenon went by the phrase "never wrong for long," referencing the 24-hour news cycle's relative tolerance for inaccuracy, given that corrections to prior incorrect statements could be made at any time. The risk of committing errors was reduced, and thus a greater premium was placed on speed in reporting. Newsrooms have always moved fast, but social media is forcing them to move even faster, just to keep up with the flow of information over social networks. This increase in speed leaves little time for fact checking, which often falls by the wayside completely when the relative penalty for a mistake is so small.


Or is the penalty so small? The flip side of the internet-age coin, with regard to journalism, is that mistakes, though easily and quickly correctable, are also preserved in web archives forever. Worse yet, the ferociously quick spread of information over social media can take on the personality of a wildfire, with sometimes devastating consequences. Consider the recent hacking of a Fox News Twitter account that led to false reports of President Obama's assassination. The reports were eventually revealed to be the product of vandalism and removed, but not before panic spread through the Twitter-verse. This type of hyper-speed response, of which only internet-based social networks are capable, reflects simultaneously the greatest asset and the greatest detriment of social media from a journalistic perspective.


Next, one must examine the active changes in the behavior of news organizations in response to developments in social media. Journalists, in their attempt to adapt to the ever-changing landscape of information technology, have in some cases used social media to actively bolster their own reporting. Social media has allowed journalism to become a dialogue between reporter and reader, and the influence runs in both directions. When asked, reporters of all types indicated that social media had changed the way they interact with readers, further evidence of its growing importance within the journalism industry.



Since audience participation has now become a staple of online journalism, this relationship inevitably affects not only the reporting priorities of the journalist, but also the content covered. Reporters can now put out a call for information relating to a story and build breaking news coverage around a targeted audience that has proven its investment in the journalism by actively engaging with it online. Reporters can use social media to direct their coverage and inform their stories, especially on platforms as extensive as Twitter, which allows virtually unlimited access to breaking news events through a vast, global user network.

All you can eat?

Some networks and organizations have gone so far as to place social media campaigns at the center of their operating strategies, with Time magazine focusing squarely on amassing Twitter-based followings for its various top magazines. The synthesis between traditional media's use of new media and the participatory culture of new media lies in programs like CNN's iReport, which allows regular viewers to send the network breaking news stories with accompanying pictures and videos. This form of social media acts not only as a kind of new-wave marketing for CNN, but also as a means of news aggregation for use during the 24-hour news cycle. News gathering has been delegated to the viewers, because their speed, as harnessed by a network, cannot be matched. The BBC has described similar goals for its social media agenda in the video below.

BBC Media Kits

The Atlantic has perhaps embraced audience participation in the creation of media at the most fundamental level, opening its editing process to public comment. This outright endorsement of citizen journalism, at least during the editing process, signals approval of the seminal effects such participation can have on the direction of news content. Users can now refocus the material on which news organizations must concentrate their efforts, as citizen journalists in the Middle East have done. Journalists underscore this fundamental shift by acknowledging and apparently embracing new media, with sites that expose reporters' story trends and ideas to user feedback, which can help shape story angles and directions. In short, social media has had a dual effect on how reporters operate, both reshaping the journalistic landscape and letting reporters tap into audience communication and participation in ways that grow readers' influence over journalism's aims and targets.


Changing Answers Change the Questions

With the growth of social media as a method of amassing breaking news, discerning audience opinion and desires, and formulating story angles, the media has begun to change the types of questions it asks and the angles from which it attacks stories. Primarily, audiences are coming from different places (i.e., social media outlets), which naturally affects the marketing strategies, and thus the types of stories, pursued and pushed by traditional media sources.

With this change in traffic flow comes a change in how the media must adapt to find audiences effectively. Integration with social networks is key, though this hyper-connectivity can sometimes spread information in undesirable ways. One BBC blog post asks the pivotal question: "What if younger readers start to see their friends as legitimate news sources?" In many circumstances, it seems, this change has already occurred. This shift in the reporter-audience dynamic has proved so important that the BBC has refocused its growth agenda with an emphasis on social media.

With all of these significant changes in how news gets reported, what garners attention as newsworthy, how audiences participate with reporters, and how this participation shapes the image of modern media, one must ask whether the news is better or worse off for the emergence of social media. It is obvious that journalists will have to continue to adapt to this changing landscape as social media becomes more prevalent. More importantly, it appears that the market for journalism no longer requires primary discovery of facts: this task has been handed to amateur viewers who will provide the information for free through various social networks. The task of the reporter in the age of social media will be to guide, to provide synthesis where there is only doubt, and to shed light on the truth, not just "the facts."

It will also be the task of reporters to ensure that audiences are given fair treatment of what needs to be seen, not just what audiences indicate they wish to see via interactive media platforms. The danger with customized media through reader-reporter interaction is that the news will lose its primary purpose: to inform of the truth. If audiences are given too much sway over the angles and story ideas of future reports, the reports will come to resemble only the prevailing modes of thought within the audience. It has always been the job of the reporter to challenge the status quo and to facilitate transparency as the best disinfectant. With the emergence of social media, our society waits to see if journalists are up to the task of reinventing themselves to face the hurdles inherent in new media systems. Audience participation is here to stay, and it will obviously push reporters to produce content tailored toward the engaged audience. This is our dilemma: what we want to hear may not always be what we need to hear. It will be the job of internet-generation reporters to help us tell the difference.

Cool Story, Bro: Is Every Citizen a Reporter? – by “Carla G”

Call me old fashioned, but I've always liked the pre-Internet romanticism of reading the paper: opening it up with the dramatic movements of an orchestra conductor, scanning through the world's personal log of events, and placing it neatly under my cereal bowl for further inspection. I get it, though: no one has time for such an extended, ceremonious process anymore. We're a generation of multi-taskers, and it is much more difficult to check your email, Facebook, Twitter, and whatever you might actually be working on while sections of the paper are sprawled around you than if you were just opening another tab in your browser. So it is really no surprise that, in 2008, for the first time, more people said they got their national and international news from the Internet than from newspapers (see here for more stark statistics on the newspaper's future). The issue is not just that one day I'll have to permanently adjust my eyes to reading from a screen, or work on not spilling Reese's Puffs on my laptop, but that online journalism has opened a huge can of worms for the question of citizen journalism, redefining who can deliver the news, how they do it, and who checks for veracity.

The Rise of Citizen Journalists

A seminal 2003 report entitled "We Media" defined citizen journalism as members of the public "playing an active role in the process of collecting, reporting, analyzing, and disseminating news and information." To be sure, citizen journalism is a flawed term: professional journalists are citizens too, and non-citizens are also part of the debate. I'm not interested in discussing issues of semantics, though; there are bigger fish to fry.

With the rise of online journalism, barriers to entry in reporting have completely collapsed. JD Lasica classifies media for citizen journalism into six types: audience participation, independent news and information websites (Consumer Reports, the Drudge Report), full-fledged participatory news sites (NowPublic, OhMyNews, DigitalJournal), collaborative and contributory media sites (Slashdot), other kinds of "thin media" (mailing lists, email newsletters), and personal broadcasting sites (KenRadio). We can disseminate knowledge with the click of a button, not just to our friends but to a whole breadth of previously unimaginable contacts. This sort of access gives the idea of "disseminating information" a new dimension, and puts it in the hands of new agents.

To paint a more concrete picture for you, on the one hand we have every self-proclaimed Matt Lauer or Katie Couric, pontificating, venting, broadcasting, divulging, transmitting, interfacing, editing, applauding, degrading, commenting, “liking”, tweeting and twatting from the comfort and privacy of their own (or probably their parents’) home, in others’ sites or in their own created blogs. On the other hand, there are the true over-achieving, multi-tasking David Lats and Steven Brills of the world, who initiate and conduct a public service by providing new channels of diversified information for more specified audiences, on the side of their everyday lives and careers. [And, at the very second I was going to move on to my next paragraph, I got an email from the Master of Pierson College, of course, informing us that he too has started a blog for his “hotlines.”]

What does this all mean? Journalism can no longer be defined by appealing to the medium (i.e., print, television, radio) or the basic notion of “disseminating information.” We need more concrete standards for distinguishing between the amateurs and professionals.

Haves v. Have-Nots

One way to peg down the legitimate reporters could be a "reasonable person" standard: whom would we reasonably take to be sharing news for the purpose of informing the public at large? This seems like the easy way out, so I'll dig a little deeper. What factors constitute journalism? First, there is content. We could base our standards on whether the information reported is a matter of public concern and important for our understanding. This yardstick might put some entertainment sites in danger. Would Above the Law's "Hotties Contest" qualify as public concern? Just about every article on Gawker might fail this standard [visiting the site for the first time, the first article I saw was "Kristen Stewart's wild 'thrusting' almost ruined Twilight." Interesting?]. What about FAILblog or other funny blogs? Even if these subjects constitute "interest" for some people, just about everything might be interesting to someone.

Another factor we could consider is truth. Is the supposed "news" accountable and reliable? Has it been fact-checked? Again, entertainment and gossip sites might run into trouble with this standard. Moreover, it is questionable how much we can even trust printed news, especially because nowadays its sources come from the same places as the sources for blogs: Twitter and Facebook (as we saw in the Twittering the US Airways Plane Crash and Notre Dame student articles).

Similar to truth, a third factor may be having an editor. Putting an intermediary between author and reader not only creates greater accountability, but also gives journalism the perquisite of an elevated discourse among intellectuals. It would imply that a certain degree of expertise is required in the news-reporting process and, most likely, an affiliation with a recognizable news entity.

None of these factors, by themselves or even together, seems satisfactory. To me, the best way to define journalism would be to institute an "intent" standard, similar to the one Georgetown Law student Laura Layton proposes. If your original intention was to gather news and present it in a manner through which the public would acquire information, then, congratulations, you're a journalist. If that intention did not exist at the beginning of the news-gathering process, things are a bit more questionable (see Von Bulow v. Von Bulow, 811 F.2d 136 (2d Cir. 1987)). Sometimes you do not know a story is in the making until after you acquire the information. In these cases, a more thorough analysis of the other factors, as well as the means and ends of the story, is in order. In any case, we should err on the side of defining too much as journalism rather than too little. As we saw several weeks ago, the First Amendment is a sensitive issue that is best not messed with.

Shielded by Shield Laws?

Again, I am not interested in matters of semantics. The way we define journalism matters not because having the title is just kind of cool, but because it comes with certain privileges. We cannot imagine the White House opening the doors of its press conferences to amateurs and professionals alike. Most notably, there is legislation designed to give a news reporter the right to refuse to testify about information and/or sources obtained during the newsgathering and dissemination process; we call these "shield laws." This is akin to attorney-client confidentiality or the doctor-patient privilege (although maybe not for MJ's doctor). The point of these laws is to encourage open communication so that reporters can better do their job of informing the public.

As of now, there is no federal shield law (despite a bipartisan bill called the Free Flow of Information Act, introduced in 2007 and passed by the House in 2008). [Friends at the YDN: courts have already found that student journalists are covered. You're safe.] State shield laws vary in scope, and Hawaii is the only state to specifically address whether bloggers are protected. Its conditions hold that: (1) the individual invoking the privilege regularly participates in reporting or publishing news of significant public interest, (2) the person holds a position similar to that of a traditional journalist or newscaster, or (3) the public interest is served by extending the protection of the statute.

Why do we need to limit the scope of the privilege at all, though? Why can't a shield law apply to all citizens if we can, seemingly, all report? Floyd Abrams put it this way: "If everybody's entitled to the privilege, nobody will get it." In other words, courts might find a countervailing social interest in almost every case in which Joe-blogger releases high-priority information, ruining it for the rest of us. The societal interest will almost certainly be greater than your interest in expressing personal thoughts and feelings on the Internet. Moreover, we see some natural limits in scope. If their skin is not in the game, so to speak, people can say anything. If someone had outed Valerie Plame Wilson on Facebook, they would almost certainly be required to disclose their source; they are not a journalist, just an idiot. This is where privacy and journalistic privileges might get a little fuzzy. But, to me, the most important reason for limiting the privilege's scope is to preserve some semblance of legitimate media. We need to (i) incentivize the open flow of information and new sources of media while (ii) maintaining the integrity of valuable news.

The following three cases better illustrate how this plays out in practice:

  1.  Remember Jason Chen? Someone found an iPhone prototype at a German bar in Palo Alto and sold it to Gizmodo, which ran a story and video report with full disclosure about the new generation of iPhone. Police then raided the blogger's home, searching his computer files to determine whether they could assemble evidence of a felony. Such a search of an ordinary citizen seems outrageous, but it is even more unthinkable against a news organization. [If you did not see Jon Stewart's take on the subject, definitely check it out here.] And, in this case, it seems clear that Gizmodo was intended as a source of news. If shield laws do not apply to Gizmodo, which is owned by Gawker Media, then Lat should watch his back.
  2.  Too Much Media LLC v. Hale: Up against defamation and libel claims, Shellee Hale, a blogger on a campaign against criminal activity in the online adult entertainment industry, was not protected by the shield law because her posts about the software company TMM were determined to be "nothing more than rants and complaints about her opinions, rather than the dissemination of truth and news." The court dismissed the "intent" test, arguing instead that the shield law requires a link to news media as the New Jersey statute defines the term. While I still think the "intent" standard, along with the other factors discussed above, could have been employed, I agree with the court's decision. Hale argued that her blog was the "Wall Street Journal" of porn, yet her statements were made on a third-party site without her ever contacting TMM's representatives for their side of the story. Nothing in Hale's actions or comments resembled the activities of a legitimate media reporter. If we narrow the scope of shield laws, we can incentivize true journalists to meet a quality as well as an accountability standard, something that will certainly be valuable to the news.
  3.  Glik v. Cunniffe, 655 F.3d 78 (1st Cir. 2011): In this case, decided in September 2011, a federal court ruled that recording public officials, including police officers, is protected by the First Amendment. The decision marks a new open-mindedness toward citizen journalism and demonstrates the value of other sources of information that meet a reasonable standard of news.

Extended protections to citizen reporters might further instigate the proliferation of these forms of journalism. This calls for a brief cost-benefit analysis. What could “bad” citizen journalism, or “good” for that matter, be hurting?

Cost-Benefit Analysis

Conclusion and Looking Forward:

Citizen journalists are on the rise. We must implement some standards to both widen and regulate who is protected by certain journalistic privileges. Citizen journalism has costs and benefits. But it is possible to work on mediating the costs. Future challenges will include considering how to encourage better signal-to-noise ratios so that random comments—the ass lobster aficionados—do not drown out the substantive ones.

Increasingly, indicators of quality will matter. What will the peacock's tail of journalism be? First, the ability to concentrate on analysis rather than simply reporting stories. Second, enabling conversation around stories so that they truly come alive. And third, differentiating products for particular audiences. I predict that new business models will soon emerge for both the blog-phobes (those overwhelmed by too much content) and the blog-fiends (those who want to participate). People will gravitate to communities they feel comfortable in, and subscriptions will be introduced as the value of these connections and communities becomes more conspicuous.

Now, a quick reality check: only about 1/8th of people get their main news from the web. They make up about 2/3 of regular internet users and more than half of the readership of blogs, and they dominate social network sites (see more statistics here). At the same time, there are 7 billion people in the world, and about 2 billion, a little over a quarter, are internet users. So, if I'm doing my math correctly (and there is no guarantee that I am), an eighth of that online quarter works out to 1/32, or about 3%, of the world's population reading news on screens. Surprise? Not quite. If you believe the World Bank (I don't), more than two billion people are poor; they would be hard-pressed to spend cash on internet connections of any kind. My point is just that we have yet to realize the full potential of the web as a channel for news, whether those who fill the channels are professionals or not. Everyone, chill out: we still have time to figure out how to shape online and citizen journalism, to help it be fair, accurate, and useful, before some catastrophic premature demise of print news.
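That back-of-envelope calculation is easy to sanity-check in a few lines (a quick sketch using the post's own ballpark figures, which are rough estimates rather than precise statistics):

```python
# Sanity check of the post's rough estimate: what share of the world
# reads news on screens? The inputs are the post's ballpark numbers.
world_population = 7_000_000_000   # ~7 billion people
internet_users = 2_000_000_000     # ~2 billion online

online_share = internet_users / world_population   # about 2/7, i.e. ~29%
# The post rounds the online share down to 1/4 and takes 1/8 of it:
news_on_screens = (1 / 8) * (1 / 4)                # = 1/32 = 3.125%

print(f"online: {online_share:.0%}")               # prints "online: 29%"
print(f"news on screens: {news_on_screens:.1%}")   # prints "news on screens: 3.1%"
```

Using the unrounded 2/7 instead of 1/4 would give roughly 3.6%, so the post's "about 3%" conclusion holds either way.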