Final Project: Search and Seizure Cases Illustrated – by “Will P”

Some of the readings for the search and seizure cases were rather dense, so I made some animations to get across the major points quickly and memorably.  I’ve completed “overviews” for two cases so far and hope to get more in before the final deadline.

Link to YouTube channel:

http://www.youtube.com/user/searchplusseizure?feature=watch

Cases:

Kyllo v. United States

Katz v. United States


-Will Pitler


Final Project – YouTube & 15 Minutes of Fame – by “Colby B”

(Updated 12/17/2011)

YouTube’s 15 Minutes of Fame

Hey Guys!

Check out this link to our final project! It centers on the continually changing nature of YouTube fame and its real-life consequences. This project is meant to explore ‘heap paradoxes’ like those we discussed in class and the inapplicability of the binaries we used to describe fame before the Internet.

We tried to choose videos for our project that show something non-existent in traditional media and therefore unique to online video. Dramatic vlogs, ridiculous web series, independent musicians, and unintentional home videos number among the examples we selected. These videos show that online fame, notoriety, obscurity, or any mix of the above is much different from anything we’ve encountered before.

Beyond the changing nature of fame, we also wanted to investigate the perspectives that YouTube viewers hold of the content they are watching. We especially wanted to probe the importance of reality and authenticity to viewers. Our perceptions of authenticity and inauthenticity have become skewed to the point that we pay homage to actors presenting a persona, and yet we often ignore the real-life consequences that online attacks have on real people. We seem to entertain the notion that much of what we see on YouTube is real, but oftentimes success seems either strongly tied to well-performed personas (inauthentic) or correlated with attacks on real-life individuals who end up as victims in the end. Of course there are counterexamples, but we think this is a strong trend in viral video.

Anyways, we hope you all enjoy the video!

Make sure to share this with all your family and friends 😉

Happy Holidays!

YouTube’s 15 Minutes of Fame
Josh Eisenstat
Ted Papalexopoulos
Colby Brown
David Meierfrankenfeld

Check out our SOPA Live Blog – by “Nick M”

Hey Guys!
As part of our group project on raising awareness to stop the upcoming SOPA bill from passing, we just wanted to let you know that Charlie, Zach, and Nick will be live-blogging the bill’s markup in Congress tomorrow. The session starts at 10:00 am, but we’ll get started at around 9:30, so check in early for insightful coverage and our witty banter.

The link to our blog is here.

The Coming Obsolescence of Your $200,000 Degree – by “Anthony T”

The tweet pictured above was posted earlier this year by Stanford professor and Google engineer Sebastian Thrun. When not busy heading up development of Google’s self-driving car, Thrun has spent the past few months doing something equally intriguing. Thrun, along with several of his colleagues at Stanford, has been part of an experimental program in e-learning that could one day bring Stanford-quality education to the masses, and may even give your $200,000 degree a run for its money.

A brief history of e-learning

In September 2002, after years of planning, MIT made 32 of its courses available for free online in a pilot of the OpenCourseWare project. Today, OpenCourseWare has grown to include 2,080 courses, 42 of which include full video lectures. In the years following its launch, other institutions, including Yale, were quick to follow OpenCourseWare’s lead, launching similar e-learning initiatives. Apple’s iTunesU now acts as a central discovery and distribution platform for over 350,000 higher education lectures from around the world. The effects of this deluge of higher education material can be divided into two categories: the potential it holds to improve traditional education, and the promise it holds for independent learners.

To get an idea of how the availability of open courseware could improve traditional education, consider the launch of Floating University earlier this year. Charged with sharing the expertise of great minds across institutional boundaries, the debut of Floating University was notable in that three schools (Yale, Harvard, and Bard) allowed students to enroll in the online course for credit, albeit with supplementary work at their home institutions. With this as a model, it is easy to imagine universities in the near future cross-licensing their courses, allowing new or improved courses to be introduced in video form at schools for a fraction of the cost of developing a course from scratch. Of course, Floating University’s courses are not exclusive to the three institutions named above; for $500, anyone with an Internet connection can purchase access to the video lectures. This raises an important question, however: beyond the joy of learning, what exactly are customers receiving for their money? Or more aptly, what are customers not getting?

The problem with independent learners

While the open courseware movement certainly holds potential for expanding and improving traditional education, a quick glance at its usage statistics reveals that its greatest contribution has been, and will continue to be, the encouragement of independent learning. As of 2010, 43 percent of OpenCourseWare’s users were independent learners. For Open Yale Courses that number was nearly 70 percent. But what exactly are and should these students be receiving? The success of these initiatives makes clear the immense unmet hunger for education in our country and around the world. These independent learners take it upon themselves to expand their education, but at the end of the day, though they may be intellectually enriched, have they really been provided with any tools to better their standing in life? Why should we be rewarded for our intellectual pursuits and not the users of Open Yale Courses when, statistically speaking, it is likely that some of them have mastered the material just as well if not better than many of us have? For as hard as independent learners work, at the end of the day they are left with no proof of their growth. As a society, it is in our best interest to expand our limited view of what constitutes educational achievement. We need to be open to the development and acceptance of new signals and certifications of education. This is no easy task, however, and the problem is even greater for new educational players, such as Khan Academy, that lack ties to the world of traditional education. The educational experiments taking place today in Palo Alto, and across the world via the web, are providing a path towards breaking traditional education’s stranglehold on educational certification.

Palo Alto, Everywhere

Earlier this year, Stanford made an announcement that took the independent learning community by storm. As part of its Stanford Engineering Everywhere initiative, three computer science courses would be made available for free online, with a twist that made OpenCourseWare appear primitive. Led by Professor Sebastian Thrun and Google’s Director of Research, Peter Norvig (who literally wrote the book on artificial intelligence), the goal of the project was to “change the world by bringing education to places that can’t be reached today.” Inspired by the work of Khan Academy, the pair wanted to take e-learning beyond simple video lectures. In partnership with Know Labs (a startup founded in part by Thrun), the pair developed video lectures peppered with interactive quizzes that are then worked through by the lecturers. In addition, the courses feature weekly homework assignments, a midterm, and a final, all of which are actually graded and given back to students. Registration immediately shot through the roof, with over 160,000 students registering for Thrun and Norvig’s “Introduction to Artificial Intelligence” course, and tens of thousands of additional students registering for database and machine learning classes run on the same model.

Stanford’s risky little experiment has already been declared a success by many, including Professor Hal Abelson of MIT who helped to develop the original OpenCourseWare nearly a decade ago. Stanford has already announced the availability of 16 new courses for Spring 2012, including several that expand beyond computer science to fields such as medicine, engineering, and entrepreneurship. Still, there is one element of the experiment far more important than the large registration numbers and impressive technological innovations: students enrolled in the courses receive grades. Not Stanford credit, mind you, but a certificate of achievement showing how the student fared against actual Stanford students enrolled in the course in Palo Alto. This changes everything.

The road ahead

While Thrun’s tweet seems to imply that Stanford may soon package the courses and sell them for actual degrees, a development which would itself be revolutionary, the larger implications of Stanford’s experiment have already been set in motion. By issuing students certificates of achievement that clearly document the students’ relative performance, Stanford has pried open the iron grip that university degrees have held as the sole indicator of successful higher education. In doing so, they have not only paved the way for other universities to offer similar certifications, but for players not tied to traditional education to develop certifications of achievement that hold some level of legitimacy in the public eye. That’s not to say that come graduation, we will need to fear online certificates one-upping the ‘YALE’ at the top of our transcripts. Still, it’s not difficult to imagine a near future in which high evaluations on a suite of Stanford online courses and a handful of interesting side-projects could allow a degree-less individual to gain a technical interview at Google.

Peter Thiel claims that we’re in a higher education bubble, and that the disruption of education is in our best interest. I’m inclined to agree. The expansion of high-quality e-learning will bolster our economy. It will help to spur innovation that will keep us competitive on the world stage. It has the potential to finally level the playing field and to genuinely improve lives. E-learning is not going to significantly devalue our $200,000 educations anytime soon. But I look forward to the day that it does.

Generativity and Freedom – by “bashcraft”

Asimov’s short story, “Profession,” gives a unique take on the effectiveness of education systems and the intrinsic human right of freedom; specifically, the freedom of choice.


His story describes a world where children at age 8 are given the ability to read, and 10 years later are assigned a set of tapes that will determine their profession, removing any freedom people have regarding their livelihood. In “Profession,” the government champions this system because it protects citizens from being overwhelmed by society, keeps them from destroying themselves, and offers a smooth transition to adulthood. On his education day, the day the government gives you the knowledge of your profession, a doctor tells George, “You could be devoured by a subject and if the physical make-up of your brain makes it more efficient for you to be something else, something else you will be.” In essence, the protagonist George Platen’s problems with this system offer commentary similar to that of the readings regarding appliancization and generative technologies: with no freedom of choice, will the ability to create, think, and advance as a society slowly whittle away into nothing? Will the “smarter” members of society continue to maintain a monopoly on the preeminent technologies, forcing the rest of the world into a stagnant purgatory?

To further the comparison between the government in “Profession” and the dangers of non-generative technologies produced by vanguard companies like Apple, I point to Tread Lightly When Embracing the Mac App Store by Nicholas Deleon. Deleon warns against Apple’s vendetta against generativity, the capacity of a system to allow room for future changes, on its App Store platform; Asimov warns against the lack of generativity in an educational system that leaves children with a defined skill set rather than a higher intellectual capacity. Deleon asserts, “My concern, I suppose, is that an over-reliance on, and an over-veneration of, the Mac App Store could lead to an unnecessary, if not dangerous, homogenization of Mac software.” Before I continue identifying parallels, it is important to highlight that there are those who feel Mac-produced software is simply the best and safest out there and that there is nothing wrong with this homogenization. In fact, Steve Jobs celebrated it at the release of the iPhone: “We define everything that is on the phone… You don’t want your phone to be like a PC. The last thing you want is to have loaded three apps on your phone and then you go to make a call and it doesn’t work anymore.”

This statement struck me as eerily similar to the one the doctor made to George on his education day: he claimed the government was protecting its citizens and making life easier by assigning professions. While there may be some validity to both Jobs’ and the doctor’s comments, it is by no means worth sacrificing the freedom generativity enables.


The Broader Implications of Appliancization – by “Benjamin H”

Appliancization obviously has implications for how we interact with the internet; assuming we use tethered devices, we necessarily become consumers without the ability to generate changes to the technologies we use, and it’s hard not to think that some innovation will be lost without the tinkering that has been so typical of the internet thus far. Some think that anxiety is overblown or irrelevant, though. Is it really a problem if we lose the tinkering culture that was in part responsible for the growth of the internet, if it means greater security and, for many, more intuitive functionality? Perhaps the debate about generativity extends beyond technology to more philosophical issues like our economy, and texts like Asimov’s “Profession” are also bound up in even greater anxieties about our government that are no longer relevant. Tim Wu, a media historian, pushed back against Zittrain’s arguments about appliancization by highlighting the broader implications of the debate. Although Zittrain is concerned about technology, what really underlies the debate are questions of how we want to envision our economy:

The [internet and the media industry] are in the midst of an epic struggle between two competing economic systems that are almost as different as communism and capitalism. The first, old-school scale economics, is behind most of the media empires of the last century, from NBC to Paramount, and it still powers firms such as AT&T and News Corp. It is a model in which bigger is better, products are standardized, and integration is massive. It is the model that ruled American media–and indeed American industry–for most of the twentieth century. Its rival is a model premised on the economics of decentralized innovation and content creation that we associate with the Internet–loosely, the generativity that Zittrain is talking about. (http://www.nationalreview.com/agenda/244145/chris-anderson-appliancization-internet-reihan-salam)

At some level, then, appliancization is a debate about how our economy should function, and it seems like tech companies like Apple are trending more towards traditional media models. Many industries run on a model closer to that of the traditional media industry, and part of the anxiety surrounding appliancization may be simply a desire to preserve the internet’s distinctive culture. But perhaps there’s no reason to think these two systems are mutually exclusive. HTML5 and other technologies might allow people to make iPads and other tethered devices generative.

Asimov’s “Profession” is an interesting example of more general concerns about the loss of our freedom of choice and the ability to educate ourselves. If we can’t choose our paths and learn creatively from scratch, perhaps education will become commoditized like any other product. Asimov’s specific concerns aren’t relevant since we can’t, unfortunately, feed ourselves information tapes, but he seems to be concerned more generally with the loss of our freedom of choice and the notion that some higher authority can construct our choices for us. “Profession” was written in 1957, well into the Cold War, and part of his concern seems to stem from the then-rational fear of Soviet-type domination, as some reviewers have noted (http://www.helium.com/items/1994997-profession-by-asimov). As the doctor tells George, “Surely you know that being interested means nothing. You could be devoured by a subject and if the physical make-up of your brain makes it more efficient for you to be something else, something else you will be. You know that, don’t you?” This kind of concern for the “efficiency” of society destroying the individual is exactly the kind of Cold War stereotype of the Soviet Union that would have been repeated in 1957. Perhaps it’s just me, but the fact that the evil doctor has an Italian name (Antonelli) and the historian’s name, Ingescu, sounds like the famous Romanian dictator Antonescu indicates at least some kind of post-World War II anxiety about the loss of freedom under undemocratic regimes. Even though Zittrain’s argument against appliancization is obviously not a Cold War argument, concerns about loss of freedom still reflect a similar aversion to any authority interfering with personal choice and action, just like Asimov’s concern. I’m skeptical that these concerns are legitimate today.
Even if we can’t maintain generativity with HTML5 and other technologies, it would seem to me that security concerns are more of a threat to our freedom (from undue invasion of privacy, perhaps) than the threat to our ability to innovate. Moreover, since a new generation of programmers will have to take over the app-making process at some point, the industry has to maintain some way for people to learn how to innovate, even if it’s only in their established fashion. Perhaps programming will simply become a more specialized field like medicine and will lose amateurs, but security concerns may currently outweigh the benefits of amateur innovation.

What’s all the fuss about? – by “Dan T”

I am often impatient with arguments that assume we live in a free-market world, because we don’t – people don’t always think or behave rationally, and competition isn’t always perfectly fair.  But in some cases, imagining that the way things are is just a reflection of market forces can be helpful in understanding a situation.  What if, rather than being marketing victims duped by Apple, Google, or any other software company promoting software-as-service, tethered appliances, or locked-down devices (which you might believe to be the case if you read the comments on tech blogs), people use these services because they want them?  Call me crazy, but maybe the shift toward non-generativity stems from the fact that non-generative devices suit people’s needs, or at least do so better than generative ones.

Apparently, the good guys are now the bad guys

I’ll suggest that for the vast majority of the population, that is true.  Grandma doesn’t need a generative PC.  Grandma needs a device that will help her get the job done, even though she has no idea how a computer works.  It needs to be intuitive and safe, so that even when she clicks the wrong button or performs the wrong procedure, nothing truly bad happens.  She doesn’t want to deal with software that is potentially ineffective or malicious.  The fact that both the number of programs she can run and the number of in-program adjustments she can make is greatly limited is probably of no concern to her.  She just wants secure software that gets the basic job done.

I’ll also suggest that where we are now is a fairly normal and expected place on the timeline of a new technology, rather than the splitting-train-tracks situation depicted on the front of Zittrain’s book.

Dramatic, much?

New technology often starts out generative and ends up non-generative.  The automobile, for instance, started out much more generative than it is today – around the turn of the twentieth century, there were hundreds of manufacturers tinkering around with their machines.  While it wasn’t quite as generative as the PC or the Internet, the engine was fairly accessible and the automobile owner could make his or her own changes as desired.  That’s still true today, but to a much lesser extent.  Auto technology has improved to such an extent that most car owners would rather take their car to the mechanic than look under the hood themselves, and manufacturers are increasingly finding ways to monopolize repairs and prevent owners from making their own adjustments.

So what happened?  Around the turn of the century, everybody and their mom was trying their hand at making cars.  This led to a lot of innovation, and some problems: breakdowns were frequent and car safety was a novel concept.  The industry started cleaning itself up around 1930 as people became concerned about reliability and safety.  The number of manufacturers greatly decreased and names like Ford and Chrysler came to the forefront.  These days, most people would not even consider buying a car from an amateur.  But there are still isolated tinkerers out there.

Am I crazy for thinking that what’s happening to software is essentially the same thing?  When a new type of technology shows up, it’s often basic enough that anyone can get involved.  Lots of people do, and some of what results is awesome and pushes the frontiers of the technology.  And some of it sucks. This vanguard is not particularly perturbed by the suckiness, but that all changes once the technology spreads to people who aren’t interested in the technology itself, but rather the benefits it might carry for them.  These people don’t want potential.  They want current value.  They are interested in what the technology can do, not what it could do. Obviously we’ve seen this with the PC and with the Internet.

To cater to this population, hackers started making software and hardware for non-hackers.  It was user-friendly and reliable, and not as generative.  In fact, the more user-friendly and reliable the software, the less generative it was.  This is essentially unavoidable.  Most software is only useful if its capabilities are spelled out and finite.  Make it any more generative than that, and it’s easy for the user to get overwhelmed.

Seriously, this is the population we're dealing with here.

But the reason that we shouldn’t get concerned about this shift is that it represents the expansion of software (or any technology) into a new segment of the population rather than a transformation of the technology itself.  The hackers still want generative software and hardware, and they’re still getting it (more on this in a second).  The people who want a reliable experience are getting that.  So everyone wins.  It’s not a zero-sum game, because the software industry is not monolithic.  What is happening now is essentially consumer discrimination – Apple and other giants are writing software for n00bs, and smaller developers (but also to some extent the giants) are writing software for hackers.

Of course, Zittrain gets this (he quotes critic David Post: “a grid of 400 million open PCs is not less generative than a grid of 400 million open PCs and 500 million locked-down TiVos”).  He has two main concerns: first, that through market dominance non-generative software will eventually swallow generative software; and second, that a generative software world is more advantageous than one that is primarily non-generative.

I don’t think the first consideration is as threatening as Zittrain makes it out to be.  The fact is that the software market is very, very large, and there are many, many niches.  Since hackers write software, and hackers are the kinds of people who will generally want generative and unlimited software, you have to construct a pretty convoluted hypothetical in order to get a world where literally all software comes through corporate giants.

The second concern should be taken very seriously though.  How important is creativity?  Is it more important than efficiency?  Is it more important than security?  Is it more important than giving people what they, on some level, want?  These are obviously big questions.

Come on, Ben... tell us how you REALLY feel.

Ultimately, I just can’t side with Zittrain here.  The primary argument for bolstering the generativity of software is that the advantages it provides in innovation outweigh the negatives.  That argument has a lot of merit, but only for some segments of the population.  How much innovation is Grandma realistically going to produce?  Innovations are always driven by a very small portion of a given population: the curious, creative types, the inventors.  Most people just consume and don’t innovate, and that’s okay.  The 400 million open PCs referred to by Post are probably just as generative as the most innovative 25 million-PC subset.  Most of those PCs are, generatively speaking, dead weight.

1% of the computers have 99% of the generativity... sorry Grandma.

So what’s important is that these different groups have different environments.  The inventors want and need a sandbox to play around in, and if given one, they’ll continue to push the frontiers, develop new products, and make cool new toys.  The consumers want and need the toys.  They don’t want the sandbox; they’re not in it for the mucking around.  If you give them one, they’ll get frustrated, they won’t do anything, and they might even get hurt.

Given that, we don’t want a purely generative software environment.  We want a segmented environment in which the software that is generative is incredibly generative – it is essentially unbounded in its capabilities (with all of the associated problems and complications) – and the software not in that category is safe and efficient first, and generative second.  In this environment, people can choose where they belong, depending on the task, and adjust as necessary.  In other words, we want an environment that is roughly the result of market forces.  Most purchasers will look for safe and non-generative software, so most software will be safe and non-generative.  And enough people are interested in generative software to keep that niche healthy and the innovations flowing.

In order to bring about and maintain this kind of environment, generative software needs to be absolutely sacred.  We’re putting all of our innovation eggs in one basket, so the generative environment needs to allow for as much creativity as possible.  Jailbreaking most devices should be legally protected.  Of course, the developers of those devices can terminate their warranties and withdraw support, but if users want to experiment with their hardware or software, they should have the right to.  You can purchase a tethered device if you want to, but you ought to be able to “de-tether” it at any time (manufacturers can determine whatever non-legal consequences that will entail from their end).

Future law will require humane treatment of anthropomorphic electronics.

While these changes would definitely help, I think we’re doing just fine now, all things considered.  Yes, software, the PC, and the Internet are all becoming less generative for most of the population, but until they become less generative for everyone, I won’t be sweating bullets, and on the other hand, the security that comes with non-generativity may be just what Grandma is looking for.

Why tweet? – by “Yael Z”

Celebrity Tweeting: Twitter’s Beginnings

“It’s like Facebook status updates…and that’s it…” This is the way that my friends described Twitter to me when I was a senior in high school and Twitter was just coming out. Appealing, no? We didn’t get it. We thought Twitter would be another fleeting Internet trend. After all, why use Twitter when it does just a fraction of what Facebook does?  Our hypothesis was quickly disproven. It became clear that Twitter was on the Internet to stay. It started with celebrity tweeting. Tweeting became a way for celebrities to interact with their fans as actual human beings. Fans could receive information about celebrities’ day-to-day lives from the celebrity him/herself, rather than from the tabloids. For example, followers of Demi Moore found out on July 14th, 2010.

On October 8th, Seacrest fans found out:

It’s a completely guilt-free stalking tool! Fans hear about celebrity actions instantly and directly from the source. Celebrities are the new Facebook friend, the new cyber buddy. Twitter created a connection between fan and celebrity the way no social networking tool had done in the past.

Thus, the rise of Twitter began. Twitter quickly gained user after user. At first, just to follow their celebrity crushes, but soon to use it themselves. The following chart shows the growth rate of Twitter posts between April and May 2007, just a small sample of the exponential growth of Twitter posters.

(Java et al.)

The Mass Appeal of Twitter: Why?

My question is…why? I understood the celebrity appeal of Twitter (perhaps because of my personal guilty-pleasure love of celebrity gossip), but what is it that makes people not only want to tweet their every movement and every thought, but also want to read other people’s movements and thoughts? As my grandmother would say (and this must be read in a Minnesotan accent), “it’s just plain creepy the way you kids stalk each other on the Interweb.” She has a point. Why are we entertained by the location of a peripheral friend, the musings of a second or third cousin?

As it turns out, I’m not the first person to ask this question. Many anthropologists and technology researchers have found Twitter an interesting phenomenon and have conducted empirical studies to try and figure out just what it is that makes Twitter an engaging platform for social interaction. I’ve compiled this information and the following is a list of what I found to be the top 5 reasons people use Twitter:

1. COMMUNITY

The number one reason that people use Twitter is to find communities based on personal interests. Twitter’s innovative hashtag (#) system allows people to tweet about certain topics, and other people interested in those topics can easily find those tweets and often retweet or comment on them. Thus, Twitter communities are born. The following graphic shows one section of one particular Twitter community, the gaming community.

 (Java et al.)

A community such as this one not only shares its feelings on gaming and recent developments within the gaming world; its members also share with each other personal feelings and experiences from their everyday lives. Thus, Twitter extends beyond the boundaries of something like a gaming blog. Users find each other because of a shared love of gaming, but eventually begin to share other news and personal feelings and experiences with each other, creating a virtual community.

Studies also found that within these communities, natural leaders emerge as the most reliable and most up-to-date communicants. For example, Scobleizer, a tech geek blogger who tweets about the latest technology news, has gathered many followers within the technology Twitter communities. Because he has so many followers, he also connects many different sub-communities within the larger technology Twittersphere, and Twitter users from similar communities can then find one another because of a shared Twitter leader. The following chart illustrates the connections that Scobleizer makes between Twitter technology communities.

 (Java et al.)

Twitter communities work like communities in real life – a connection through one person opens up a whole new world of people, ideas, and entertainment.

2. NEWS

This topic is slightly controversial, as people have been wary of calling Twitter a news source. However, a recent study showed that over 81% of all tweets refer to some current event. This doesn’t mean that people necessarily get their news from Twitter, but rather that it is a great place to discuss current events with others. The most common retweets (reposting someone else’s tweet on one’s own Twitter page) all come from news sources. The following chart shows the most common topics on Twitter in 2009, based on how often they were retweeted:

(Zhao et al.)

All but two of these topics are offline news. This statistic shouldn’t be surprising. The obvious reason for the prevalence of news topics on Twitter is that these topics are universal. Americans can follow Iranians’ thoughts on the Iranian election. Fans of Kanye West can gather by the millions to talk about his latest scandal.

Of course, news junkies could do this before Twitter via blogging, but one thing sets Twitter apart from these other informal news sources: BREVITY. The 140-character limit keeps up with our quickly diminishing attention spans. People don’t need to read an article about Osama bin Laden’s death when they can find what they need in a quick 15-word statement. In addition, Twitter is updated in real time. People read about the Northeast earthquake this summer before they felt it. The raid on Osama bin Laden’s compound was tweeted before the news hit the New York Times. Twitter is the perfect way to share news with a generation that can barely sit through Good Morning America.

3. INFORMAL COMMUNICATION

This type of communication is what I’ll call “water cooler conversation.” Beyond finding communities within Twitter to be a part of, people also use Twitter to maintain a constant flow of communication within real life communities.

The best example of this type of community is the office. As companies become more and more global, employees end up farther and farther from the people they work with. This expansion of the office changes the way people communicate with their colleagues. Twenty years ago, office workers gathered around the water cooler to discuss current events, last night’s episode of Miami Vice, likes, dislikes, and so on. Today, employees working together on one project may be separated by the Atlantic Ocean rather than by a few cubicles.

This distance creates a problem in the work environment where team members don’t know each other as human beings. As “Donna,” a participant in a recent study on the effects of Twitter in the workplace, said: “I think it makes the person more human, than just professional carbon unit.” Many offices have started to use Twitter as a way to remedy that situation. An employee in China may tweet about his or her favorite foods or sports teams, and a fellow employee in New York City gets to know him or her better. Thus, Twitter becomes a transoceanic water cooler, allowing people from a specific real-life community to come together via the Internet to get to know one another as human beings (as ironic as that may be).

4. KEEPING IN TOUCH

The fourth reason people use Twitter is to stay in touch with friends and relatives. These (again) brief snippets of other people’s goings-on make it very easy for friends and acquaintances alike to know what’s happening in someone else’s life. Facebook created an online environment in which it is not only okay, but encouraged, to broadcast what you did last night or over the weekend. Twitter took this one step further: don’t tell them what you did last night, tell them what you’re doing right now. And people do just that. The following chart shows what people tweet about based on the day of the week.

(Java et al.)

Tweets about school decrease significantly from Friday to Saturday, showing that people are very unlikely to tweet about school over the weekend. While this chart doesn’t prove anything, it suggests that people are more likely to tweet about what is happening in their lives right now, rather than yesterday or tomorrow. This way, people don’t have to talk every day to find out what’s going on in their friends’ lives. It is another easy, low-effort way to achieve something that would have taken much greater investment ten or fifteen years ago.

5. SPREADING INFORMATION

The final reason people use Twitter is relatively new. Twitter is already a water cooler, a telephone, and a community center; why not make it a bulletin board as well? Two summers ago, Iranians gathered to protest the June 12 election, and their voices were heard not via posters or even blogs, but via Twitter. News articles called Twitter “the medium of the moment,” and it makes perfect sense. Twitter is free, fast-spreading, and brief, and most of all it was invented to connect people to one another. The charts above show just how easy it is for tech nerds or gamers to find each other; why would it be any different for protesters against the Iranian government? Hashtags and retweets make it easy for anyone starting a protest movement to rally supporters.

It should come as no surprise that the most recent mass protest movement, the Occupy movement in the United States, used Twitter to gather attention and supporters. Anywhere you see comments on the movement (even offline) you will see “#occupy.” Twitter has become a symbol of the movement itself. My father put it this way: “If I wanted to protest something when I was your age I would have had to go door-to-door, put up fliers on every billboard, phone bank, and that would have been just to gather people in my neighborhood. Now you put a little pound sign next to your cause and millions of people are outside the next day.” Twitter has become the easiest way to quickly and effectively spread information across class, country, and ocean. It is a global billboard urging people to tack up their problems every second of every day.

 

Charts obtained from:

How and Why People Twitter: The Role that Micro-blogging Plays in Informal Communication at Work by Dejin Zhao and Mary Beth Rosson

Why We Twitter: Understanding Microblogging Usage and Communities by Akshay Java, Tim Finin, Xiaodan Song, and Belle Tsang

Cool Story, Bro: Is Every Citizen a Reporter? – by “Carla G”

Call me old-fashioned, but I’ve always liked the pre-Internet romanticism of reading the paper: opening it up with the dramatic movements of an orchestra conductor, scanning through the world’s personal log of events, and placing it neatly under my cereal bowl for further inspection. I get it, though: no one has time for such an extended, ceremonious process anymore. We’re a generation of multi-taskers. And it is much more difficult to check your email, Facebook, Twitter, and whatever you might actually be working on while sections of the paper are sprawled around you than if you were just opening another browser tab. So it is really no surprise that, in 2008, for the first time, more people said they get their national and international news from the Internet than from newspapers (see here for more stark statistics on the newspaper’s future). The issue is not just that one day I’ll have to permanently adjust my eyes to reading from a screen, or work on not spilling Reese’s Coco Puffs on my laptop, but that online journalism has opened a huge can of worms for the question of citizen journalism, redefining who can deliver the news, how they do it, and who checks for veracity.

The Rise of Citizen Journalists

A seminal 2003 report entitled “We Media” defined citizen journalism as members of the public “playing an active role in the process of collecting, reporting, analyzing, and disseminating news and information.” To be sure, citizen journalism is a flawed term. Professional journalists are citizens too, and non-citizens are also part of the debate. I’m not interested in discussing issues of semantics, though; there are bigger fish to fry.

With the rise of online journalism, barriers to entry in reporting have completely collapsed. JD Lasica classifies media for citizen journalism into the following six types: audience participation; independent news and information websites (Consumer Reports, the Drudge Report); full-fledged participatory news sites (NowPublic, OhmyNews, DigitalJournal); collaborative and contributory media sites (Slashdot); other kinds of “thin media” (mailing lists, email newsletters); and personal broadcasting sites (KenRadio). We can disseminate knowledge with the click of a button, not just to our friends but to a whole breadth of previously unimaginable contacts. This sort of access gives the idea of “disseminating information” a new dimension, and puts it in the hands of new agents.

To paint a more concrete picture, on the one hand we have every self-proclaimed Matt Lauer or Katie Couric: pontificating, venting, broadcasting, divulging, transmitting, interfacing, editing, applauding, degrading, commenting, “liking,” tweeting and twatting from the comfort and privacy of their own (or, probably, their parents’) homes, on others’ sites or on blogs of their own creation. On the other hand, there are the true over-achieving, multi-tasking David Lats and Steven Brills of the world, who perform a public service by providing new channels of diversified information for more specialized audiences, on the side of their everyday lives and careers. [And, at the very second I was about to move on to my next paragraph, I got an email from the Master of Pierson College, of course, informing us that he, too, has started a blog for his “hotlines.”]

What does this all mean? Journalism can no longer be defined by its medium (print, television, radio) or by the basic notion of “disseminating information.” We need more concrete standards for distinguishing amateurs from professionals.

Haves v. Have-Nots

One way to pin down the legitimate reporters could be a “reasonable person standard”: whom we would reasonably take to be sharing news for the purpose of informing the public at large. This seems like the easy way out, so I’ll dig a little deeper. What factors constitute journalism? First, there is content. We could outline our standards based on whether the information reported is a matter of public concern and important for our understanding. This yardstick might put some entertainment sites in danger. Would Above the Law’s “Hotties Contest” qualify as public concern? Just about every article on Gawker might fail this standard [visiting the site for the first time, the first article I saw was “Kristen Stewart’s wild ‘thrusting’ almost ruined Twilight.” Interesting?] What about FAILBlog or other funny blogs? Even if these subjects constitute “interest” for some people, just about everything might be interesting to someone.

Another factor we could consider is truth. Is the supposed “news” accountable and reliable? Has it been fact-checked? Again, entertainment and gossip sites might run into trouble with this standard. Moreover, it is questionable how much we can trust even printed news, especially because nowadays its sources come from the same place as the sources for blogs: Twitter and Facebook (as we saw in the Twittering the US Airways Plane Crash and Notre Dame student articles).

Similar to truth, a third factor may be having an editor. Putting an intermediary between author and reader not only creates greater accountability, but also gives journalism the perquisite of an elevated discourse among intellectuals. It would imply that a certain degree of expertise is required in the news-reporting process and, most likely, an affiliation with a recognizable news entity.

None of these factors, by themselves or even together, seems satisfactory. To me, the best way to define journalism would be to institute an “intent” standard, similar to the one Georgetown Law student Laura Layton proposes. If your original intention was to gather news and present it in a manner through which the public would acquire information then, congratulations, you’re a journalist. If that intention did not exist at the beginning of the news-gathering process, things are a bit more questionable (see Von Bulow v. Von Bulow, 811 F.2d 136 (2d Cir. 1987)). Sometimes you do not know a story is in the making until after you acquire the information. In these cases, a more thorough analysis of the other factors, as well as the means and ends of the story, is in order. In any case, we should err on the side of defining too much as journalism rather than too little. As we saw several weeks ago, the First Amendment is a sensitive thing that is best not messed with.

Shielded by Shield Laws?

Again, I am not interested in matters of semantics. The way we define journalism matters not because the title is just kind of cool, but because it comes with certain privileges. We cannot imagine the White House opening the doors of its press conferences to amateurs and professionals alike. Most notably, there is legislation designed to give a news reporter the right to refuse to testify about information and/or sources of information obtained during the newsgathering and dissemination process; we call these “shield laws.” They are akin to attorney-client confidentiality or doctor-patient privilege (although maybe not for MJ’s doctor). The point of these laws is to encourage open communication so that reporters can better do their job of informing the public.

As of now, there is no federal shield law (despite a bipartisan bill, the Free Flow of Information Act, introduced in 2007 and passed by the House in 2008). [Friends at the YDN: courts have already found that student journalists are covered. You’re safe.] State shield laws vary in scope, and Hawaii is the only state to specifically address whether bloggers are protected by shield laws. Its conditions hold that: (1) the individual invoking the privilege regularly participates in reporting or publishing news of significant public interest; (2) the person holds a position similar to that of a traditional journalist or newscaster; or (3) the public interest is served by extending the protection of the statute.

Why do we need to limit the scope of the privilege at all, though? Why can’t a shield law apply to all citizens if we can, seemingly, all report? Floyd Abrams stated, “If everybody’s entitled to the privilege, nobody will get it.” In other words, courts might find a countervailing social interest in almost every case if Joe Blogger releases high-priority information, ruining it for the rest of us. This is because the societal interest will almost certainly be greater than your interest in expressing personal thoughts and feelings on the Internet. Moreover, we see some natural limits in scope. If their skin is not in the game, so to speak, people can say anything. If someone had outed Valerie Plame Wilson on Facebook, they would almost certainly be required to disclose their source. They are not a journalist, just an idiot. This is where privacy and journalistic privileges might get a little fuzzy. But, to me, the most important reason for limiting the privilege’s scope is to preserve some semblance of legitimate media. We need to (i) incentivize the open flow of information and new sources of media while (ii) maintaining the integrity of valuable news.

The following three cases better illustrate how this plays out in practice:

  1. Remember Jason Chen? Someone found an iPhone prototype at a German beer garden in Redwood City and sold it to Gizmodo, which ran a story and video report, with full disclosure, about the new generation of iPhone. Police then raided the blogger’s home, searching his computer files to determine whether they could put together evidence of a felony. Such a search of an ordinary citizen seems outrageous, but it is even more unthinkable against a news organization. [If you did not see Jon Stewart’s take on the subject, definitely check it out here.] And, in this case, it seems clear that Gizmodo was intended as a source of news. If shield laws do not apply to Gizmodo, which is owned by Gawker Media, then Lat should watch his back.
  2. Too Much Media LLC v. Hale: Facing defamation and libel claims, Shellee Hale, a blogger on a campaign against criminal activity in the online adult entertainment industry, was not protected by the shield law because her posts about the software company TMM were determined to be “nothing more than rants and complaints about her opinions, rather than the dissemination of truth and news.” The court dismissed the “intent” test, arguing that the shield law instead requires a link to news media as the New Jersey statute defines the term. While I still think the “intent” standard, along with the other factors discussed above, could have been employed, I agree with the court’s decision. Hale argued that her blog was the “Wall Street Journal” of porn. Still, her statements were made on a third-party site without her ever contacting TMM’s representatives for their side of the story. Nothing in Hale’s actions or comments resembled the activities of a legitimate media reporter. If we narrow the scope of shield laws, we can incentivize true journalists to meet quality and accountability standards, something that will certainly be valuable for the purposes of news.
  3. Glik v. Cunniffe, 655 F.3d 78 (1st Cir. 2011): In this case, decided in September 2011, a federal court ruled that recording public officials, including police officers, is protected by the First Amendment. This decision marks a new open-mindedness toward citizen journalism and demonstrates the value of other sources of information that meet a reasonable standard of news.

Extending protections to citizen reporters might further spur the proliferation of these forms of journalism. That calls for a brief cost-benefit analysis. What could “bad” citizen journalism, or “good” for that matter, be hurting?

Cost-Benefit Analysis

Conclusion and Looking Forward:

Citizen journalists are on the rise. We must implement some standards to both widen and regulate who is protected by certain journalistic privileges. Citizen journalism has costs and benefits. But it is possible to work on mediating the costs. Future challenges will include considering how to encourage better signal-to-noise ratios so that random comments—the ass lobster aficionados—do not drown out the substantive ones.

Increasingly, indicators of quality will matter. What will the peacock’s tail of journalism be? First, the ability to concentrate on analysis rather than simply reporting stories. Second, enabling conversation around stories so that they truly come alive. And, third, differentiating products for particular audiences. I predict that new business models will soon emerge for both the blog-phobes (those overwhelmed by too much content) and the blog-fiends (those who want to participate). People will gravitate to communities they feel more comfortable in. Subscriptions will be introduced as the value of these connections and communities becomes more conspicuous.

Now, a quick reality check: only about 1/8th of Internet users get their main news from the web. They make up about 2/3 of regular users and more than half of the readership of blogs. They dominate social networking sites (see more statistics here). At the same time, there are 7 billion people in the world, and about 2 billion are Internet users, or a little over a quarter. So, if I’m doing my math correctly (and there is no guarantee that I am), we are talking about 1/32, or about 3%, of the world’s population that reads news on screens. Surprise? Not quite. If you believe the World Bank (I don’t), more than two billion people are poor, and they would be hard-pressed to spend cash on Internet connections of any kind. My point is just that we have yet to realize the full potential of the web as a channel for news, whether those who fill the channels are professionals or not. Everyone, chill out: we still have time to figure out how to help online and citizen journalists be fair, accurate, and useful before some catastrophic premature demise of print news.
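The back-of-the-envelope arithmetic above (an eighth of the online audience reading news mainly on the web, times roughly a quarter of the world being online) can be checked mechanically. A minimal sketch using Python’s fractions module, with the post’s rough figures taken as assumptions:

```python
from fractions import Fraction

# Assumed rough figures from the post, not precise statistics:
web_news_share = Fraction(1, 8)  # share of the online audience who read news mainly on the web
internet_share = Fraction(1, 4)  # roughly a quarter of the world's population online

overall = web_news_share * internet_share
print(overall)         # 1/32
print(float(overall))  # 0.03125, i.e. about 3%
```

Using the more exact two-in-seven figure for Internet penetration would give 1/28, about 3.6%, so the post’s “about 3%” conclusion holds either way.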