The Video Game Industry and DRM – Time for a Change – by “Ryan B”

While the online music industry is shifting away from utilizing DRM, or digital rights management, the video game industry continues to employ restrictive DRM technologies. Although these two industries are certainly not identical, video game companies may have some important lessons to learn from recent changes made by the major music labels.

Over the past couple of years, the major music labels have agreed to do away with DRM restrictions on digital songs sold through online music stores such as iTunes. DRM proved an unsuccessful strategy for the online music industry, failing to stop the piracy of songs while driving away honest consumers who were frustrated by the lack of control they had over purchased music content. The subsequent decision to eliminate DRM restrictions was applauded by consumers, who are now purchasing online music more than ever. The lesson was a simple one: making your consumers happy is a good business move.

This concept, however, seems to have slipped past some of the leading companies in the video game industry. Three recently released, high-profile games, Assassin’s Creed 2, Command & Conquer 4, and Silent Hunter 5, feature a new form of DRM protection called “always-online DRM” that is frustrating even the most loyal fans of these franchises.


Always-online DRM requires a user to be online at all times in order for the game to run. On the one hand, this allows video game companies to combat piracy by periodically checking that a user is a valid one. On the other, this can create a truly dissatisfying gaming experience for those consumers who have legitimately purchased a gaming title.

For example, if a user is playing EA’s Command & Conquer 4 in single player mode, which does not require an internet connection, they can be booted from the game and lose the progress they’ve made on their mission if their internet connection happens to go out. One reviewer of the game complained that even without losing the connection, “the spectre of catastrophe hung over [his] head like a razor sharp guillotine.” Even worse, if a user happens to be somewhere that lacks internet service, he simply cannot play the game at all.

While always-online DRM might make sense in some future world in which an internet connection is available everywhere with constant reliability, this is clearly not the case today. Instead, always-online DRM punishes the very consumers who are keeping video game companies in business: those who have legally purchased the games. This approach to DRM, which places combating piracy above the gameplay experience of consumers, has proven incredibly damaging in the past. For example, another EA game, Spore, was released in late 2008 with DRM restrictions that prevented users from installing the game on more than three machines. In addition, Spore required users to verify that their copy was legitimate every time they went online. Users became so frustrated with this highly restrictive DRM system that piracy became rampant, landing Spore at the top of the list of most pirated games for 2008.

The lesson here is that video game companies must be sure that they don’t employ DRM technologies that are so restrictive that they end up alienating the very consumers upon which their businesses depend. While doing away with DRM protection entirely may not make sense for the video game industry, keeping consumers satisfied must be prioritized above and beyond keeping pirates at bay. Otherwise, once-loyal customers may end up joining the pirates in droves.

Who’s Responsible for Harassment? – by “Shirley B”

Harassment is no new phenomenon on the Internet.  As astutely stated by Penny Arcade, normal person + anonymity + audience = Total ****wad. (Don’t click if cursing offends!)  But for websites that present a platform for user-generated content, harassment has presented interesting new problems.  Are these websites liable for user content?  Are they simply providers of the platform on which users place content or are they themselves content providers?

Two years ago a Twitter user was harassing a woman named Ariel Waldman over the social networking website.  After she submitted a complaint, a Twitter administrator removed the tweets from the public timeline.  However, when Waldman asked that the user be banned, she received a reply that the comments were not illegal, simply malicious. This was despite the fact that Twitter explicitly lists harassment as a violation of its Terms of Service and reserves the right (though is not obligated) to remove content and users in violation of those terms.  Twitter promised to revise its Terms of Service to clarify its enforcement policies, but would do nothing further, for fear of a lawsuit.  Jason Goldman, the product manager for Twitter, stated in a post at getsatisfaction.com that “As a communication utility, Twitter does not get involved in disputes between users over issues of content except in very specific situations. Twitter is a provider of information, not a mediator.”  In the same post, Goldman also stated that the company was working on improving its blocking feature, which could solve the problem of harassment.  Problem solved.  The issue was only with Twitter’s ToS, its enforcement of it, and its insufficient blocking tool.  Not quite.

John Dozier Jr., an expert in Internet law, says that Twitter may have surrendered its safe harbor under the Communications Decency Act when it edited the user’s page, which it did when it removed the tweets from the public feed.  “If they’ve edited content based on their subjective perspective, they put their immunity at risk and virtually their entire online business, because then they’d be liable to defamation claims or anything else that a publisher would,” Dozier says.  What if a website had knowledge of the harassment – had seen it, had watched it continue, and was fully aware of the problems it was causing – but did not take any action?  Could this lack of action, this implicit approval (or at least lack of disapproval) of the content, entangle it in the same liabilities that publishers face?  Madeline Rodriguez takes this a step further in her article “Reexamining Section 230 of the CDA and Online Anonymous Speech: Defamation on the Internet and the Websites That Facilitate It.”  She says, “if an ICP [internet content provider], and certainly an ICF [internet content facilitator], has reason to know, or anticipate, that at least some of the postings on their website are defamatory, false, anonymous, annoying, or harassing in nature, then they will be considered a co-author of the defamatory, false, anonymous, annoying, or harassing postings, and just as liable as the original poster, losing their CDA 230 immunity completely, and be subject to full monetary, legal, and equitable damages by an aggrieved party.”  This seems to suggest that Twitter is a co-author of the harassing posts, and as a creator, could be at risk of a lawsuit.

Malcolm Coles, a UK blogger who believes Italy made the right call in ruling against Google, argues that these Google executives are absolutely responsible for the content published on their website, because they are responsible for the systems that police that content.  While the problem Madeline Rodriguez presents seems relevant only to cases in which website owners have actual knowledge of illegal activity, Coles suggests that Google should indeed be able to have actual knowledge of every incidence of illegal activity on its website.  He blames Google for not having an adequate policing system, since the video was taken down two months after it was posted (though only two hours after an official request by the Italian police).  Coles claims that Google facilitated the posting of the content through its negligence, and that it “has ongoing control over that platform,” which involves it in content publishing in a way it does not a “camera-maker or tissue-box manufacturer.”

Harassment presents a new problem for user-driven content websites.  Unlike copyright infringement, it is difficult to find, often ambiguous, and obviously a sensitive subject. It also involves publishing original opinions and user-generated content in a way that infringement does not.  While I certainly don’t think that websites such as Google, YouTube, Twitter, or the New York Times (for its comments sections) should be held liable for what others post, it is interesting to look at the fine line between facilitator and provider that these sites are walking in the case of harassment.



Google, Corporate propaganda, and the future of P2P – by “Sebastian P”

Last week, Google and bloggers everywhere had a fit about this ruling, in which three Google executives were convicted and given a six-month suspended sentence (which is apparently legalese for “you’re wrong, we’re right, still feel free to tour our country”) for violation of Italian privacy laws, over a video uploaded to Youtube that depicted a teenager with autism (the media initially reported Down syndrome. The media getting something wrong? Could this be a trend?) being taunted and beaten up. Why were people so upset with a ruling like this? Clearly, if these people at Google allowed such an atrocious video to be uploaded, they’re evil individuals who need to be put in jail… right? Well, sure. If they were involved and responsible. However, the uproar stems from just how little involvement these convicted “felons” had. They collectively had no idea the video existed until after it was taken down, had no involvement in the uploading process (which is completely automated on Youtube), and had no involvement in the creation of the video.

Are they really at fault? Common sense says no. In fact, common sense questions the legitimacy of the Italian court system after such a ruling.

But wait. A group of individuals convicted for something they had no control over, worked with police to remove specific material, and really were too busy to know of everything that was on their website? Let’s take a look at The Pirate Bay.

When The Pirate Bay trial took place a year ago, one of The Pirate Bay’s (TPB) stronger legal claims was that they provided a service similar to Google: they indexed the existence of so-called .torrents as a search engine, and the content indexed was unknown to the website’s owners.

An aside: My buddy Mike mentioned that there’s definitely a contingent of people who don’t believe Google and thePirateBay serve the same function. If you’re in that contingent (or if you’re not, but feel the need to click on URLs anyway), check out thePirateGoogle.com (EDIT#2: Seems as though part of the website’s Google script functionality goes down once in a while. Wonder why. If you want to try the same functionality as thePirateGoogle, preface your search keywords with the phrase “filetype:Torrent”).

ThePirateGoogle
Because if thePirateBay had just changed their name we wouldn't have any problems...

Additionally, TPB had no involvement in the creation of said files (legal or illegal), no involvement in the uploading of said .torrent files (that’s automated as well), and cooperated with the police when asked to take down explicitly illegal material for which they had proof (Swedish police + child porn + The Pirate Bay. The tagline is right. It’s really much ado about nothing).

Sounds a lot like what Google is having fits about. So when Matt Sucherman, Vice President and Deputy Legal Counsel for Google, says, “Common sense dictates that only the person who films and uploads a video to a hosting platform could take the steps necessary to protect the privacy and obtain the consent of the people they are filming,” doesn’t it follow that only the person who films and indexes their illegal movie recording on a search engine platform could take the steps necessary to protect the privacy and obtain the consent of the movie producers who made the original film? I contend common sense does indicate that.

So why then does the public view Google so favorably, yet view services such as thePirateBay as demonic?

Misinformation: The Prosecution’s Strongest Argument

In October of 2007, Alan Ellis, a 24-year-old man from Middlesbrough in the United Kingdom, was arrested, his computers were seized, and he was questioned for many hours. This man was the creator of OiNK, which, according to the BBC, was an “extremely lucrative” website:

“The site allowed the uploading and downloading of pre-release music and media to thousands of members.”

An IFPI spokesman said: “Once an album had been posted on the OiNK website, the users that download that music then passed the content to other websites, forums and blogs, where multiple copies were made.

“Within a few hours of a popular pre-release track being posted on the OiNK site, hundreds of copies can be found further down the illegal online supply chain.”

Oink
Oh Oink... You're still missed by music lovers everywhere

Now I love the BBC, almost as much as I love NPR (seriously. Great, rather unbiased news sources). But I jumped out of my chair when I read this in class during my junior year in high school. I knew then what others know today: All that stuff is wrong.

Let’s go down the list of things wrong with that IFPI statement:

1. OiNK was extremely lucrative.

By far one of the funniest claims. OiNK wasn’t making any money. In fact, for most months, it was losing money (not a lot of money, but definitely losing money). There were no ads on the website, so you couldn’t make the argument that he received ad revenue. So the only real “income” source that OiNK had was that pink donate button in the upper right hand corner. If someone has a business plan rooted in an unobtrusive donate button (about 400 pixels in size and found on only some pages of the website) that’ll be extremely lucrative, please e-mail me. I’d love to start a business with you.

2. OiNK uploaded pre-release tracks and distributed them to hundreds of websites.

So, funny thing about this claim. OiNK actually had a strict terms policy that demanded high-quality music in album form only. Note a couple of things: A) high quality and B) album form only. The problems with the pre-release-track claim are three-fold. First, for anyone who rips music to MP3, it’s hard to get a high-quality rip unless you have the actual medium (i.e., the CD). Second, because of the stipulation for high-quality music, pre-release music that comes from a non-traditional medium (i.e., leaked by an insider) would not have received the final remastering that most music goes through, and thus would not meet OiNK’s quality standards. Lastly, if OiNK stipulated that only album torrents be listed on the website, how could single pre-release tracks be blamed on OiNK?

These two claims are among many inaccuracies that the IFPI spread to the general public through legitimate news sources. So inaccurate, in fact, that Alan Ellis was A) released after hours of testimony on how his software worked and B) unanimously acquitted by the courts on January 15, 2010, two years after the IFPI had heralded a great triumph for the music industry.

In the OiNK story, you find a tactic used again and again by the music and movie industries: corporate propaganda. Whatever you call it, this misinformation severely hurts potentially innocent defendants and distorts case law, especially in a jury trial. For laws involving such technologically sophisticated mechanisms as BitTorrent, how would an even-minded but unversed individual differentiate between the facts and the lies?

They don’t. Or rather, it’s very hard to. This same misrepresentation of technical information happened during The Pirate Bay trial as well. The evidence presented by the prosecution on day 5 (evidence previously unshared with the defense, purporting to depict inaccuracies in the sharing of files: they had a screenshot of a client, yet nothing that explicitly linked the material to thePirateBay, among other things) exemplifies this discrepancy. If the judge or jury doesn’t understand how tracker information works, how can the defense respond correctly? It becomes increasingly difficult to, and the prosecution gains the advantage.

When cases such as Alan Ellis’s are broadcast as a triumph of good vs. evil but hold factual inaccuracies, what’s a file-sharing website to do? When the court doesn’t understand that magnet linking through BitTorrent clients doesn’t involve the tracker host (FYI, it doesn’t), how do BitTorrent trackers claim safe harbor? OiNK is dead and it won’t be back, despite the fact that Ellis was found innocent. When organizations such as the IFPI gain such an advantage by misrepresenting specific facts, or by simply pursuing cases knowing individuals lack technological knowledge, we all lose.
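To make the magnet-link point concrete, here is a minimal Python sketch (the link and info-hash below are invented for illustration, not a real torrent). A magnet URI carries the content’s info-hash inside the link itself, so a client can find peers through DHT using that hash alone, without ever contacting the site that listed the link:

```python
from urllib.parse import urlparse, parse_qs

def parse_magnet(uri):
    """Split a magnet URI into its components.

    The "xt" field holds the info-hash that identifies the content;
    trackers ("tr") are optional, since DHT peer discovery works
    without them -- which is the point made above.
    """
    parsed = urlparse(uri)
    if parsed.scheme != "magnet":
        raise ValueError("not a magnet URI")
    params = parse_qs(parsed.query)
    return {
        "info_hash": params["xt"][0].split(":")[-1],  # urn:btih:<hash>
        "name": params.get("dn", [None])[0],
        "trackers": params.get("tr", []),  # may be empty
    }

# Hypothetical link, hash made up for illustration:
link = ("magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567"
        "&dn=example")
info = parse_magnet(link)
```

Note that `info["trackers"]` comes back empty here: nothing in the link points at any tracker host at all.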

What the future holds:

“Experts” from all over will tell you that the death of Mininova means this for P2P and that the verdict of thePirateBay case will mean that for file-sharing, but in reality, nobody’s really sure.  P2P is a magnificent tool for sharing information. Clearly, too much information for many music and movie industry advocates. However, through all of this, some facts remain clear: peer-to-peer networks provide a valuable resource in terms of crowd-sourcing bandwidth consumption. Gaming companies such as Blizzard have been doing this for years (case in point: if you had millions of people trying to update their World of Warcraft clients with a file that’s a few hundred megabytes, you’d use a form of BitTorrent too). Twitter has also turned to BitTorrent as a method of managing its bandwidth.

I’m of the personal opinion that legal alternatives always seem to win out over illegal ones eventually. As the Sony v. Universal case showed with VCRs, the positive uses generally overshadow the negative ones. Sure, business structures change, but that’s all part of how the world works.

-Sebastian

Leaking Through the Cracks – by “Michael C”

Last week, the Federal Trade Commission (FTC) notified close to 100 companies that their employees have been sharing sensitive customer and employee data on P2P file-sharing networks. The FTC did not release the list to the public, but FTC chairman Jon Leibowitz stated that “companies and institutions of all sizes are vulnerable to serious P2P-related breaches, placing consumers’ sensitive information at risk,” so it is quite possible that some bigger companies were affected, especially those with many computer-using employees.

In recent years, with more companies distributing laptops instead of desktops to employees, the problem has grown worse. Users can now take their work home with them, and are therefore more likely to mix their business and personal lives on the same computer, installing file-sharing programs to download their favorite songs, for example. After installing these programs, users are asked what folders they would like to share, and it’s easy for a casual user to breeze through this dialogue, accidentally selecting their documents folder containing all of their (and the company’s) sensitive files.

LimeWire share folder selection

This problem has actually become so common that there are “cybercrime gangs” dedicated to searching P2P networks for sensitive documents. In its investigation, the FTC reported that it easily found everything from financial records to social security numbers, perfect for use in identity theft. To test this, I fired up LimeWire, clicked on “documents” (in the search categories) and typed in “taxreturn.pdf.” Sure enough, after only a few seconds, I was able to download someone’s electronic tax return (completed in TurboTax), complete with financial information and social security number! After my experiment, I had no doubts about the FTC’s claims.

Searching LimeWire for taxreturn.pdf
The downloaded tax return (redacted by me)

The FTC has been pushing for tighter regulation of P2P software for years. They are in favor of legislation that requires P2P file-sharing programs to provide clearer notice about what files are being shared, and make sure that consent is obtained to share those files. Many industry watchers see P2P traffic growing exponentially (by some estimates, up to 400%) in the next few years, so the problem will only get worse. To address this, the government has finally acted—H.R. 1319, or the “Informed P2P User Act,” has been passed by the House and is currently being reviewed by a Senate committee. It would let the FTC place civil penalties on the P2P program distributors who do not meet a certain standard of providing “clear and conspicuous notice, in advance” to users about what specific files and folders the P2P program will share, and obtaining consent from the users before sharing those files on the public P2P network. There is also another bill being reviewed by Senate committee, H.R. 4098, or the “Secure Federal File Sharing Act,” which would require the government to issue rules regarding the use of P2P software by government employees, the people who are likely to have confidential information on their computers.

The issue of accidentally sharing confidential files via P2P has existed for a long time, probably since the advent of Napster. Yet, over ten years later, the Senate is only now considering legislation to make P2P programs safer. Why the delay? It seems the government ignored the problem for a long time due to the illegal nature of much P2P traffic. Perhaps it felt that by creating legislation regarding P2P file sharing, it would be legitimizing these programs. At this point, however, it is clear that P2P file sharing is not going anywhere anytime soon, and it has become mostly accepted in our society. Therefore, I’m glad that the government is taking steps to make P2P software more secure, rather than just sweeping the problem under the rug and hoping it goes away.

Tragedy of the Digital Commons? – by “Merlyn D”

**Note: For my blog post, I will focus on music in P2P, although P2P technologies are also used to share books, software, and personal files.

There was an era just a few years ago when P2P was the way to get music for cheap.  P2P, a network architecture in which users are both consumers and suppliers, moved into the mainstream with the advent of music sharing.  Back in 1999, Napster up-ended the perception of music as a “private” good when it became easy and free to obtain. Over the last decade, P2P technology has grown to enable the mainstream public (the more the better) to obtain music for free. As music is increasingly characterized as a public good (that is, the only real “price” of obtaining music is the small amount of bandwidth necessary to download it), will there be a tragedy of the digital commons [the link provides background info], and how does anyone prepare for it?

The landscape of P2P has changed.  Interestingly, The NPD Group research company announced that in 2009 there was a 25 percent decrease in illegal file downloading on P2P web sites in the U.S.  Despite that decrease and the move toward ad-supported music web sites, NPD reported that the “musical industry saw 24 million fewer legal music buyers in 2009,” which included “1 million fewer buyers of music downloads” (CNET).  This trend means that not only are music industries losing money, but file sharing rates (including, but not limited to, music) have fallen as well.  With falling levels of use of both legitimate and illegal forms, is there a winner?  How can there be a winner?  Is there a better way to balance social surplus (the pleasure of having music) with the surplus of a company (revenues)?  The grave question is: will there be a point at which neither music producers nor users will be incentivized to keep producing music for consumption?

This could mean several things: 1) discouraged by the now “expensive” costs of official music, consumers opt out of buying; 2) accumulated files from P2P sharing mean there are fewer reasons to purchase music; and 3) consumers have found superior options.  Undoubtedly, the trends seem to suggest a move away from ownership of music, whether legitimate or pirated.

In our readings, we discovered that the battle over the ownership of media is not new.  In fact, early radio broadcasts were treated similarly until collective licenses were instituted.  Without a way to eliminate such illegal web sites, music producers and distributors are forced to come up with new ways to raise revenues.  The models for how music can be distributed today generally fall into three areas (please refer to Fingertip for specifics): free music, a fee for music (access), and a monthly “bill” for music.

The model that Matt brings in seems to suggest there is some sort of equilibrium in which resources charged at a fair price create an effective sharing of music that somehow maintains quality, profits, and demand.   If NPD’s numbers are correct and the trends continue to show P2P utilization falling along with music industry revenues, then “choosing the right package” may not be a feasible (or stable) solution.  As distribution rates fall along with music industry revenues, no such “model” appropriately predicts the future environment for music sharing.  Even if music industries were able to obtain collective licenses to create such pricing models, I question whether there is truly a stable state in which the profits of these companies will be large enough to compensate for falling demand.

P2P, Piracy, and other things – by “Matt A”

A comparison of battery consumption between two individuals.

Peer-to-peer networks have come under a lot of fire over the past few years. Some of them have been able to slide under the radar by being very selective about their members, others have faced serious litigation, and a few have won their battles by squirming into the light of the DMCA’s safe harbor. One thing that hasn’t changed, however, is the technology. BitTorrent clients are freely available for download and can be found with minimal effort. P2P use scales beautifully: the more people using the service, the faster the downloads. And ever so slowly, it’s beginning to catch on as a more and more legitimate means of distribution. Twitter is launching a new service based on BitTorrent technology to help lower the downtime on its central servers; instead of sending out updates to its various servers world-wide and relying on a lengthy direct download process, it will be using its servers as seeds in its own private swarm (it should also be noted that this project is named Murder for the flock of crows, not premeditated homicide). While this doesn’t involve the public as direct seeders, it is still a very valuable look at the legitimacy of P2P downloads. Another example, and indeed one of my favorites, is World of Warcraft, which has been using P2P technology to deliver its content for years. Granted, the system isn’t ideal (it only seeds while downloading, to avoid posing any undue bandwidth issues during play), and trying to deliver half a gigabyte of information to over 13 million people (or whatever the current number is) simultaneously without dedicated seeders isn’t perfect either, but it improves download speeds regardless.
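The scaling claim (“the more people using the service, the faster the downloads”) can be shown with a toy model. This is a deliberate idealization, not how any real BitTorrent client schedules pieces: assume the file moves in whole-copy “rounds,” and compare a lone server against a swarm where every finished downloader immediately reseeds.

```python
def server_rounds(n_peers):
    """A single server sends one full copy per round,
    so serving n peers takes n rounds."""
    return n_peers

def swarm_rounds(n_peers):
    """Idealized swarm: every node holding the file uploads one
    full copy per round, so holders double each round
    (1 seed -> 2 -> 4 -> ...) until all peers are served."""
    holders, rounds = 1, 0
    while holders - 1 < n_peers:  # subtract the original seed
        holders *= 2
        rounds += 1
    return rounds
```

Under these assumptions a single server needs 1,000 rounds to serve 1,000 peers, while the swarm finishes in 10, which is the logarithmic-versus-linear gap that makes P2P attractive to Blizzard and Twitter alike.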

Society will find a way to fill its needs.

Now let’s talk about piracy a little bit. Piracy has been intimately connected with P2P ever since it strolled out into the daylight. Lots of people with the same idea in mind, using software that connects them, with the added benefit that the more connected they are, the faster their downloads? A match made in heaven. But P2P and its use for file-sharing runs into some moral boundaries and arguments. Is file-sharing a new distribution method or just plain stealing? Is it fighting the powerful ‘evil’ corporate machine or killing music and creativity? Then the economic arguments come in. Music can’t work this way, it will never happen! Or: music is a cultural phenomenon, it won’t respond to market forces. But this isn’t true. Economics works as it always does.

Digitizing creates an ‘infinite supply’ (perhaps limited only by your bandwidth) and drives the cost of production to near zero (or, in terms of bandwidth, rent-sharing occurs when downloaders provide the bandwidth through seeding), so the price at which individuals are willing to purchase becomes zero. Economics says free music is the way to go; it’s just an ugly, frightening answer for some. I agree that free music can’t work, or at least completely free music. People talk about Pandora (which I consequently love) and Last.fm, but until a reliable wireless network can stream it to me wherever I want, this cannot be a replacement.

The best approach I have seen thus far is the EFF’s. In a nutshell: utilize the technology that is already here, utilize the systems that are already in place, and charge an amount that people are willing to pay. Society will fill its needs in whatever way necessary, even if it’s illegal. To avoid piracy, simply provide that same service at a reasonable price. Legalize a system, and then constrain it to fit a model. Use P2P for distribution and hosting, allow members to upload what they want, and use share ratios for ‘benefits’ or some small form of currency. Monthly fees would be paid to copyright holders based on their files’ popularity over the payment period, making it a merit-based system. People may scoff or freak out, but this is the same system we have for cable television. Or internet subscriptions. Or satellite radio. Or libraries (well, mostly). It isn’t revolutionary; it’s a natural progression. The music industry (artists, etc.) would be outsourcing its hosting efforts to its consumers, and getting paid for it. Consumers can pick the right package for them: maybe they want infinite downloads at a higher price, or a set amount of data. There are plenty of ways to make the system work well, but no one wants to take a chance.
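The merit-based payout idea sketched above fits in a few lines of code. The pool size and download counts here are invented numbers purely for illustration: each rights holder gets a slice of the monthly subscription pool proportional to their share of that period’s downloads.

```python
def distribute_pool(pool, downloads):
    """Split a monthly fee pool among rights holders in proportion
    to each one's share of the period's total downloads."""
    total = sum(downloads.values())
    return {holder: pool * count / total
            for holder, count in downloads.items()}

# Invented example: a $100,000 pool and three rights holders.
payouts = distribute_pool(100_000,
                          {"label_a": 600, "label_b": 300, "label_c": 100})
```

With these made-up counts, label_a drew 60% of downloads and so receives $60,000; the split tracks popularity automatically, which is the “merit-based” part of the proposal.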

P2P technology isn’t running away, and neither are the music industry’s problems. Piracy is a natural approach to filling society’s unmet needs. Fill the needs with a system that cuts production costs, turns your consumers into producers and distributors, and gain back a significant market share.

Just when pirates thought that leaving the open sea meant safety.

Cultural [Re]Mediation: Evolving Reading Practices in the Digital Humanities – by “Brendan S”

I thought I would use this week’s blog post to give a rundown on some of the things I learned this past weekend at the “Past’s Digital Presence” Conference on the digital humanities which took place at Yale. I was lucky enough to be there all day recording video of the conference, and got to see almost a dozen presentations; several of them are, I think, quite relevant to themes we’ve been discussing in this course.

On a broad level, the conference was concerned with questions such as access to information, methods of sharing and analyzing culture, and our evolving roles – as researchers, librarians, readers – in consuming via these new methods. I’ll focus here on two of the talks, both of which fell under the session on “Evolving Reading Practices” and dealt with how new technologies are altering how we interact with creative works via the Internet – particularly relevant to the topic of gatekeeping and online intermediaries.

Patrick Redding – “Viral Meters: Reading Frank O’Hara on YouTube”

In this talk, Redding discusses how digital media can blur the distinction between primary and secondary sources. Digital texts are mobile, and mutate frequently; we often interact with digital facsimiles and annotated hypertexts. In many cases, in fact, our initial interaction with a text is not in the traditional form of a book, paper, or film, but rather with the text as a multimedia hybrid. He illustrates these points through an analysis of various adaptations of and responses to Frank O’Hara on YouTube, presenting us with new ways of imagining traditional poetic concepts.

Redding presents five videos as examples. The first is a high school teacher’s take on “Having a Coke With You,” in which the teacher has inserted Google-imaged photos in order to provide students with visual context for concepts in O’Hara’s poem that might be unfamiliar.

The second, “As Planned,” incorporates music, type, and motion graphics to create not only a visualization of the poem, but also a visual reading, interpreting aspects of the poem such as its tone and syntax through visual forms. Third is “Lana Turner Has Collapsed,” a poetic riff on the mashup genre, set to a Nirvana song (a quick YouTube search turned up one with Madvillain as well). The last two are produced by independent filmmaker Joseph Fusco, and provide two very different interpretations of the same poem; the narrator, background music and visuals actually change the meaning of the piece.

(Here are the other three.)

I would guess that most (if not all) of these videos are technically in violation of at least one copyright, but none has enough views to merit a takedown notice; it’s likely, in any case, that the copyright owners either aren’t aware these videos exist or recognize their utility and cultural significance. None of the videos is a direct, unaltered copy of a creative work; each adds a new level of interpretation or understanding to O’Hara’s poetry; and even the songs arguably fall under a loose definition of fair use. I can’t see any incentive for a copyright holder to want one of them removed. However, if YouTube (through some hypothetical new technology) were to start preemptively identifying every case of infringement, it’s more than likely that videos such as these would be taken down for no good reason.

Rachael Sullivan – “Dickinson Meets DoubleClick: Remediating Poetry”

Sullivan’s talk was particularly interesting to me because it looked at how texts (in this case, literature) are altered not only by how they’re displayed and accessed online, but also by their context. Sullivan elaborates on two aspects of literacy in the digital age – immediacy, the act of looking at the text; and hypermediacy, in which the text is mediated by external elements. One of the most prominent among these external influences is online advertising.

DoubleClick (now a Google subsidiary), Google’s own AdSense, and other services provide targeted advertisements to many websites, including online repositories of literature and poetry such as Bartleby.com. The challenge that arises when looking at poetry online in this context is sifting through the “noise” that distracts from the text. Advertisements not only surround a poem and influence its reading; sometimes they are even hyperlinked to words within the text itself, creating a potentially unreliable version of the text. Of course, this is not an altogether new issue – there are discrepancies between manuscripts and typeset poems, and a long history of advertisements and other contextual influences in printed texts. But with the proliferation of digital media we’re seeing an exponential expansion of possibilities for remediating a text; with everything hyperlinked and searchable, practically nothing exists in isolation anymore.

The omnipresence of Google, through which it sometimes feels the whole Internet is filtered, raises many questions about the influence of online advertising on how we navigate and consume content. Advertising is of course Google’s bread and butter; everything from run-of-the-mill search results to YouTube videos to maps and news is dished up to us alongside a healthy slew of linked advertising. I’m not sure to what extent this actually influences our online behavior, given that we’re so desensitized to ads that we hardly notice them anymore. Besides, Google’s ads are about as innocuous as they come, a far cry from pop-up cascades and insidious spyware.

Sullivan may appear to be overstating the significance of this particular way of remediating texts, but the questions she raises are worth thinking about, if only to develop a more conscious awareness of the Internet’s effects on how we read. The texts in question aren’t just poetry, but everything we interact with online – and the mediating elements are not confined to ads, but include complicated systems of hyperlinks connecting information in an untold number of ways. Intermediaries such as Google have a profound impact on how we navigate the web, not only through advertising, but most fundamentally through the search and ranking algorithms themselves.

Stealth Mode is Stupid: Why Your Ideas Don’t Matter – by “Michael W”

It’s a longstanding cliché in the world of tech start-ups: “I’d love to chat about my company, but we’re in stealth mode.” The concern is that sharing the idea is more dangerous than not sharing it. In my experience, the exact opposite is true. Stealth mode is stupid for at least three reasons: 1) ideas are overrated, 2) execution is infinitely more important, and 3) freely sharing ideas can aid in their execution. This is an essential lesson for tech start-ups, but its implications reach far beyond Silicon Valley.

Ideas are Overrated

To start with, ideas are painfully overvalued, both anecdotally, by aspiring entrepreneurs, and formally, by our legal system. Right now thousands of people are contemplating the same next big idea. What separates these faceless masses from the one that will emerge as the next Google? In a word, execution. Ideas are everywhere, but great implementation is rare. New entrepreneurs, who have not yet gone through the most critical stage of a young company – its execution – are prone to undervaluing its importance.

The US patent system, meanwhile, similarly overvalues ideas. It protects inventions that are both “novel” and “non-obvious,” but realistically, in the digital age, for how long do new ideas remain “non-obvious”? In the Twitter age, ideas spread nearly instantly. And because of our abundant access to information, the process of trends converging to form new ideas is in plain view for almost anyone to see. Furthermore, the ideas that underlie the most successful tech companies of the past decade – Google, YouTube, and Facebook – were neither novel nor non-obvious when they made their marks.

The Story of Facebook

Facebook, in particular, provides an excellent case study. The idea of social networking first emerged in the late ’90s. LiveJournal started in 1999; Friendster in 2002; and Tribe.net in 2003. Mark Zuckerberg didn’t launch Facebook until the spring of 2004. At that point it should have been unthinkable to label social networking a new idea. Yet that is exactly what happened: two separate groups claimed that Zuckerberg had stolen the idea from them. Facebook had to settle one of the cases out of court (due to pressures stemming from contract law and public relations, not any valid IP concerns), but the very occurrence of the lawsuit – that someone could even think the idea of social networking was somehow novel or non-obvious in 2003 – underscores our societal misunderstanding of ideas.

Why did Facebook garner 400 million users, then, even though it wasn’t a new idea? Because of its execution. It was part luck, part skill, but regardless, it was the actualization of Facebook, not the idea of a social network (or even the idea of a college-centric social network), which created so much value. The same goes for every success story. Search was old news by the time Google entered the scene in 1997, but Google implemented it much, much better than the competition. Hundreds of streaming video sites were sprouting up in 2004, but YouTube executed the idea better than anyone else.

And why were so many people working on these ideas in the first place? Because there were highly visible trends that were converging to create obvious new opportunities: the growth of the internet made search a necessity; increasing broadband penetration made internet video feasible; and in the wake of the success of the blogosphere, social media was emerging as the next major frontier on the web.

“Ideas are Just a Multiplier of Execution”

As the founder of CD Baby, Derek Sivers, put it, “ideas are just a multiplier of execution.” He explains that varying degrees of execution are worth roughly between $1 and $10,000,000, while ideas are only worth between negative one and twenty. Therefore, a weak idea with flawless execution can be worth $10,000,000, but the best idea in the world with poor execution is worth just $20. These numbers are obviously metaphorical proxies, but the concept is spot-on. And Sivers of all people would know: he took a relatively boring idea (selling independently produced CDs on the Internet) and turned it into a $20 million company.
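Sivers’ metaphor is literally a multiplication, which makes the asymmetry easy to see. A sketch using his stated scales (ideas rated from -1 to 20, execution valued from $1 to $10,000,000):

```python
# Sivers' metaphor rendered literally: venture value = idea rating x execution
# value. The scales are his; the function is just the multiplication implied.

def venture_value(idea_rating: float, execution_value: float) -> float:
    """'Ideas are just a multiplier of execution.'"""
    return idea_rating * execution_value

weak_idea_great_execution = venture_value(1, 10_000_000)  # $10,000,000
great_idea_no_execution = venture_value(20, 1)            # $20
```

The multiplication makes the point sharper than the quote alone: a 20x better idea cannot compensate for execution that is a million times worse.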

If stealth mode were merely unhelpful, that would be one thing, but it is actively harmful to new ventures. The people who appear most threatening in the stealth-mode worldview – industry peers, talented coders, angel investors, etc. – are actually the people who could provide the most help. By closing themselves off from these potential resources, stealth-mode companies are their own worst enemies.

What about Apple?

One common retort to this critique of stealth mode is, “what about Apple?” This of course refers to the fact that Apple, Inc., the fifth-largest company in the US, uses intense secrecy as part of its unquestionably successful product development and marketing efforts. The short answer is: you’re not Apple. Apple is a thirty-five-year-old company with hundreds of retail locations, tens of thousands of employees, and tens of billions of dollars in the bank. Its sophisticated use of secrecy has no bearing whatsoever on a small start-up. [Note: this isn’t to suggest that Apple has a healthy attitude towards intellectual property, because I don’t think it does, but that is for a different blog post.]

Fear of Sharing: Broader Implications

The concept that overprotecting ideas can actively hurt companies is something that applies to all firms, not just start-ups. Media conglomerates, for instance, closely guard their content, because, like rookie entrepreneurs, they think not sharing it is less dangerous than sharing it. But they’re wrong.

This mistake is perhaps best illustrated by the band OK Go, whose lead singer wrote a scathing op-ed in the NY Times this past weekend chronicling his band’s tumultuous experience with a major record label. OK Go was signed by EMI in 2000. The band floundered for years until, in 2005, it used its own funds to make a low-budget music video (without the aid or permission of its label) that went on to become a YouTube sensation. The label, though, viewed the video as illegal, despite the fact that it singlehandedly propelled the band to international stardom, resulted in millions of legally sold records (most of the profits from which went to the label), and even earned the band a Grammy. Recently EMI disabled embedding on the video so that it can no longer be shared across the Internet, even though sharing is precisely what proved to be such a boon for both the band and the label. In doing so, EMI is preventing the next OK Go from ever emerging. Consumers lose, bands lose, and EMI loses. Why is the label doing it? It’s really unclear.

Conclusion

Whether you’re a lone hacker or a Fortune 500 media company: your ideas don’t really matter. So stop trying to protect them, and start trying to implement them better.

Friend or Foe – by “Reynolds H”

Snake

The main issue with the myriad user-generated content sites (YouTube, Vimeo, etc.) is that they offer a highly efficient network for the use and spread of copyright-infringing content. The entire Viacom v. YouTube lawsuit is based upon Viacom’s belief that YouTube, as a business, benefits from the availability and use of infringing content on the website. Viacom is pursuing YouTube on the basis that it is failing to meet the criteria for safe-harbor protection under DMCA § 512, complaining that YouTube is neither “act[ing] expeditiously to remove, or disable access to, the material” nor “not receiving a financial benefit directly attributable to the infringing activity”. Viacom claims that YouTube has not put enough effort and technology into making sure that unlicensed copyrighted content is removed from the site. That is why, in the midst of a legal battle, the timing of YouTube’s unveiling of the beta version of its new Video and Audio Identification software is so beautifully bratty.

Like a stubborn middle schooler who only wants to do things its own way, YouTube, after being sued, is taking a step toward efficiently handling infringing content. However, it wants to make it known that it doesn’t have to. In a recent blog post, YouTube product manager David King stated that the new Video and Audio Identification system “goes above and beyond our legal responsibilities”. But, like a stubborn middle schooler, YouTube may find that its creation furthers the reach of the copyright “takedown” abusers it is currently fighting in court.

YouTube’s Video Identification uses material submitted by copyright owners as templates for an automated system that matches the audio and/or video of user-uploaded content. When matches are found, copyright owners have the option to block the content, track it, or monetize it. YouTube has essentially given greedy copyright owners an easy system for enacting frivolous takedowns on any video that bears any semblance to their copyrighted content.

The idea behind the software is justifiable, and the monetary motivation at the heart of the system is obvious: Video Identification allows copyright owners, in effect, to license their content to YouTube for monetary return. The system also allows YouTube to take a backseat to takedown claims, putting the full onus on the system and the copyright owner. However, this passive stance opens the door to unbridled abuse. With what seems like little room for appeal from the user, the software deals a serious blow to any claim of fair use. At the heart of Lenz v. Universal was the allegation that Universal did not act in “good faith” and did not consider fair use when it issued a takedown notice over Lenz’s use of Prince’s “Let’s Go Crazy”.

This new software may take takedowns even further. Imagine its potential: what if someone posts a video of a car-ride conversation in which the car radio happens to be playing a famous song in the background? The software could match that background song, which has nothing to do with the video itself, and trigger a takedown at the will of the copyright holder. Takedowns could be issued even more frivolously than ever before, all while the full burden of “good faith” and fair-use consideration rests on the shoulders of YouTube and its new software.
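The car-radio scenario follows directly from how reference-based matching works: the system only asks whether a fingerprint of the owner’s submitted material appears in the upload, not whether the match is central to the video. A hedged sketch of that behavior, where the fingerprints are faked as hashed byte chunks and the threshold, function names, and data are all illustrative assumptions (YouTube’s actual fingerprinting is far more sophisticated and proprietary):

```python
# Illustrative sketch of a Content ID-style matcher, NOT YouTube's algorithm.
# Fingerprints are simulated as sets of hashed fixed-size chunks; if enough of
# the owner's reference appears in an upload, the owner's chosen policy
# (block / track / monetize) is applied automatically.

import hashlib

def fingerprint(samples: bytes, chunk: int = 8) -> set[str]:
    """Hash fixed-size chunks of raw data as a stand-in for audio features."""
    return {
        hashlib.sha1(samples[i:i + chunk]).hexdigest()
        for i in range(0, len(samples) - chunk + 1, chunk)
    }

def apply_policy(reference: set[str], upload: set[str],
                 policy: str, threshold: float = 0.2) -> str:
    """Apply the owner's policy if enough of the reference is in the upload."""
    overlap = len(reference & upload) / len(reference)
    return policy if overlap >= threshold else "no_match"

# A "car ride" video that contains the song's bytes plus unrelated content
# still matches the reference, so the owner's policy fires.
ref = fingerprint(b"famous-song-waveform-bytes" * 4)
car_video = fingerprint(b"famous-song-waveform-bytes" * 4 + b"car chatter")
print(apply_policy(ref, car_video, policy="block"))  # prints "block"
```

Note that the matcher has no notion of context or fair use: the decision reduces to a threshold on overlap plus whatever policy the owner pre-selected, which is exactly the concern raised above.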

It will be interesting to see how YouTube chooses to implement this new technology. According to its website, the new software will make money, increase fan interaction, reduce infringement, and provide market data. With the rapid increase in ads, iTunes plugs, growing competition, and a deal with Universal resulting in the creation of Vevo (YouTube’s music-video channel), one hopes that money-making is still second to users on YouTube’s priority list. If not, this may be the end of YouTube’s reputation as a place to “Broadcast Yourself”, and the site will hurt itself in the long run.

Insurance hikes, privacy risks, for social media users – by “Jacob A”

The Huffington Post recently reported on a prediction made by the website confused.com, which helps insurance buyers navigate and compare rates, about a probable rise in insurance premiums for social media users. Why would social media users see home-insurance hikes? Because the status updates and other information they furnish on services such as Twitter or Foursquare alert burglars as to when they’re home, and when they’re not.

http://www.csmonitor.com/var/ezflow_site/storage/images/media/images/0217_pleaserobme/7409270-1-eng-US/0217_PleaseRobMe_full_380.jpg

Tweeting something as benign as “great tilapia tacos @ Drew’s Taco Shack” is potentially unsafe, as it alerts burglars to the fact that whoever is currently eating a taco with Drew is also not home. A new website, pleaserobme.com, hopes to increase awareness about the dangers of publicly providing too much information, so it collects tweets and Facebook status updates and displays them for anyone to see.

Pleaserobme.com means to make people realize the dangers of constantly broadcasting their location, but it does so by letting robbers know when you’re not home. That is an admittedly funny way of getting thoughtless social media users to think twice before tweeting “I’m at Cali Yogurt”, but it is also a lawsuit waiting to happen.

That said, it really is easy enough to find out where a sizeable chunk of the population lives by using Google’s phone-number lookup on a number or address. Consider also the enormous amount of information provided by Google Earth or Street View, and the extent to which digital technologies empower house robbers (or identity thieves or other poorly intentioned individuals) becomes abundantly clear.

But even admitting that “criminals are becoming increasingly sophisticated in their information gathering… to plan their burglaries with military precision,” as Darren Black, the head of home insurance at confused.com, has pointed out – does this justify insurance hikes? What standards of burglar-sensitive stupidity (e.g. “Oh no! In Mexico for three days & think I forgot to turn the heat off at home!”) will insurance providers use? How does one gauge burglary (or other) risks from a tweet or Facebook status update? And isn’t the very purpose and function of social media to disseminate opinions, constantly updated personal information, and microstatements about daily life and whereabouts? Insurance hikes might make sense if there is an actual increase in risk because of social media use, but they also go against the nature of these services. The Huffington Post article refers to a news clipping from 1983 warning telephone users about the dangers of voicemail: “If you have an answering machine that tells callers you are not at home it could alert potential burglars, advises Family Circle magazine.”

http://images.huffingtonpost.com/gen/143131/PHONE-BURGLARS.jpg

Isn’t the fear over the disclosure of too much information via Twitter et al. unsubstantiated, given that a great portion of tweets are sent via mobile (and hence away from home) anyway? Wouldn’t home-insurance hikes for social media users be just as silly as hikes applied to phone owners who didn’t change their voicemail?

But the kinds of information we propagate online through Foursquare and Facebook and Twitter also point to the kind of information ecology we would like to live in. Sure, it’s easy enough to make your entire Facebook page private – but was privacy ever the point of social media? Is privacy, as Mark Zuckerberg recently (in)famously stated, “no longer a social norm”?

But, then, where does one draw the line between stupidity and paranoia? There are undeniable dangers to giving away too much of our privacy, but what might those be? Is it more reasonable to be worried about burglars robbing your home, or about the larger privacy and security implications of geolocative (social) media? Shouldn’t we be more concerned about the national-security compromises that arise when the heads of national intelligence disclose too much information online? For the average person, disclosing too much information may not be a security risk, but it certainly remains a privacy risk. Forget robbers – what about data trawlers, or hostile intelligence networks, or government agencies, or corporate interests, who amass our geolocative (and all our other) social media information?

By default, Facebook makes you publicly searchable by everyone, and publicly visible to everyone in your networks. Default settings go unchanged more often than not. Privacy is not a default setting. The question is whether it still remains a social standard.

Social media is still a new technology. It will have direct implications for things such as home-insurance premiums, as well as much larger cultural consequences. Is a social media universe where it is considered unsafe to post birthdates, pets’ names, phone numbers, or photos a friendly one? A social one? We may have to weigh privacy and friendliness against each other.