New Business Models for News – by “Max C”

“This is a case of something close to what economists call market failure: Something is deemed important, but there isn’t enough of an incentive for the private sector – the market – to provide it on a broadly democratic basis.” —Ralph Whitehead, Jr., The Boston Globe

It’s no secret that journalism has fallen on hard times. When the President of the United States takes the time to express concern about something, it’s probably worth noting. And while he notes that professional, investigative reporting is “absolutely critical to the health of our democracy,” he seems unsure that it will remain intact in this capricious modern age of new media. Various exciting, promising offshoots of journalism have appeared and begun to flourish thanks to the Internet — such as its citizen and social media derivatives — but as Rupert Murdoch noted at an FTC conference last week, “Good journalism is an expensive commodity.” And it is one that the World Wide Web has left largely without a financial platform to support it.

That’s why the discussion of new, innovative business models for journalism has become essential. And thankfully, that discussion has been happening — and continues to happen every day — in the blogosphere, in the editorial columns of the world, and on fantastic websites like CUNY’s News Innovation resource. But to our knowledge, no one web location has attempted to condense and distill all that discussion and information into an easily digestible, comprehensive format. It is with that in mind that we established this website.

We have set up a repository of lengthy, informative posts on what we think to be the eight main models that are the most discussed and pursued right now. In each analysis, we describe the model and the innovation that led to its birth, businesses and entrepreneurs who have pursued these models (and to what success), the probable future of the model, and the financial role we think the model is most likely to play in the long run. We also have a cache of Supplements, which includes other journalism/business model-related pieces we have written recently from conferences and various assignments.

We hope that you learn as much reading all of this information as we did in compiling it. Many thanks!

-Jakob Dorof, Sam Duboff, and Max Cutler

Website: New Business Models for Journalism

High School Education Project – by “Anna L”

Alex and I went to Hillhouse High School to teach students who were in an educational law class. We designed the lesson to be discussion oriented, and we tried to focus on issues we have looked at over the course of the semester that affect them—like Facebook, YouTube, Wikipedia, and fair use. It went really well, and we think that they came out of the class with a better understanding of important issues they might not have considered before. It was also a very eye-opening experience for Alex and me. We had to simplify what we had planned to say, because they had less background knowledge than we had expected. Most of the students had never used Wikipedia, and only some of them had a Facebook account.

Moving Beyond the Story, News’ Value in Data – by “Max C”

Linear story replaced with database

The age of the linear news story must end if existing news organizations hope to survive in the digital age. For as long as print news publications have been around, the story has been the atomic unit of news coverage. A reporter collects information, attends events, does research, and produces a textual article that consumers read passively. But the consumption patterns of the modern digital citizen have outpaced such linear representations, and old media organizations must adapt to this changing reality.

“Old media” news organizations are now struggling to find new sources of revenue to buoy their ancient production models as advertising proves insufficient. Numerous new business models, such as paywalls or non-profit news organizations, and new content strategies, such as hyperlocal coverage or citizen journalism, have been proposed. As Dan Conover explains in his blog post “The Imagination Gap,” the problem with these approaches is that they are limited by the imagination of people from the old system. The merits of each new business model or strategy have been debated elsewhere, but Conover posits that the solution is a fundamental shift in what news organizations produce.

News, in the form of prose about current events, is now ubiquitous on the internet: between traditional news outlets, blogs, Twitter, Facebook, and others, there will never be a shortage of discussion about news. There is no paucity of facts or information on the internet, often captured in stories or comment threads or wikis. What’s missing, and what is potentially most valuable, are structured, easily accessible databases. News organizations can create considerable value not only by collecting facts into stories, but also by creating databases that allow searching, aggregation, cross-referencing, and other data mining techniques, so that people can draw their own conclusions and use the data for their own purposes.

Newspaper websites have had a difficult time monetizing their archives online, despite the fact that there is clearly an enormous wealth of information contained within them. The problem is that, without a structured format, actually extracting or finding the information contained within the archives is nearly impossible. Metadata is just as important as, if not more important than, the story itself. Give away the story for free, since the facts are ubiquitous, but sell the structured data that people can actually use in more meaningful ways.
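To make that concrete, here is a quick sketch, in Python, of the difference between a pile of prose archives and a queryable database. The schema and the sample records are entirely made up for illustration; they are not any real news organization’s format.

```python
# A minimal sketch of news-as-structured-data, using Python's built-in
# sqlite3 module. Schema and records are hypothetical illustrations.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE stories (
        id INTEGER PRIMARY KEY,
        headline TEXT,
        published TEXT,   -- ISO 8601 date
        topic TEXT,
        location TEXT
    )
""")
sample = [
    (1, "City council approves new budget", "2009-11-02", "politics", "New Haven"),
    (2, "Local school wins science grant", "2009-11-10", "education", "New Haven"),
    (3, "State budget shortfall widens", "2009-11-20", "politics", "Hartford"),
]
conn.executemany("INSERT INTO stories VALUES (?, ?, ?, ?, ?)", sample)

# Queries a reader could never run against prose archives:
politics = conn.execute(
    "SELECT headline FROM stories WHERE topic = ? ORDER BY published",
    ("politics",),
).fetchall()
by_city = conn.execute(
    "SELECT location, COUNT(*) FROM stories GROUP BY location"
).fetchall()
print([row[0] for row in politics])
print(dict(by_city))
```

Once the facts live in a structure like this, the searching, aggregation, and cross-referencing described above are one-line queries rather than archaeology.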

To effect such a change, new standardized data formats and tools must be developed and adopted so that information from multiple sources can be combined and cross-referenced. The principles of open access come into play, where the creation of data silos must be avoided at all costs. That is not to say that access must be free, but if a consumer has access to multiple databases, he or she must be able to mash them together in any way imaginable.

Projects such as DocumentCloud are making important first steps towards this goal. Sites such as EveryBlock aggregate information and present it in an easily accessible manner. Even some of the large newspapers have been experimenting with opening their stores of data: the New York Times and the Guardian have created publicly accessible APIs. Standards for news story markup have been proposed, such as hNews, which are relatively easy to integrate into existing systems and site designs. More such efforts will, and must, continue to come; the question is whether they will come from existing news organizations or from start-ups.
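As a rough illustration of why markup standards matter: once stories carry machine-readable class names, any third party can extract structured fields with a few lines of standard-library code. The snippet below is my own simplification modeled on hNews/hAtom-style class names, not a complete or authoritative example of the actual spec.

```python
# A toy extractor for hNews-style class names, using Python's built-in
# html.parser. The HTML fragment and field choices are illustrative only.
from html.parser import HTMLParser

SNIPPET = """
<div class="hentry">
  <h1 class="entry-title">Council approves budget</h1>
  <span class="source-org">The Example Gazette</span>
</div>
"""

class HNewsExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._field = None   # class name whose text we are waiting for
        self.fields = {}     # extracted class -> text

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "")
        for name in ("entry-title", "source-org"):
            if name in classes.split():
                self._field = name

    def handle_data(self, data):
        if self._field and data.strip():
            self.fields[self._field] = data.strip()
            self._field = None

parser = HNewsExtractor()
parser.feed(SNIPPET)
print(parser.fields)
```

The same few lines work against any site that adopts the convention, which is exactly the point of a shared markup standard.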

***

Newspapers have been lamenting the fact that people will not pay for their content, and are seeking ways to “fix” that problem. What they see as a problem with consumers is actually a problem with the product they are producing; consumers simply want information, in the format of their choosing, and will take it from whatever source is most convenient and accessible. Reporters are already skilled at collecting such information, so it is a matter of adjusting workflows and creating the requisite technical tools and systems.

When the only means of acquiring data was a sheet of paper, linear stories were an acceptable way of conveying information. If news organizations ignore the revolutionary abilities of the internet and computer software to store and present structured information, they do so at their own peril.

The Game 2.0 : Open Access, Harvard vs. Yale – by “Paul R”

To start off, I’d first like to point to a short article that I wrote for the Herald just a few weeks ago advocating for open access policies at Yale. Geared toward a general audience, the article discusses open access at Yale especially in light of Harvard’s recent mandate requiring its faculty to upload all of their scholarly work to their new online repository. To follow up, I thought I’d use this space to detail more of what I’ve learned in terms of recent advances in open access at Yale and other universities.

Harvard has been the leader in the open access movement — on February 12, 2008, after months of discussion with the Faculty of Arts and Sciences, the faculty unanimously voted to mandate open access for scholarly work. The full text of the proposal can be read here, and an overview is available on their website here, but in short, the faculty agreed to allow the university to make their work available online in a digital access repository (which, as you can see, already contains almost two thousand articles). The requirement of making this work available online does not, however, mean the author cannot publish in a traditional journal. Indeed, many do — this form of open access, where a university self-archives but still allows traditional publication, is usually referred to as “Green Open Access.” Finally, just to be clear, the system does allow faculty to waive the open access requirement upon request, as long as the faculty member explains the need.

But the digital repository (Green OA) is just one part of the equation that provides for open access on the university side. As is detailed in our reading for today, the other part is publication in open access journals, otherwise known as “Gold OA” (a directory of these journals can be found here). This form of open access is much harder to sustain because it requires funding for authors to pay processing fees in OA journals. To this end, Harvard also created a fund, the Harvard Open Access Publishing Equity (HOPE) fund, to reimburse Harvard authors who aren’t grant-funded for processing fees in open-access journals. The fund reimburses up to $3,000, although no known journal currently charges that much; the average processing fee is about $1,000.

Harvard’s leadership in open access is significant not only because it was the first university in the United States to mandate open access, but also because it provides a model for implementation. In the past, most institutions had been skeptical of the bureaucracy and funding needed for open access (see, e.g., Waters, “Managing Digital Assets in Higher Education: An Overview of Strategic Issues” in the February 2006 report of the ARL), but Harvard has shown that they can make it work.

Since Harvard announced its open access policies, a handful of other universities have adopted similar ones. In particular, Stuart Shieber, the main proponent of open access at Harvard and now the director of its Office for Scholarly Communication, established the Compact for Open-Access Publishing Equity, which currently has five signatories: Cornell, Dartmouth, Harvard, MIT, and UC Berkeley. Outside of this compact, roughly fifty institutions have adopted at least green open access policies, and a full list of them can be found here.

Where is Yale in all of this? Currently, Yale has no such open access policies. Last year, the organization Open Access at Yale interviewed seventeen faculty members and found that although there was some faculty interest, Open Access was something that was little discussed. I recently exchanged emails with Meg Bellinger, the director of the newly created Office of Digital Assets and Infrastructure, on the state of OA at Yale, and she said that while open access was “the topic of much discussion,” currently most projects are focused on digitization, e.g., of cultural heritage materials from Yale art museums. The closest thing I’ve been able to find to Open Access at Yale is the new Digital Commons Repository created by the law school, but it’s still vastly underutilized (this year’s faculty submissions can be counted on one hand).

Obviously, Open Access is not something that can appear overnight. Stuart Shieber at Harvard has stressed the importance of faculty support for the issue, and this can only come with sustained advocacy and discussion. With time and effort, I hope that Yale too will soon join in on this global and open exchange of ideas.

Newscorp’s Price Barriers and The Survival of the Wall Street Journal – by “Alexander F”

Newscorp's Head Honcho: Rupert Murdoch

At a recent symposium in Washington DC, media mogul Rupert Murdoch told a room full of leaders in academia, economics, and multimedia that “There’s no such thing as a free news story.” While as a blanket statement this is clearly overblown, for big media it is the sad but true reality.

The fact of the matter is that the vast majority of news available right now IS free for viewers to read. Beyond the cost of perhaps an internet connection, a person can freely access news sources from around the world, and can frequently read free versions of print news that actually costs money in the tangible realm. Mr. Murdoch is trying to change all of this, though. In a recent announcement, Murdoch disclosed that a new arrangement is being made between Microsoft and one of his media properties, The Wall Street Journal. Under this deal, Google would no longer be able to sift through the WSJ’s articles; instead, Newscorp (the WSJ’s parent company, whose powerful reins Murdoch holds) would give Microsoft’s new search engine Bing exclusive search rights to its content… for a fee.

Now, Newscorp is no stranger to internet paywalls. The Wall Street Journal itself is probably one of the most famous cases of an online news service requiring a fee for full access to its articles. Interestingly, though, on the WSJ’s website only the first two paragraphs or so of “paid” articles are visible if you don’t have a subscription, and as of a few months ago, Google offered a loophole: if you copied and pasted the URL of the article into Google, the article could be read in whole. Clearly this was fueling some distaste between Newscorp and Google, and although Google has recently updated its “First Click Free” policies in order to put more power into the hands of publishers, it seems to have been just a little too late.

But why start putting up paywalls and granting exclusive search rights? Arianna Huffington, also in attendance at the symposium, eagerly dismissed Newscorp’s search engine dealings. In fact, during the course of the two-day event she was even quoted as saying that “Promiscuity is not a good thing in relationships but it’s a great thing in news.” While my touchy-feely side wants to agree with her, it’s a little irresponsible to jump entirely on that bandwagon.

Unlike The Huffington Post, Arianna’s internet news source, The Wall Street Journal actually pays its writers. There is no way in the foreseeable future for the WSJ to survive on a selection of “citizen journalists” and celebrities who either want to promote social ideals or get some published recognition. Some of the things the WSJ does cost substantial amounts of money, and with online ads being nowhere near as profitable for newspapers as, say, a full-page ad in their printed counterparts, many are struggling to make sure that they can financially survive. The WSJ’s managing editor, Robert Thomson, even said recently that making all of the Journal’s content free online would cost the paper an estimated 300 reporters’ jobs. With this constraint in play, it’s nearly unforeseeable for some news sources to continue to function online entirely for free.

So this leads to the burning question: how much of the internet’s journalistic core will succumb to charging some fee for its service? It seems that much of it will be determined by the nature of the reporting and its costs. The Huffington Post’s celebrity commentary on global events costs next to nothing, while the WSJ’s investigative dive into something like international money laundering may cost quite a pretty penny. Since we were comfortable for so long with paying for print news before the internet, why should the internet, a communication tool, instantly make some previously costly content absolutely free? It’s a question burning on the minds of big media as it struggles to make ends meet, and the future may start to show that, in some cases, there may not be such a thing as a free story.

Journalism’s New Wave – by “Jakob W”

September 30th marked the initial invitation-only release of the “Preview” beta of the new Google Wave service. Like Google News and Maps before it, Google Wave — a browser-based app that allows one to collaborate with contacts in real time through an AJAX-based document model — arrives on the scene in a developmental stage, with little explanation, largely leaving users to figure out how best to use it. It’s even open source since, as Google VP of engineering Vic Gundotra readily admits, “frankly we need developers to help us finish this product.”

Ever since its unveiling (and as more invites are made available), many have speculated on what Google Wave could mean for journalism in the Internet age.

In the moments before even the first 100,000 invites were released, Mark Milian came up with a list of potential uses for the L.A. Times technology blog that could change journalism and the way we interact with it — and in ways more clearly beneficial than other recent mutations, such as the ever-controversial citizen journalism. Possibilities Wave allows in its current form include live editing (so writers can watch editors’ changes and address questions and ambiguities as they develop), real collaborative reporting (during the pen-and-pad era of journalism, having two or three writers to a story was often messy, and it remains rather uncommon to date), and collaborative blogging (which, in its draft/publish/edit form, is similarly cumbersome to co-author). With just a couple of small tweaks, Wave could be even more revolutionary: integrating Google Voice would allow recorded interviews, voicemails, and text messages to be folded easily into story notes and archives; story update and correction timelines could become more transparent; readers could observe and perhaps even participate in the writing process of a story (almost like a wiki); live, paragraph-by-paragraph reader commentary and discussion would be possible; and combining Google’s Polly extension could enable live, mid-story polling of readership. In essence, Google Wave could allow citizens to take a much more active role in professional journalism, instead of trying to compete with it via questionable Twitter feeds and rumor mill bullflop.

Jeff Jarvis of Buzzmachine, who believes Google Wave has potential to be “the new news,” provides a lucid example of how the above dynamic might develop:

Imagine a team of reporters – together with witnesses on the scene – able to contribute photos and news to the same Wave (formerly known as a story or a page). One can write up what is known; a witness can add facts from the scene and photos; an editor or reader can ask questions. And it is all contained under a single address – a permalink for the story.

As one Scott Blanchard comments on a post in the Wired Journalists Publish2 feed, Google Wave could be especially effective in particularly temporal forms of news updates, like weather and traffic reports. Crowdsourced, real-time collaborative reportage allows for a level of “man on the street” input previously infeasible in journalism.

And yet, these exciting possibilities remain just that. Time will tell how Google Wave’s technology truly impacts journalism — but all accounts thus far point to a very promising future for what many have recently written off as a dying art.

Further reading not linked above:
Google Wave the next social media phenomenon and journalistic tool? (Editors Weblog)
Exploring Google Wave – how could it transform journalism and publishing? (iTWire)
Periodistas 21 (in Spanish!)

Apple v. Psystar and the undying EULA question – by “Michael M”


The first major battle in a case with potentially significant implications for the enforceability of EULAs has just been fought. The result was disappointing. Judge Alsup, a federal judge in the Northern District of California, just granted Apple summary judgment on its copyright claims in Apple v. Psystar. Apple, of course, is the major California computer company. Psystar is a small Florida company that gained notoriety in April of last year for their “Open Computer,” which optionally shipped with Mac OS X installed (a legally purchased, retail copy of the operating system was also included in the box). Apple was predictably displeased with Psystar for selling what are essentially Hackintoshes and sued the company in June of 2008. Other Mac clone manufacturers have popped up, but Psystar became the movement’s figurehead. Its litigation with Apple promised to test the legality of the cloning business model. That test has so far gone very poorly for the cloners.

How Do We Fight Bad EULAs? – by “Michael L”

Not actually about fighting bad EULAs, but c'mon, xkcd is always a good thing.

After doing this week’s reading, it’s easy to get the feeling that there’s little we can do to fight bad EULAs. And let’s be honest, there isn’t much — at least for the individual user. That said, recently there have been cases where popular services have changed their terms of service because of the public’s distaste for a few egregious terms within them. Remember earlier this year when Facebook changed its TOS to say that it kept the rights to your content even if you got rid of your account? People got mad, they complained, and Facebook caved and went back to its old TOS. Similarly, there was a situation last summer in which it appeared Google’s Chrome browser’s terms of service gave Google the rights to anything you sent through the browser — again, after people complained, it was changed. While these situations were hardly the same (it seems Google’s TOS problems were the result of a mistake, whereas Facebook’s seemed more deliberate), they share in common the fact that the problem was fixed after enough people complained about it. This of course isn’t an entirely satisfying solution, but it is good to know that if people get angry enough, companies do respond.

The other important step one should take as a consumer is to actually make some attempt to read agreements before clicking through them — even if it’s just a quick skim. While there’s not much you can do if you don’t like the terms (except perhaps give your business to someone else), at least you’ll be aware of them. And sometimes you’ll be pleasantly surprised (I’m a big fan of Google’s affirmation of my intellectual property rights, something about which I never would have known if I hadn’t read the terms). If you want to be extra vigilant, you could even check the EFF’s “TOSBack” site from time to time: it’s a site that tracks changes to various terms of service agreements (there’s even an RSS feed if you’re uber-nerdy). After all, someone’s got to notice harmful changes to these agreements so that people can get angry about them.
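For the uber-nerdy, here is roughly what a little TOS-watching script could look like. The feed XML below is a made-up sample and the entry format is my own guess, not TOSBack’s actual feed, so treat this as a sketch of the idea rather than a working TOSBack client; a real monitor would fetch the live feed (e.g., with urllib) instead of a hard-coded string.

```python
# A sketch of watching a terms-of-service change feed via RSS, using
# Python's built-in xml.etree. The feed content here is fabricated.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>TOS changes</title>
  <item><title>ExampleBook TOS updated</title></item>
  <item><title>SearchCo privacy policy updated</title></item>
</channel></rss>"""

# Titles you have already reviewed (a real script would persist these).
already_seen = {"SearchCo privacy policy updated"}

root = ET.fromstring(SAMPLE_FEED)
new_changes = [
    item.findtext("title")
    for item in root.iter("item")
    if item.findtext("title") not in already_seen
]
print(new_changes)  # only the changes you have not reviewed yet
```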

Lastly, I’d be up for creating some sort of EULA hall of shame, much like the EFF’s DMCA takedown hall of shame. While there already seems to be a site that attempts to do this, it’s far from well done or thorough (check it out at http://www.eulahallofshame.com/). Such a site, if done well, would be useful in that it would draw attention to particularly bad abuses of licensing agreements. And, after all, ridiculing sketchy practices by companies is fun. Let me know in the comments if you’re interested.

In honor of Cory Doctorow, I’d like to end this blog post the same way he has ended several of his blog posts about EULAs (and I can, because Boing Boing uses a CC-BY-NC license, I’m giving him credit [Thanks, Cory!], and I’m gonna go ahead and say this blog post is CC-BY-NC-SA, since I can’t seem to find a license for the site as a whole):

READ CAREFULLY. By reading this blog, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (“BOGUS AGREEMENTS”) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.

Reading the Fine Print – by “Michael C”

"Handcuffed" by the EULA

A few weeks ago, the highly anticipated video game Borderlands was released on video game consoles. Many PC gamers were also eagerly awaiting the release, but were forced to wait a little longer when the PC version was delayed for a week. Despite the delay, some industrious gamers were able to purchase copies from stores selling the boxed PC version ahead of its release date. These gamers excitedly returned home, popped in the disc, and installed the game. However, when they tried to play, the online servers refused to authenticate their copies before the release date, so the game did not work.

Why did this happen? According to Hal Halpin of the Entertainment Consumers Association (ECA), unlike console copies of the game, where a purchaser is buying the actual product, purchasers of the PC game are buying a license. The disc that a gamer purchases is only a means of providing the game’s data, not a product in itself. In purchasing a license, one is restricted to the terms that the publisher includes in the End-User License Agreement, or EULA. In this case, the EULA states that the game requires activation, and that the activation will not be made available until the release date. By agreeing to the EULA, consumers are agreeing to these conditions.

Inevitably, early purchasers of the PC version of Borderlands were quite surprised that their copies were temporarily useless. According to 2K Games, the publisher of Borderlands, they shouldn’t have been—the restrictions on activation were outlined in the game’s EULA, presented to the player during the installation process. Then again, who actually reads the EULA? This particular case speaks to a larger problem involving EULAs. They tend to be long, complicated documents that most people skip right over in their eagerness to install and try out the new application or game that they’ve just purchased. However, these documents are important to the consumer, as they outline the terms of the legally binding license that the consumer is entering into by clicking “yes” (hence the term clickwrap).

Publishers should make these EULAs more user-friendly, by simplifying the language used in them, putting the most important parts at the top (where users are most likely to see them without scrolling), and possibly moving towards some type of standardized EULA. Fortunately, the ECA is working to push publishers to create more open, standardized EULAs. If the EULA is made more accessible to the average consumer, then he or she will be more likely to spend a few moments reading it to understand the terms of the license, rather than automatically hitting “yes” to avoid the complicated and lengthy document.

Technology is no match for laziness. – by “Michelle V”

It's totally a reasonable modern analogue. Jefferson would have been all about crypto.

From the Oxford English Dictionary online:

cybersecurity n. – security relating to computer systems or the Internet, esp. that intended to protect against viruses or fraud.

Protecting yourself is hard. Nothing we do in the real world is ever perfectly safe and secure, and yet for some reason people expect that when information has been digitized, it should be safe. Maybe the thought is that if you can’t see what happens to your credit card information when you buy things online, then no one else can either. Wireless internet is almost like magic – you type things into your computer, click a button, and poof! The information just sort of whizzes through the air from your computer to someone else’s computer! Look, ma, no strings!

… Right?

Obviously not, but that’s the way most of us treat digital information. Since we don’t see what happens to it, we think that no one else can.

There have always been hackers. There will always be hackers. There are measures we can take to protect our personal information, and they are pretty good. They are not, however, perfect. MD5, designed in 1991 by Ron Rivest, was one of the most widely used cryptographic hash functions. A flaw was discovered in 1996, and in 2004 further analysis revealed that it was much less collision-resistant than it had previously been thought to be. A weakness has also been discovered in SHA-1, the other widely used cryptographic hash function. Because of this, these hash functions are being phased out of use (MD5 is no longer used for applications like digital signatures, which rely on collision resistance), and will be replaced with the similar SHA-2 family. Currently, the National Institute of Standards and Technology is in the middle of selecting a new hash function (SHA-3) that will replace SHA-2 if – or really, when – its weaknesses are discovered. Similarly, AES (having replaced Triple-DES, which replaced DES) will eventually need to be replaced as its weaknesses are revealed and exploited.
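For the curious, you can poke at these hash functions yourself with Python’s built-in hashlib. This little sketch just prints digest sizes and shows how a tiny change to the input scrambles the whole digest; it obviously doesn’t demonstrate an actual collision attack, which is the hard part the researchers above made easier.

```python
# Comparing digest sizes of MD5, SHA-1, and SHA-256 (a member of the
# SHA-2 family) using Python's built-in hashlib.
import hashlib

message = b"attack at dawn"

digests = {}
for name in ("md5", "sha1", "sha256"):
    digest = hashlib.new(name, message).hexdigest()
    digests[name] = digest
    # each hex character encodes 4 bits
    print(f"{name:>6}: {len(digest) * 4:3d} bits  {digest}")

# A one-word change to the input yields a completely different digest
# (the "avalanche effect"):
print(hashlib.sha256(b"attack at dusk").hexdigest())
```

The point of a collision attack is finding two different inputs that hash to the same digest; the weaknesses found in MD5 and SHA-1 made that search dramatically cheaper than the digest sizes above would suggest.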

Ultimately, there is no perfect technological solution to cybersecurity: we can implement all the provably correct programs we want, but this will not protect us from people standing behind us as we enter our private information. Even so, this doesn’t mean we can’t do anything. Just because someone might take a sledgehammer to your door doesn’t mean you shouldn’t lock it – there is no reason to grant people easy access to your belongings, whether physical or digital. Just because we can find strings that produce collisions doesn’t mean that it’s easy to find a collision for a specific string. Security (read, “cryptographic”) measures exist. Use them. Don’t be stupid. Sometimes bad things happen. Remember that secrecy does not equal security. And enable https on Gmail, lest Brad read your chat logs.