The Economy of Exclusivity – by “Heather R”

Publish or perish. Academics must continually publish to keep their jobs. Universities evaluate their faculty based on how often they publish and, often more importantly, where they publish. In each field there is a hierarchy of journal prestige that universities use as a proxy for the quality of their faculty's work. Ideally the university would also review the work itself, but it is much easier to trust the peer review process of prestigious journals. This system can be sidestepped, of course, as when Grigori Perelman published his proof of the Poincaré conjecture online and later refused the Fields Medal. Clearly, prestigious journals are not a necessary component of groundbreaking research. Although it is possible to publish online without dealing with journals, most academics don't. Most academics use the journals precisely because they are prestigious.

So why are these journals prestigious? In most cases they are very old publications with a history of publishing important research. Assuming that the prestige of the journals is merely a result of reputation within a community, there is no reason why these publications cannot be moved online and made available to the public.

The journals will of course resist this move, because they make a lot of money on subscription fees. If the journals won’t move online and become open access, then perhaps academics should abandon those journals completely.

Of course academics can't abandon the publication system, because they need the recognition of those prestigious journals. There are respectable online journal options, especially in developing fields, but it is much more difficult to build an online, open access journal that must compete with an established one. The online journal will always be seen as less valuable because it is free and open to the public. If the work is worthwhile, why is it being given away?

The idea that something is more valuable if it is expensive or exclusive is an element of human nature. In 1944, C.S. Lewis delivered a lecture at King's College entitled "The Inner Ring." The inner ring is that ever-elusive group of people who are cooler or smarter or better informed than we are. Some inner rings, like the cool kids at the lunch table, serve no purpose other than to make their members feel superior. Other inner rings, like a group of respected academics, seem more justified. Journal publication is a fine way to increase one's prestige within the academic community, and there is nothing evil about academics pursuing prestige and respect. But the fruits of their work should not be confined to an inner circle: the research published in those journals should be available to everyone. Restricting knowledge to those who have the means to pay for it reinforces economic and intellectual divisions in society.

Social inner circles will never be eradicated. People will never stop trying to distinguish themselves from their peers. This may or may not be a productive element of society, but it’s not going anywhere. We can’t remove social inner circles, but we can eradicate economic inner circles that make information unavailable to those left outside.

Project: FicBound – by “Eric F”

While fanfiction has been around for decades, the Internet has provided a new gathering place for fans to share their passion and creativity with others. As we have seen in class, remix culture is entering mainstream consciousness, and "remix literature" will undoubtedly begin to play a bigger part as well. However, because fan work violates current copyright law and has been the subject of cease-and-desist letters, the fan community has an uneasy relationship with publishers and original copyright owners.

In the past, fans often congregated around centralized archives. Currently, much fan activity has moved to LiveJournal and LiveJournal communities. While this is great for authors, allowing them better control over their content, it has led to the decentralization of fanfiction and made each community increasingly difficult to navigate.

Our solution, FicBound, would provide a platform for discovering new fanfiction to those new to the community, as well as tools for the community to rate and organize fanfiction. Structured much like Digg, FicBound would be adapted to the fanfiction community's needs, integrating social elements alongside publishing and sharing. The content itself would remain on the LiveJournal and archive accounts of the users, allowing them to maintain full control.

FicBound hopes to build on Clay Shirky's insight that "Conversation is king. Content is just something you talk about." While recognizing the difficult issues surrounding fanfiction, we nonetheless hope to build a useful platform to facilitate conversations around content for fans.

For those who wish to view our paper in full, please go to:

– Eric and Crystal

Project: Collaborative Education – by “Aditya K”

The internet has made it easier to access, share, and create content. This truism has huge implications for education; with access to the Web, educators and students have escaped the confines of the classroom. The ability to share lesson plans, collaborate on lessons, and peer-produce projects is exciting, but it also brings potential issues of effectiveness, infrastructure, and copyright.

We interviewed teachers at Amistad, a New Haven charter school, about their thoughts on collaborative education and fair use, and gleaned interesting insights. After examining their comments and doing some research, we broke collaborative education down into three main systems and analyzed each: an intra-school network for sharing lesson plans; a system in which teachers sell lesson plans; and a system in which teachers share lesson plans freely. Each system had quite a few pros and cons.

Alongside Nick Bramble at the Yale ISP, we submitted these analyses to the FCC. We also worked with Nick to help draft a piece addressing fair use issues in education. With new technologies and no clear rules, fair use in education is a topic that must be addressed and made clearer. With regard to our project, sharing materials and lesson plans online creates a vibrant atmosphere that, unfortunately, is setting itself up for abuse and lawsuits. If educational copyright issues aren't clarified, the potential of this peer-to-peer atmosphere may be stifled.

To see some of our contributions, check out this document (.doc).

Mellon Forum Project – by “Evin M”


Yale has always been committed to the open dissemination of knowledge, and the university has begun to use recent advances in technology to distribute information easily and efficiently: through its open courses, for example, Yale is freely making many of its classes available online. As it stands, however, professors are the primary beneficiaries of this technology, and students don't have similar institutional support to share their own important work with the world. To further the dissemination of knowledge, and to allow students to participate in this global forum, we started a project to record videos of senior presentations in the twelve colleges' Mellon Forums and make them freely available online.

The Mellon Forums provide seniors with an intimate setting to present their thesis work to a group of their peers over dinner and dessert. The forums give seniors time to present and openly discuss their research with classmates; otherwise most seniors' work would remain largely hidden from view. One limitation, however, is that the forums are by nature small, so presentations reach a very restricted audience even though many people (friends from other colleges and years, family, and curious outsiders) are keenly interested in what these students are doing. Putting videos of these presentations online would make this knowledge available not only to other students but to the whole world. Taping a presentation might make it slightly more formal, but it also heightens its energy and impact. Further, recording these presentations sends students the message that yes, their research really is important enough that the university believes it is worth showing to the world.

Check out our site here. We’ll continue updating it!

-Evin, Paul, & Paulo

New Business Models for News – by “Max C”

“This is a case of something close to what economists call market failure: Something is deemed important, but there isn’t enough of an incentive for the private sector – the market – to provide it on a broadly democratic basis.” —Ralph Whitehead, Jr., The Boston Globe

It’s no secret that journalism has fallen on hard times. When the President of the United States takes the time to express concern about something, it’s probably worth noting. And while he notes that professional, investigative reporting is “absolutely critical to the health of our democracy,” he seems unsure that it will remain intact in this capricious modern age of new media. Various exciting, promising offshoots of journalism have appeared and begun to flourish thanks to the Internet, such as its citizen and social media derivatives. But as Rupert Murdoch noted at an FTC conference last week, “Good journalism is an expensive commodity,” and it is one that the World Wide Web has left largely without a financial platform to support it.

That’s why the discussion of new, innovative business models for journalism has become essential. And thankfully, that discussion has been happening — and continues to happen every day — in the blogosphere, in the editorial columns of the world, and on fantastic websites like CUNY’s News Innovation resource. But to our knowledge, no one web location has attempted to condense and distill all that discussion and information into an easily digestible, comprehensive format. It is with that in mind that we established this website.

We have set up a repository of lengthy, informative posts on what we think to be the eight main models that are the most discussed and pursued right now. In each analysis, we describe the model and the innovation that led to its birth, businesses and entrepreneurs who have pursued these models (and to what success), the probable future of the model, and the financial role we think the model is most likely to play in the long run. We also have a cache of Supplements, which includes other journalism/business model-related pieces we have written recently from conferences and various assignments.

We hope that you learn as much reading all of this information as we did in compiling it. Many thanks!

-Jakob Dorof, Sam Duboff, and Max Cutler

Website: New Business Models for Journalism

High School Education Project – by “Anna L”

Alex and I went to Hillhouse High School to teach students who were in an educational law class. We designed the lesson to be discussion oriented, and we tried to focus on issues we have looked at over the course of the semester that affect them—like Facebook, YouTube, Wikipedia, and fair use. It went really well, and we think that they came out of the class with a better understanding of important issues they might not have considered before. It was also a very eye-opening experience for Alex and me. We had to simplify what we had planned to say, because they had less background knowledge than we had expected. Most of the students had never used Wikipedia, and only some of them had a Facebook account.

Moving Beyond the Story, News’ Value in Data – by “Max C”

Linear story replaced with database

The age of the linear news story must end if existing news organizations hope to survive in the digital age. For as long as print news publications have been around, the story has been the atomic unit of news coverage. A reporter collects information, attends events, does research, and produces a textual article that consumers read passively. But the consumption patterns of the modern digital citizen have outpaced such linear representations, and old media organizations must adapt to this changing reality.

“Old media” news organizations are now struggling to find new sources of revenue to buoy their ancient production models as advertising revenue proves insufficient. Numerous new business models, such as paywalls or non-profit news organizations, and new content strategies, such as hyperlocal coverage or citizen journalism, have been proposed. As Dan Conover explains in his blog post “The Imagination Gap,” the problem with these approaches is that they are limited by the imagination of people from the old system. The merits of each new business model or strategy have been debated elsewhere, but Conover posits that the solution is a fundamental shift in what news organizations produce.

News, in the form of prose about current events, is now ubiquitous on the internet: among traditional news outlets, blogs, Twitter, Facebook, and others, there will never be a shortage of discussion about the news. There is no paucity of facts or information online, often captured in stories, comment threads, or wikis. What's missing, and what is potentially most valuable, are structured, easily accessible databases. News organizations can create considerable value not only by collecting facts into stories, but by building databases that allow searching, aggregation, cross-referencing, and other data-mining techniques, so that people can draw their own conclusions and use the data for their own purposes.

Newspaper websites have had difficulty monetizing their archives online, despite the fact that the archives clearly contain an enormous wealth of information. The problem is that, without a structured format, actually extracting or finding information within the archives is nearly impossible. Metadata is just as important as the story itself, if not more so. Give away the story for free (the facts are ubiquitous), but sell the structured data that people can actually use in more meaningful ways.

To effect such change, new standardized data formats and tools must be developed and adopted so that information from multiple sources can be combined and cross-referenced. The principles of open access come into play: the creation of data silos must be avoided at all costs. That is not to say access must be free, but a consumer with access to multiple databases must be able to mash them together in any way imaginable.
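As a concrete sketch of the kind of mash-up this implies, imagine two news organizations each publishing a structured dataset keyed on a shared field. Every field name and figure below is invented for illustration; a real feed would arrive as JSON or CSV rather than Python literals:

```python
# Two hypothetical structured datasets from different news organizations,
# keyed on a shared field ("city"). All values here are made up.
crime_data = [
    {"city": "New Haven", "burglaries": 412},
    {"city": "Hartford", "burglaries": 530},
]
budget_data = [
    {"city": "New Haven", "police_budget": 38_000_000},
    {"city": "Hartford", "police_budget": 41_000_000},
]

def mash(left, right, key):
    """Join two lists of records on a shared key, merging their fields."""
    index = {row[key]: row for row in right}
    return [
        {**row, **index[row[key]]}
        for row in left
        if row[key] in index
    ]

combined = mash(crime_data, budget_data, "city")
for row in combined:
    # A derived figure that neither source published on its own:
    print(row["city"], row["police_budget"] / row["burglaries"])
```

The point of the sketch is that the derived ratio exists in neither source; it becomes computable only once both datasets are structured and share a key, which is exactly what prose archives lack.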

Projects such as DocumentCloud are making important first steps toward this goal. Sites such as EveryBlock aggregate information and present it in an easily accessible manner. Even some of the large newspapers have been experimenting with opening their stores of data: the New York Times and the Guardian have created publicly accessible APIs. Standards for news story markup have been proposed, such as hNews, which are relatively easy to integrate into existing systems and site designs. More such efforts will, and must, continue to come; the question is whether they will come from existing news organizations or from start-ups.
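To make the markup idea concrete: microformats in the spirit of hNews annotate ordinary HTML with class names so that software can pull story metadata out mechanically. The snippet and class names below are a simplified illustration, not a faithful rendering of the full hNews proposal, and the extractor is a minimal sketch using only Python's standard library:

```python
from html.parser import HTMLParser

# Illustrative article markup; the class names are simplified for the example.
SAMPLE = """
<article class="hnews hentry">
  <h1 class="entry-title">City Council Passes Budget</h1>
  <span class="source-org">Example Courant</span>
  <span class="dateline">New Haven</span>
</article>
"""

class MetadataExtractor(HTMLParser):
    """Collect the text inside elements whose class names mark metadata fields."""
    FIELDS = {"entry-title", "source-org", "dateline"}

    def __init__(self):
        super().__init__()
        self.meta = {}
        self._current = None  # field name we are currently inside, if any

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        wanted = self.FIELDS.intersection(classes)
        if wanted:
            self._current = wanted.pop()

    def handle_data(self, data):
        if self._current and data.strip():
            self.meta[self._current] = data.strip()
            self._current = None

parser = MetadataExtractor()
parser.feed(SAMPLE)
print(parser.meta)
```

Once stories carry markup like this, an archive stops being a pile of prose and becomes a queryable database, which is the shift the paragraph above argues for.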


Newspapers have been lamenting the fact that people will not pay for their content, and are seeking ways to “fix” that problem. What they see as a problem with consumers is actually a problem with the product they are producing; consumers simply want information, in the format of their choosing, and will take it from whatever source is most convenient and accessible. Reporters are already skilled at collecting such information, so it is a matter of adjusting workflows and creating the requisite technical tools and systems.

When the only means of acquiring data was a sheet of paper, linear stories were an acceptable way of conveying information. If news organizations ignore the revolutionary abilities of the internet and computer software to store and present structured information, they do so at their own peril.

The Game 2.0 : Open Access, Harvard vs. Yale – by “Paul R”

To start off, I'd first like to point to a short article that I wrote for the Herald just a few weeks ago advocating open access policies at Yale. Geared toward a general audience, the article discusses open access at Yale, especially in light of Harvard's recent mandate requiring its faculty to upload all of their scholarly work to its new online repository. To follow up, I thought I'd use this space to detail more of what I've learned about recent advances in open access at Yale and other universities.

Harvard has been the leader in the open access movement: on February 12, 2008, after months of discussion, the Faculty of Arts and Sciences voted unanimously to mandate open access for scholarly work. The full text of the proposal can be read here, and you can find an overview at their website here, but in short, the faculty agreed to allow the university to make their work available online in a digital access repository (which, as you can see, already contains almost two thousand articles). The requirement of making work available online does not, however, prevent the author from publishing in a traditional journal. Indeed, many do; this form of open access, in which a university self-archives but still allows traditional publication, is usually referred to as “Green Open Access.” Finally, just to be clear, the system does allow faculty to waive the open access requirement upon request, as long as the faculty member explains the need.

But the digital repository (Green OA) is just one part of the equation on the university side. As detailed in our reading for today, the other part is publication in open access journals, otherwise known as “Gold OA” (a directory of these journals can be found here). This form of open access is much harder to sustain because it requires funding for authors to pay the processing fees charged by OA journals. To this end, Harvard also created the Harvard Open Access Publishing Equity (HOPE) fund to reimburse Harvard authors who aren't grant-funded for processing fees in open-access journals. The fund reimburses up to $3,000 (although no known journal currently charges that much; the average processing fee is about $1,000).

Harvard’s leadership in open access is significant not only because it was the first university in the United States to mandate open access, but also because it provides a model for implementation. In the past, most institutions had been skeptical of the bureaucracy and funding needed for open access (see, e.g., Waters, “Managing Digital Assets in Higher Education: An Overview of Strategic Issues” in the February 2006 report of the ARL), but Harvard has shown that they can make it work.

Since Harvard announced its open access policies, a handful of other universities have adopted similar ones. In particular, Stuart Shieber, the main proponent of open access at Harvard and now the director of its Office for Scholarly Communication, established the Compact for Open-Access Publishing Equity, which currently has five signatories: Cornell, Dartmouth, Harvard, MIT, and UC Berkeley. Outside of this compact, roughly fifty institutions have adopted at least green open access policies; a full list of them can be found here.

Where is Yale in all of this? Currently, Yale has no such open access policies. Last year, the organization Open Access at Yale interviewed seventeen faculty members and found that although there was some faculty interest, open access was little discussed. I recently exchanged emails with Meg Bellinger, the director of the newly created Office of Digital Assets and Infrastructure, about the state of OA at Yale, and she said that while open access was “the topic of much discussion,” most current projects focus on digitization of cultural heritage materials from Yale's art museums. The closest thing I've been able to find to open access at Yale is the new Digital Commons Repository created by the law school, but it's still vastly underutilized (faculty submissions from this year can be counted on one hand).

Obviously, Open Access is not something that can appear overnight. Stuart Shieber at Harvard has stressed the importance of faculty support for the issue, and this can only come with sustained advocacy and discussion. With time and effort, I hope that Yale too will soon join in on this global and open exchange of ideas.

Newscorp’s Price Barriers and The Survival of the Wall Street Journal – by “Alexander F”

Newscorp's Head Honcho: Rupert Murdoch

At a recent symposium in Washington, D.C., media mogul Rupert Murdoch told a room full of leaders in academics, economics, and multimedia that “There's no such thing as a free news story.” As a blanket statement this is clearly overblown, but for big media it is the sad but true reality.

The fact of the matter is that the vast majority of news available right now IS free for viewers to read. Beyond the cost of an internet connection, a person can freely access news sources from around the world, and can frequently read free versions of print news that costs money in the tangible realm. Mr. Murdoch is trying to change all of this, though. In a recent announcement, Murdoch disclosed that a new arrangement is in the works between Microsoft and one of his media properties, The Wall Street Journal. In this deal, Google would no longer be able to sift through the WSJ's articles; instead, Newscorp (the WSJ's parent company, with Murdoch at the reins) would give Microsoft's new search engine Bing exclusive search rights to its content… for a fee.

Now, Newscorp is no stranger to internet paywalls. The Wall Street Journal itself is probably one of the most famous cases of an online news service requiring a fee for full access to its articles. Interestingly, though, on the WSJ's website only the first two paragraphs or so of a “paid” article are visible to non-subscribers, and until a few months ago Google offered a loophole: if you copied and pasted the article's URL into Google, the article could be read in whole. Clearly this was fueling some distaste between Newscorp and Google, and although Google has recently updated its “First Click Free” policy to put more power into the hands of publishers, it seems to have been just a little too late.

But why start putting up paywalls and selling exclusive search rights? Arianna Huffington, also in attendance at the symposium, eagerly dismissed Newscorp's search engine dealings. In fact, during the course of the two-day event she was even quoted as saying that “Promiscuity is not a good thing in relationships but it's a great thing in news.” While all of my touchy-feely side wants to agree with her, it's a little irresponsible to jump entirely on that bandwagon.

Unlike The Huffington Post, Arianna's internet news source, The Wall Street Journal actually pays its writers. There is no way in the foreseeable future for the WSJ to survive on a selection of “citizen journalists” and celebrities who either want to promote social ideals or get some published recognition. Doing some of the things the WSJ does costs substantial amounts of money, and with online ads not nearly as profitable for newspapers as, say, a full-page ad in their printed counterparts, many are struggling to survive financially. The WSJ's managing editor, Robert Thomson, even said recently that making all of the Journal's content free online would cost the jobs of around 300 reporters. With this constraint in play, it's nearly impossible for some news sources to continue functioning online entirely for free.

So this leads to the burning question: how much of the internet's journalistic core will succumb to charging some fee for its service? It seems much of it will be determined by the nature of the reporting and its costs. The Huffington Post's celebrity commentary on global events costs next to nothing, while the WSJ's investigative dive into something like international money laundering may cost quite a pretty penny. Since we were comfortable for so long paying for print news before the internet, why should the internet, a communication tool, instantly make some previously costly content absolutely free? It's a question burning on the minds of big media as it struggles to make ends meet, and the future may start to show that, in some cases, there really is no such thing as a free story.

Journalism’s New Wave – by “Jakob W”

September 30th marked the initial invitation-only release of the “Preview” beta of the new Google Wave service. Like Google News and Maps before it, Google Wave — a browser-based app that allows one to collaborate with contacts in real time through an AJAX-based document model — arrives on the scene in a developmental stage, with little explanation and largely left to the user to figure out how best to use — it’s even open source since, as Google VP of engineering Vic Gundotra readily admits, “frankly we need developers to help us finish this product.”

Ever since its unveiling (and as more invites are made available), many have speculated on what Google Wave could mean for journalism in the Internet age.

In the moments before even the first 100,000 invites were released, Mark Milian came up with a list of potential uses for the L.A. Times technology blog that could change journalism and the way we interact with it, and in ways more clearly beneficial than other recent mutations, such as the ever-controversial citizen journalism. Possibilities the Wave allows in its current form include live editing (so writers can watch editors' changes and address questions and ambiguities as they develop), real collaborative reporting (during the pen-and-pad era of journalism, having two or three writers on a story was often messy, and it remains rather uncommon to date), and collaborative blogging (which, in its draft/publish/edit form, is similarly cumbersome to co-author). With just a couple of small tweaks, Wave could be even more revolutionary: integrating Google Voice would allow easy incorporation of recorded interviews, voicemails, and text messages into story notes and archives; story update and correction timelines could become more transparent; readers could observe and perhaps even participate during the writing of a story (almost like a wiki); live, paragraph-by-paragraph reader commentary and discussion would be possible; and by combining Google's Polly extension, writers could even poll readers live, mid-story. In essence, Google Wave could allow citizens to take a much more active role in professional journalism, instead of trying to compete with it via questionable Twitter feeds and rumor mill bullflop.

Jeff Jarvis of Buzzmachine, who believes Google Wave has potential to be “the new news,” provides a lucid example of how the above dynamic might develop:

Imagine a team of reporters – together with witnesses on the scene – able to contribute photos and news to the same Wave (formerly known as a story or a page). One can write up what is known; a witness can add facts from the scene and photos; an editor or reader can ask questions. And it is all contained under a single address – a permalink for the story.

As one Scott Blanchard comments on a post in the Wired Journalists Publish2 feed, Google Wave could be especially effective in particularly temporal forms of news updates, like weather and traffic reports. Crowdsourced, real-time collaborative reportage allows for a level of “man on the street” input previously infeasible in journalism.

And yet, these exciting possibilities remain just that. Time will tell how Google Wave’s technology truly impacts journalism — but all accounts thus far point to a very promising future for what many have recently written off as a dying art.

Further reading not linked above:
Google Wave the next social media phenomenon and journalistic tool? (Editors Weblog)
Exploring Google Wave – how could it transform journalism and publishing? (iTWire)
Periodistas 21 (in Spanish)