Twitter vs. Facebook, Open vs. Closed – by “Michael W”

The evolution of Facebook and Twitter has been a tale of two cities. Facebook valued privacy, and in turn, closed systems, for both its users and developers. Twitter, on the other hand, built its site on openness. As a result, Twitter has excelled in public information (like aggregating “real-time” trends, and enabling the Iranian election protests), while Facebook has remained the go-to platform for sharing and consuming private information (like personal photos, small group events, “likes”, etc.). Last week, Facebook founder Mark Zuckerberg announced the company’s new Open Graph platform, which marks a substantial step towards openness for the social network and could have a profound impact on the rest of the web.

To users, social networks are only as valuable as the number of people on them, multiplied by their willingness to share information. Twitter represents 100 million people who are freely sharing their thoughts with the rest of the world (you don’t need to be “following” someone or logged in to access >99% of tweets), but these thoughts are confined to 140 characters, and typically aren’t very intimate. Facebook, on the other hand, gives you no-holds-barred access to the personal information of your friends, but this group usually doesn’t exceed 1,000 people. And so we’ve ended up with two very different treasure troves of data: a little information from a lot of people in Twitter, and a lot of information from only a few people — from a given user’s perspective at least — in Facebook.

To developers who build third-party apps for these platforms, the difference between open and closed can have an even greater impact. From early on, Twitter offered open, flexible, easy-to-use tools to interact with its service, and as a result, today more than 75% of Twitter’s traffic comes from third-party apps. Facebook, in stark contrast, initially adopted a “walled-garden” approach, in which third-party developers were constrained by strict rules, an inability to use Facebook data outside of the site itself, and a dizzying, closed programming interface that made it difficult to code anything for Facebook in the first place. Facebook’s Apps platform flopped (with one notable exception, the social gaming giant Zynga, which allegedly makes more money than Facebook, but that’s another story). The social network tried to play catch-up with a string of services that slowly increased openness at the expense of privacy: its Beacon program allowed the websites of big companies to interact with Facebook directly, but this ignited a barrage of privacy complaints and was shut down; then Facebook Connect enabled any website to replace its sign-in process with Facebook’s, but this didn’t gain particularly widespread adoption; and then last week Facebook finally figured it out.

Facebook’s new Open Graph platform could be the beginning of the end for Twitter. To date, Twitter has competed with Facebook by offering a more open platform, easier-to-use developer tools, and better social integration. With Open Graph, though, Facebook catches up in one fell swoop. Twitter will soon launch its new “Annotations” tool that will enable longer-format tweets more similar to status updates. While this may seem like a step forward, it is a risky move for Twitter, and a sign of the changing tides. At this time last year Facebook was trying to emulate Twitter: they streamlined their posting interface, highlighted the role of public status updates, and emphasized Facebook search. But now the tables have turned, and Twitter is becoming more like Facebook. I worry that for a company that rose to prominence purely on its simplicity and ease-of-use, ANY major feature additions (and Twitter has had virtually none since it started 4 years ago) risk undermining its own reason for existence.

I really hope that Twitter survives the Open Graph wave, because if it doesn’t, Facebook’s singular reign over the web could have some scary implications for users. I’m optimistic, though, because I think Twitter’s true potential — gleaning brand new insights from huge amounts of real-time data — hasn’t even begun to be harnessed.

Facebook’s Open Graph: Less Open, More Facebook – by “Michael W”

Facebook’s recently announced Open Graph platform adds a social layer of information to the websites you already use. When you go to a site that incorporates it (and assuming you’re logged into Facebook, which is a pretty good bet considering nearly 500 million people are) you can see which of your friends have signed up for the service, or read the article, or bought the shoes, or downloaded the band’s album. This social context is a new and important step for the web.
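
Concretely, a site opts into this social layer by embedding machine-readable Open Graph tags in its pages. Below is a minimal sketch (Python standard library only) of how such tags can be read; the sample page is modeled on the protocol documentation’s own movie example, and the parser itself is my illustration, not Facebook’s code.

```python
from html.parser import HTMLParser

class OpenGraphParser(HTMLParser):
    """Collects <meta property="og:..." content="..."> tags from a page."""
    def __init__(self):
        super().__init__()
        self.tags = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property") or ""
        if prop.startswith("og:") and "content" in attrs:
            self.tags[prop] = attrs["content"]

# Hypothetical page markup, following the Open Graph protocol's movie example.
page = """
<html><head>
  <meta property="og:title" content="The Rock" />
  <meta property="og:type"  content="movie" />
  <meta property="og:url"   content="http://www.imdb.com/title/tt0117500/" />
</head><body>...</body></html>
"""

parser = OpenGraphParser()
parser.feed(page)
print(parser.tags)
# {'og:title': 'The Rock', 'og:type': 'movie', 'og:url': 'http://www.imdb.com/title/tt0117500/'}
```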

Google has defined the past decade of the Internet with its subtle but revolutionary insight that the value of a web page is related to the number of other web pages that link to it (which is at the heart of its famed PageRank algorithm). Facebook could define the next decade of the web with a related but critically different idea: that the value of a web page is related to how many of your friends link to it.
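
To make that contrast concrete, here is a deliberately tiny sketch of the two ranking ideas. The four-page “web” and the friend set are invented, and real PageRank weighs links by the rank of the linking page and iterates to a fixed point rather than just counting, so treat this as an intuition pump, not the algorithm.

```python
links_to = {            # page -> pages that link to it (toy graph)
    "a": ["b", "c", "d"],
    "b": ["c"],
    "c": ["d"],
    "d": [],
}

def page_rank_style(page):
    """Global score: how many pages anywhere link here."""
    return len(links_to[page])

def friend_rank_style(page, friends):
    """Social score: how many of *your* friends link here."""
    return len(set(links_to[page]) & friends)

my_friends = {"c", "d"}
for p in links_to:
    print(p, page_rank_style(p), friend_rank_style(p, my_friends))
# Page "a" wins globally (3 inbound links), but its social score
# for this user is 2 -- the same page can rank differently per person.
```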

So what does this mean for the future of the web? Arguably, Facebook’s new Open Graph puts it in a position to become the single most dominant platform on the Internet, and on a scale that we’ve never before seen. With 5 times more users than Twitter, access to deeper and more private information, and now an open policy that enables painless deep integration with third-party sites, Facebook could become the default standard of the Internet. One could imagine a world in which every sign-in process, every comment, even every search funnels through Facebook’s back-end.

This future, though, has frightening implications for privacy and competition. What has made the Internet so great in the first place is that it is the most open platform ever. It was set up without an official central governing body, and with no critical closed standards, so that anyone could use its simple tools to broadcast their information to the world. Facebook’s potential role as a backbone in the system threatens this openness. At the end of the day Facebook is a for-profit corporation with its own interests, and nothing prevents it from using its power to squash competitors, or using its data to entice advertisers at the expense of basic privacy.

The idea of adding a social layer to the web is a powerful one, but having it controlled by a single company is scary. Facebook’s Open Graph is less of an Open Graph, and more of a Facebook Graph. We can only hope that a truly open standard emerges so that the future of the Internet isn’t at the whim of a historically closed company.

Fixing the USPTO for America’s businesses – by “Alexander F”

“Believe it or not, in our patent office — now, this is embarrassing — this is an institution responsible for protecting and promoting innovation — our patent office receives more than 80 percent of patent applications electronically, then manually prints them out, scans them, and enters them into an outdated case management system.  This is one of the reasons why the average processing time for a patent is roughly three years.  Imminently solvable; hasn’t been solved yet.”

— President Barack Obama, Forum on Modernizing Government, White House, January 14, 2010

While growing tides of patent reform may be starting to roll in through Congress, one of the biggest problems facing the state of patents in America has yet to be addressed: the efficiency of the US Patent and Trademark Office.

As it currently stands, the USPTO is in a state of tremendous disorder. It has a backlog of over 1.2 million patent applications, up from roughly 770,000 in the middle of 2009. This growth has been unprecedented, and sadly the USPTO has been unable to keep pace.

The efficiency of this government agency has long been a joke among attorneys, due in part to the technological gaps that the President noted above, but also due to a long and continuing tradition of underfunding and understaffing. In mid-June of last year, the USPTO employed 6,285 patent examiners. This staff can only process about 450,000 applications each year, which has made it impossible to handle the growing load of patents piling up in its backlog.

The Patent Office itself aims for an 18-month turnaround for each of the applications it receives, yet the average processing time is currently estimated to be around 3.5 years, with more complex applications taking even longer. At this rate, the USPTO is only going to sink deeper and deeper into a hole of backlogged filings without a ladder to climb out.

So what needs to be done?

Congress seems more worried about the lawsuits surrounding patent filings, as evidenced by the current patent reform bill in the works, which notably lacks any solution to this gross backlog. I feel that while its intentions are good, Congress has its priorities in the wrong place. Right now, we, as a nation, are trying to pull ourselves out of the financial sinkhole of 2008, and we’re actually getting there, slowly but surely. In the grand plan for economic improvement, much attention has been given to the encouragement of small-business growth and how best to achieve it. The fact is, many small businesses rely heavily upon their ability to secure patent rights for new and exciting ideas before deciding to bring those ideas to market. While the legal battles around patent violations may have gotten a little ridiculous, and Congress should step in, these battles affect the few, while small-business growth affects the nation.

Those 1.2 million backlogged applications represent an incredible amount of potential wealth for the American economy and these small businesses. It is foreseeable, then, that by processing these applications more efficiently, the economy as a whole may feel a boost from small businesses finally securing patents for their exciting innovations. Companies will be founded. Jobs will be created. Markets will re-expand.

Thus, I believe that the focus of patent reform right now needs to drive towards improved efficiency and funding. The USPTO needs a larger staff and resource system that can support the demand being placed on it. It needs the tools to make applications not only digital, but also easy to submit, maintain, and find. It needs to become a component of the American government bureaucracy that is no longer a national laughingstock. Congress needs to help the USPTO get back on its feet, evolve into the digital age, and help America recover and thrive once more.

The Latest Attempt at Patent Reform – by “Avi S”

The need for patent reform has been repeatedly addressed over the past few years in Congress. The latest bill, updating the Patent Reform Act of 2009, is the fourth consecutive attempt to restructure patent law since 1952. Over the past fifty years, technology has dramatically transformed the market and has highlighted the dire need for patent reform.

Proponents of reform describe how the current system offers low-quality patents with broad claims, thus providing high earnings to manufacturers. With inconsistent court rulings and frequently excessive damage awards, patents have in a sense strayed from the innovative purpose they were intended to promote. More recently, the Supreme Court has started to work towards improving patent quality and remedies.

US Senators Patrick Leahy of Vermont, Orrin Hatch of Utah, and Chuck Schumer of New York have announced a bipartisan agreement to address these problems more directly. “The Patent Reform Act” aims to improve “patent quality” and to create an efficient system that enables stricter granting of patents. Moreover, the Act serves to restrict damages calculations in many cases. Under this system, broad claims will not succeed in securing patents and will be prevented from hindering innovation in the marketplace.

Sen. Leahy

Among the various reforms, the Act creates a “first-window post-grant process” that is meant to provide a method to challenge patents that should not have been granted in the first place. Additionally, the Act changes the law to give a patent to the first inventor to file for one, and permits other parties to challenge pending applications. Furthermore, the Act will change the troubling and ineffective administrative review of patents: an adversarial inter partes review led by a panel of patent judges will serve as a potential alternative to litigation. The Act hopes to restore the value of patents so they serve their purpose. By reducing costs and creating a more productive process, the bill aims to create more jobs and more freely allow for innovation.

Beyond the members of Congress backing the Act, various high-tech, pharmaceutical, and manufacturing companies, as well as universities, support this reform. According to Senator Leahy, the Act “is fair, reasonable and necessary for our continued economic development.”

Critics of the bill maintain that these revisions will work in favor of patent violators instead of inventors, and thus impede future innovation in America by limiting the patent owner’s ability to enforce certain rights. Some argue that this reform should remain the task of the courts, not Congress. Others argue that the bill does not go far enough and lacks the reforms necessary to effect change.

If this bill passes, time will reveal how well these reforms advance the goal of promoting innovation, and will demonstrate what reforms are still necessary. Until then, the debate will likely go on for some time.

Ready, Set, File! – by “Reynolds H”

On March 4th, bipartisan leaders of the United States Senate Judiciary Committee released the details of legislation aimed at reforming US patent laws. The new bill makes significant steps toward resolving longstanding conflicts in efforts to revamp US law for patent quality and efficiency, and to make it more compatible with international laws. The new bill, referred to as the “managers’ amendment”, is said to provide a much-needed boost to innovation. However, the bill must still go to the full Senate for a vote, and must be passed in the House of Representatives as well.

The “Managers’ Amendment” includes nearly all of the improvements to the patent laws that were part of the Patent Reform Act of 2009, and contains changes to better balance the interests of all users of the patent system. One of the reforms is particularly intriguing: the transition to a first-to-file system.

Currently, the USPTO runs under a “first-to-invent” system, which, as the name suggests, gives rightful ownership of a patent to the first inventor of the innovation. Under this system, a person who invented something first but failed to file with the USPTO can be retroactively awarded the patent, thereby stripping it from the person who filed first. This system creates many situations in which patents must be disputed between two (or more) parties. The complications of the first-to-invent system are outlined in this example provided by Wikipedia:

Assume Tom conceives of a new mousetrap on January 1, 2006. Tom works diligently from January 1, 2006, to February 1, 2006, to prepare a patent application, and Tom files his patent application on February 1, 2006. Thus, Tom constructively reduced his invention to practice on February 1, 2006. Assume Jerry conceives of the same mousetrap on January 10, 2006, and diligently files a patent application on the new mousetrap on January 20, 2006. Under the first-to-invent system, Tom is entitled to the patent on the mousetrap, because he conceived the mousetrap before Jerry and still worked diligently to reduce it to practice by filing. Actual and constructive reduction to practice should be distinguished. Filing a diligently prepared application constitutes constructive reduction to practice only and can be antedated by evidence of an actual reduction to practice.  If both Tom and Jerry claimed the same invention, the USPTO would institute an interference proceeding between Tom and Jerry to review evidence of conception, reduction to practice and diligence.

As a further extension of the example, assume Tweety conceived of the same mousetrap on December 31, 1990. Tweety never told anyone about the mousetrap and did not work on reducing the mousetrap to practice for many years due to financial reasons. Tweety finally actually reduced the mousetrap to practice on February 15, 2006. Because Tweety did not diligently work to reduce the invention to practice in the period before others’ conception of the same invention, he is not entitled to a patent over Tom or Jerry.
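
To make the contrast concrete, here is a toy Python sketch (with invented helper names) of how the two priority rules pick different winners from the mousetrap example above. It deliberately ignores diligence and reduction-to-practice, which the real first-to-invent analysis turns on.

```python
from datetime import date

claims = {
    # inventor: (date of conception, date of filing), from the example above
    "Tom":   (date(2006, 1, 1),  date(2006, 2, 1)),
    "Jerry": (date(2006, 1, 10), date(2006, 1, 20)),
}

# Earliest conception wins under first-to-invent;
# earliest filing wins under first-to-file.
first_to_invent = min(claims, key=lambda who: claims[who][0])
first_to_file   = min(claims, key=lambda who: claims[who][1])

print("first-to-invent winner:", first_to_invent)  # Tom (conceived Jan 1)
print("first-to-file winner:  ", first_to_file)    # Jerry (filed Jan 20)
```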

Proponents believe that this transition will eliminate the confusion over patent ownership that leads to interference proceedings, which both drain USPTO funds and degrade the overall efficiency of the US patent system. Under a first-to-file system, the argument over who initially invented the innovation is moot, thereby allowing USPTO funds and time to be diverted towards further review of the influx of patents. The transition also enters the US into the same system used by many of the world’s leading nations (such as the UK and Japan), which removes a major complication in the move toward a more internationalized patent system.

However, there are plenty of potential drawbacks to this transition. A first-to-file system gives inventors an incentive to file patents for innovations that may not be of high enough quality to deserve one. From a practical standpoint, if emphasis is placed on the speed at which a patent can be filed, the overall quality of the innovation may decrease as a function of speed. The bill has not been passed yet, so only time will tell if the pros outweigh the cons; hopefully it doesn’t turn patent-seeking into one big rat race.

– by “Vance W”

The real threat to intellectual property is not new modes of distribution, but rather the blatant recycling — or better yet — the blatant regurgitation of preexisting works without regard for their original context. Nuances in the duration and expiration of copyright (and its reform) don’t mean a thing if culturally we aren’t equipped to reject false creativity. What’s the point of protecting something if we can’t agree on what’s worth protecting in the first place?

I want to “talk” about this conceptually — from the standpoint of a maker — in hopes of illustrating how lessening copyright restrictions and increasing distribution is ultimately better for cultural output. This approach would reject locking up icons, symbols, and motifs, in order to increase the public’s orientation towards original works of merit.


Here, Charles M. Schulz’s Charlie Brown and Matt Groening’s Bart Simpson are married to produce artist and designer Ethan Buller’s new collage. The result is a highly sophisticated and playful illustration that — in the transparency of its references — becomes a NEW cultural product that (lucky for me) visualizes my very point. And, in doing so, is also illegal.

I would like to propose Kesha’s “Tik Tok” as the [legal] counterpoint: like many things American, it blatantly co-opts other global symbols and re-introduces them into pop culture to produce a commodity, without thanks for the works’ original intention.

Kesha Tik Tok

Uffie The Party

Playing Clue 2.0: “It was Olivia in the Lab with a Candlestick” becomes “You’re 78% sure that guy in your English class is gay.” – by “Evin M”

If you understand this commercial, you know how to play the game, too.

In the age of Facebook, myriad popular sites offer user-friendly experiences online to willing participants of all demographics. The moderately tech-literate have become habituated to handing over personal information about themselves in exchange for access to internet services. That expository act is rarely mulled over by users, often because the alternative would shut them out from activities which are increasingly becoming socially requisite. Social networking platforms add social value to displaying personal information publicly, thereby making it widely available to other users, the platform itself, and any privy third parties. For the most part, these high-profile identification games have stayed in the academic and commercial arenas, fueling research and product advertising. However, in February of 2009, 4chan users demonstrated their own gumshoe prowess with more benevolent applications: outraged by a video of a boy abusing his cat, 4channers identified him and contacted local authorities. The suspect was arrested, the feline rescued, and the high-stakes game of Clue formerly played by big business and universities yielded unforeseen consequences. From these examples, it is easy to extrapolate the more sinister and invasive uses of data mining and brokering–an emerging frontier novel and amorphous enough to evade regulation to date.

Recent data mining experiments have further exposed the prevalence of persistent identity, a consequence of the public’s robustly developed online personalities, which has tied real-world individuals to their internet personas more closely than ever before. These projects sift through nameless banks of detailed information about users’ internet behavior, using their habits to reverse-engineer their real-life counterparts. At MIT, a pair of students Facebook-stalked over 4,000 profiles, analyzing details like friend circles and tracking identifying traits until they were able to predict whether a guy was gay from the information displayed on his page. Their final product–78% accuracy–is an undeniable indicator of the trail of breadcrumbs that leads from our online identities to our real ones. A similar project allowed Carnegie Mellon researchers to dig up people’s place and date of birth and use that information to uncover their Social Security numbers.
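
The inference at work here is simple enough to sketch. Below is a stripped-down, hypothetical version of the friend-network idea: guess a hidden attribute from the share of a user’s friends who display it publicly. The toy graph, the attribute labels, and the 50% threshold are all invented for illustration; the actual MIT study used far richer statistical features.

```python
friends_of = {
    "alice": ["bob", "carol", "dan"],
    "bob":   ["alice", "carol"],
}
# Attribute values some users display publicly on their profiles.
displays_attribute = {"bob": True, "carol": True, "dan": False}

def predict(user, threshold=0.5):
    """Guess a user's hidden attribute from friends' public ones."""
    known = [displays_attribute[f] for f in friends_of[user]
             if f in displays_attribute]
    if not known:
        return None  # nothing to go on
    return sum(known) / len(known) >= threshold

print(predict("alice"))  # True: 2 of 3 friends display the attribute
```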

Social networking tools have become the most powerful and comprehensive information aggregators ever, encouraging users to submit and disseminate every intimate detail of their lives. Supreme Court Justice Antonin Scalia was taken aback by the extent to which data collection could produce comprehensive infocaches, a gasping response that I found amusing. Dedicated online services, like Netflix, are able to draw accurate, specific conclusions about their users from the information those users inevitably share as part of using the service. Federal discomfort and private litigation brought Netflix’s crowdsourced research into predicting its users’ movie tastes to a screeching halt. The F.T.C. and Congress are squirming in their seats, making statements about how third parties, like advertisers, have access to far too much information about internet users’ habits. I find their surprised tone humorous–how did they not see this coming? Their response has been more jaw-dropping shock than concrete action, illustrated by flimsy mitigation suggestions like a “do not track” list similar to the “do not call” list. Wise up, guys. If you share information on the internet, odds are that you’re sharing it with the world. Neither the architecture of the internet nor, more importantly, its users will allow that information to stay contained. We just need to get a little better at playing this new Clue, and leave fewer identifying footprints in our digital wake if we don’t like the implications of persistent identity.

Reforming Reform: Why We Must Take A Step Back Before Moving Forward – by “Kate H”

The ensuing outcry following recent news about the Anti-Counterfeiting Trade Agreement (ACTA) between countries like the United States, Japan, Canada, Mexico, Singapore, New Zealand, and the United Arab Emirates demonstrates just how relevant copyright reform is today. Proposed systems like the 2003 Public Domain Enhancement Act offer steps in the right direction. For example, the PDEA would require a copyright owner to pay a $1 renewal fee fifty years after the work is first published, and another $1 every ten years afterwards until the end of the copyright term, if he wished to retain the copyright over his work. Otherwise, the work would automatically fall into the public domain. I posit that such a system would be a good step, as it would, at least in theory, encourage copyright holders to consider the value of their work: whether holding the exclusive right to a commercially unviable work is really worth restricting others’ use of it. It would also serve to identify which works are protected, and provide information on how to contact copyright owners, which would serve those owners’ interests, as parties wishing to pay for the use of a work would be able to find them.

I'd love to be able to contact these guys...

But while small steps like the PDEA are necessary and international trade agreements like ACTA are inevitable, we have larger debates to consider. How can we reconcile technologies like YouTube’s Content ID system with current copyright laws regarding fair use? When YouTube removed hundreds of Hitler “Downfall” videos yesterday, it relied on an algorithm to identify the use of a copyrighted work, but did not bother to determine whether any illegal activity had actually transpired. Now the burden is on the remixers to dispute the takedown. The Public Domain Enhancement Act would do nothing to prevent this detrimental behavior.
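
To see why such an algorithm can’t weigh fair use, consider a toy fingerprint matcher. Real systems like Content ID compare perceptual audio and video fingerprints; the text shingles below are only a stand-in, invented to show that any reuse of the reference clip triggers a match, parody or not.

```python
def shingles(tokens, n=3):
    """Overlapping n-token windows, a crude stand-in for a fingerprint."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

reference = "scene one scene two bunker rant scene three".split()
remix     = "new subtitles over bunker rant scene three".split()

overlap = shingles(reference) & shingles(remix)
# Any shared window flags the remix; whether the reuse is a parody,
# commentary, or other fair use never enters the computation.
print("match!" if overlap else "no match")  # match!
```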

And what about deciding which types of works deserve copyright protection? In Pamela Samuelson’s “Preliminary Thoughts on Copyright Reform,” she mentions that in the past, people have been reluctant to make rulings on emerging copyright issues because they don’t know how the technology will develop. She notes that in a 1965 hearing, one man decided “it would be a mistake, in trying to deal with such a new and evolving field as that of computer technology to include an explicit provision [on computer-related uses] that could later turn out to be too broad or too narrow.” Making decisions on emerging and new forms of works can be nearly impossible, especially with the influence of industry (or not) lobbying for certain results. But what about works that have been around forever and have never received a definitive ruling? Consider fashion, for example.

Stripes!

Currently, there is no copyright protection for fashion, yet certainly many designers consider their work worthy of the same protections afforded to architecture, which is protected under current law. In 2006, Congressman Bob Goodlatte introduced the Design Piracy Prohibition Act (DPPA), which would create a three-year copyright for fashion designs and would arguably work to counteract counterfeiting in the industry. Producing a popular design requires a lot of effort and money, and it is relatively easy for another designer to swoop in and recreate a successful item. But while the industry has shunned such copying (see a recent New York Times blog post about this subject and other opinions), arguably fashion designers are spurred to innovate precisely because the fashion cycle turns so quickly and new designs are always in demand. (Summer clothes just don’t work so well in winter…) Also, while the act would serve to protect the big players in the fashion world, indie designers would not have the legal resources to ensure that every one of their designs was not “closely and substantially similar” to an earlier work, and would be susceptible to legal intimidation that could prevent them from designing anything at all. For fashion, copyright would just harm the industry, waste resources, and provide no additional incentives.

Those are exactly the big-picture issues we must consider now. The way we consume movies, books, podcasts, blog posts, news reports, songs, games, and other copyrighted works is changing rapidly, perhaps in the direction of the fashion industry. In many realms, we most highly value first movers and creativity, and those first movers actually benefit from the dissemination of their ideas, or derivative works. We still throw around the statistic that only two percent of works between 55 and 75 years old continue to retain commercial value, but as we create more and more with new incentives (see the upcoming project on reputation economies) this no longer represents the truth. What percentage of Twitter posts retain commercial value even two days later? (Did you hear that Michael Jackson died?) What percentage of newspaper articles retain commercial value after a couple of months? We must reconsider the fundamental way we wish to protect these works. Before we can even work towards change, we must figure out what we want.

Bring your own camera to Canada – by “Heather R”

Earlier in the semester we read about the numerous extensions of copyright term in the twentieth century.  Often the reasoning for the extensions centered on compliance with some international standard.  The term was extended because otherwise “American artists would be at a disadvantage”.  That reasoning seemed strange to me, but I wrote it off as a rationalization for a decision that was motivated by thinly masked corporate interests.  Then predictions and leaks about ACTA started circulating, and it all got even more confusing.  Since when did copyright enforcement require international cooperation and coordination?  I guess copyright is an issue with international scope, but it doesn’t seem like something that should be internationally enforced.  As a gut reaction, copyright just didn’t seem like an issue to be handled by a treaty.  An international summit did not seem like the appropriate venue to determine copyright policy that would be applied in the United States.  It felt strange to me, but I couldn’t really articulate why.

Then I read Lawrence Lessig and Jack Goldsmith’s article in the Washington Post, “Anti-counterfeiting agreement raises constitutional concerns”.  While reading that article I felt like a short kid standing behind a slightly taller and more articulate kid, interjecting “Yeah, yeah, what he said” after each point the tall kid made.  Apparently there was a reason why I felt strange about ACTA: its adoption as a sole executive agreement would (possibly) be unconstitutional.  According to Lessig and Goldsmith, sole executive agreements have historically been used very rarely and for very specific purposes, and mandating copyright policies is not one of them.

Aside from the potential unconstitutionality of a sole executive agreement pertaining to intellectual property, intellectual property policy does not seem like a subject that is appropriate for international discussion.  It’s property law.  Copyright isn’t a human rights issue; it’s not something with a “right answer” and a “wrong answer”; there’s no good and evil.  Intellectual property is an abstract form of property that our government decided to protect in order to incentivize creation.  Even if you assume that every country wants to incentivize creation, not every country will agree on the most productive way to do that.  It’s nothing but strategy.  This means that even countries that share similar core beliefs may have different notions of how to protect intellectual property.  In Canada, for instance, there are special copyright rules for photography: the copyright of a photograph is owned by the person who . . . owns the camera? What?

This is a strange law, but is somewhat understandable considering the abstract nature of intellectual property.  There are no objectively right answers, so how can we internationally agree on one single policy?

PDEA: The First Step to Copyright Reform? – by “Avi S”

From the start of this course we noted the difficulties involved in applying copyright law to the digital age. Computers, the internet, and technology have created challenges and dilemmas that go far beyond what the writers of copyright laws could have envisioned. These laws were created in a pre-digital era, before notions of remix culture or Google Book searches. Additionally, some 20 amendments to the 1976 Act have added to the complexities and ambiguities of copyright law. As such, the most controversial topics have been dealt with largely based upon the view of each individual court; in this sense there is a big gap in the legal text, in which an individual is subject to the whims of each court’s interpretation of the law. Pamela Samuelson views the need for copyright reform as a continuous task: “If one considers, as I do, that the 1976 Act was the product of 1950/1960’s thinking, then a copyright reform process should be well underway, for copyright revision projects have occurred roughly every 40 years in the U.S.” Yet Samuelson correctly acknowledges the difficulty of arriving at this reform in practice.

Expansion of US Copyright Law

A bigger debate within copyright law is the issue of the duration of copyrights. Earlier in the semester we read Mark Helprin’s article “A Great Idea Lives Forever. Shouldn’t Its Copyright?”, in which Helprin argues for preventing copyrighted works from ever entering the public domain. Yet as we discussed, Helprin’s viewpoint is extreme and contradicts the very essence of copyright and intellectual property law. As James Boyle explained: “Intellectual property is also supposed to create a feedback mechanism that dictates the contours of information and innovation production.” Thus, when interpreting or considering reform of copyright law, one must recall the Framers’ intent “to promote the Progress of Science and useful Arts.”

Framers of the US Constitution

Now with this background, it is easier to analyze potential modifications to the duration aspect of copyright. Certain studies have revealed that a meager 2 percent of works 55 to 75 years old retain any commercial value–yet these works are still kept out of the public domain. This system appears to be in direct contradiction of the Framers’ intent: a potential individual or artist is prevented from developing the work further, even though allowing it would impose no loss on the initial creator.

The Public Domain Enhancement Act seeks to remedy this problem. In short, this bill attempts to place more works into the public domain, allowing more people to further innovation and create new works. Under the PDEA, copyright holders must pay a $1 renewal fee fifty years after the work is published, and every ten years thereafter until the copyright expires.
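
As a back-of-the-envelope illustration, here is the renewal schedule the bill implies, sketched in Python. The function name and the sample dates are mine, and the 95-year term in the example is just one possible term; the point is only how few $1 payments full protection would cost.

```python
def pdea_renewal_years(published, term_ends):
    """Years in which a $1 fee is due to keep a work out of the public domain:
    first at publication + 50 years, then every 10 years until the term ends."""
    return list(range(published + 50, term_ends, 10))

# e.g. a hypothetical work published in 1960 whose term runs to 2055:
print(pdea_renewal_years(1960, 2055))
# [2010, 2020, 2030, 2040, 2050] -> five $1 payments in total
```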

Personally, I find it hard to believe that such a tiny fee will deter copyright holders from extending their copyrights. Even with no commercial value evident, I would think that a copyright holder would hold on to the rights in hopes of the small chance of an unforeseen future licensing project, or simply out of a desire to retain them. Yet the PDEA’s proponents maintain that history proves a great majority of owners will actually pass up on holding the rights, and that this nominal fee will do the job, thereby passing more works into the public domain. A more important and practical consequence of the PDEA is that it will make it easier to identify and contact the copyright owners of works. In the current state, one is liable for infringement for unintentionally violating another’s copyright; yet it is increasingly difficult to even identify copyright holders without any registry. This aspect of the PDEA seems essential for ensuring that one can locate which works have protected status. Though I am not sure how useful the works will be fifty years after publication, I believe they may provide some benefit for innovators. An informative site on the PDEA can be found here.

While the PDEA seems to keep the framer’s intent intact, there are those who argue against passing the bill, particularly companies in the entertainment business. Wikipedia lays out the core argument of the Motion Picture Association of America (MPAA):

  1. “Congress had already ‘firmly rejected’ the concept of copyright renewal in the Copyright Act of 1976, which eliminated the need for registration and renewal of copyrighted works.
  2. The $1 fee would harm copyright owners, particularly those with large numbers of active and potentially commercially viable works.
  3. The extension fee would encourage copyright restoration, a process that re-asserts copyright over a public domain work that originated outside the US and for which US copyright was not renewed.
  4. The benefits would fail to justify the administrative costs needed to set up and fund a registration system.
  5. The MPAA argues that current law already allows for the creation of derivative works via licensing and release of rights.”

In my opinion, many of these claims lack any real validity. Here’s how I see it:

1-Congress may have “firmly rejected” copyright renewal in 1976, but that was 34 years ago and things have changed. Different times call for different measures.

2- A $1 fee would “harm” copyright holders? Seriously? Even if someone held 1,000 copyrights, would $1,000 really “harm” them? Moreover, if they are being harmed, they must be losing out on some aspect of retaining the copyright. If they want to keep it, they will likely still be collecting revenue on the work, and a $1 payment should not make a difference. At the same time, at least according to the PDEA’s proponents, $1 may be enough to deter copyright-holding for purposes that harm society and run against the reason for the law itself.

4- The benefits of this system to society will likely outweigh any of the costs of registration. Additionally, a registry identifying which works are copyrighted is crucial for helping one contact a copyright holder for licensing purposes. The benefits are enormous; the costs of organizing such a system will be paid off pretty quickly.

5- Sure, the law allows for licensing; but with the PDEA, identifying a copyright holder will be much easier. Additionally, the whole point of the PDEA is to increase the ability to create derivative works by placing more works in the public domain. Yes, there is the ability to license copyrighted works at the moment, but the PDEA wants to increase the pool of freely usable works so as to promote the progress of the arts and sciences.


At the same time, I would hope that the PDEA is just the first step in opening up works to the public domain. It may be more effective, and likely more controversial, if the fee were increased and the fifty-year renewal period shortened as well. Either way, the petition to members of Congress can be signed here.