Apple v. Psystar and the undying EULA question – by “Michael M”


The first major battle in a case with potentially significant implications for the enforceability of EULAs has just been fought, and the result was disappointing. Judge Alsup, a federal judge in the Northern District of California, has granted Apple summary judgment on its copyright claims in Apple v. Psystar. Apple, of course, is the major California computer company. Psystar is a small Florida company that gained notoriety in April of last year for its “Open Computer,” which optionally shipped with Mac OS X installed (a legally purchased, retail copy of the operating system was also included in the box). Apple was predictably displeased with Psystar for selling what are essentially Hackintoshes and sued the company in June of 2008. Other Mac clone manufacturers have popped up since, but Psystar became the movement’s figurehead, and its litigation with Apple promised to test the legality of the cloning business model. That test has so far gone very poorly for the cloners.

How Do We Fight Bad EULAs? – by “Michael L”

Not actually about fighting bad EULAs, but c'mon, xkcd is always a good thing.

After doing this week’s reading, it’s easy to get the feeling that there’s little we can do to fight bad EULAs. And let’s be honest, there isn’t much–at least for the individual user. That said, there have recently been cases where popular services changed their terms of service because of the public’s distaste for a few egregious terms within them. Remember earlier this year when Facebook changed its TOS to say that it kept the rights to your content even if you got rid of your account? People got mad, they complained, and Facebook caved and went back to its old TOS. Similarly, there was a situation last summer in which it appeared that the terms of service for Google’s Chrome browser gave Google the rights to anything you sent through the browser–again, after people complained, it was changed. While these situations were hardly the same (Google’s TOS problem seems to have been the result of a mistake, whereas Facebook’s seemed more deliberate), they have in common that the problem was fixed after enough people complained about it. This of course isn’t an entirely satisfying solution, but it is good to know that if people get angry enough, companies do respond.

The other important step one should take as a consumer is to actually make some attempt to read agreements before clicking through them–even if it’s just a quick skim. While there’s not much you can do if you don’t like the terms (except perhaps give your business to someone else), at least you’ll be aware of them. And sometimes you’ll be pleasantly surprised (I’m a big fan of Google’s affirmation of my intellectual property rights, something I never would have known about if I hadn’t read the terms). If you want to be extra vigilant, you could even check the EFF’s “TOSBack” site from time to time: it tracks changes to various terms of service agreements (there’s even an RSS feed if you’re uber-nerdy). After all, someone has to notice harmful changes to these agreements before people can get angry about them.

Lastly, I’d be up for creating some sort of EULA hall of shame, much like the EFF’s DMCA takedown hall of shame. While there already seems to be a site that attempts to do this, it’s far from well done or thorough (check it out at http://www.eulahallofshame.com/). Such a site, if done well, would be useful in that it would draw attention to particularly bad abuses of licensing agreements. And, after all, ridiculing sketchy practices by companies is fun. Let me know in the comments if you’re interested.

In honor of Cory Doctorow, I’d like to end this blog post the same way he has ended several of his blog posts about EULAs (and I can, because Boing Boing uses a CC-BY-NC license, I’m giving him credit [Thanks Cory!], and I’m gonna go ahead and say this blog post is CC-BY-NC-SA, since I can’t seem to find a license for the site as a whole):

READ CAREFULLY. By reading this blog, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (“BOGUS AGREEMENTS”) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.

Reading the Fine Print – by “Michael C”

"Handcuffed" by the EULA

A few weeks ago, the highly anticipated video game Borderlands was released on video game consoles. Many PC gamers were also eagerly awaiting the release, but were forced to wait a little longer when the PC version was delayed for a week. Despite the delay, some industrious gamers were able to purchase copies from stores selling the boxed PC version ahead of its release date. These gamers excitedly returned home, popped in the disc, and installed the game. However, when they tried to play, the online servers refused to authenticate their copies before the release date, so the game did not work.

Why did this happen? According to Hal Halpin of the Entertainment Consumers Association (ECA), unlike purchasers of console copies of the game, who are buying the actual product, purchasers of the PC game are buying a license. The disc that a gamer purchases is only a means of providing the game’s data, not a product in itself. In purchasing a license, one is restricted to the terms that the publisher includes in the End-User License Agreement, or EULA. In this case, the EULA states that the game requires activation, and that activation will not be made available until the release date. By agreeing to the EULA, consumers are agreeing to these conditions.

Inevitably, early purchasers of the PC version of Borderlands were quite surprised that their copies were temporarily useless. According to 2K Games, the publisher of Borderlands, they shouldn’t have been—the restrictions on activation were outlined in the game’s EULA, presented to the player during the installation process. Then again, who actually reads the EULA? This particular case speaks to a larger problem involving EULAs. They tend to be long, complicated documents that most people skip right over in their eagerness to install and try out the new application or game that they’ve just purchased. However, these documents are important to the consumer, as they outline the terms of the legally binding license that the consumer is entering into by clicking “yes” (hence the term clickwrap).

Publishers should make these EULAs more user-friendly, by simplifying the language used in them, putting the most important parts at the top (where users are most likely to see them without scrolling), and possibly moving towards some type of standardized EULA. Fortunately, the ECA is working to push publishers to create more open, standardized EULAs. If the EULA is made more accessible to the average consumer, then he or she will be more likely to spend a few moments reading it to understand the terms of the license, rather than automatically hitting “yes” to avoid the complicated and lengthy document.

Technology is no match for laziness. – by “Michelle V”

It's totally a reasonable modern analogue. Jefferson would have been all about crypto.

From the Oxford English Dictionary online:

cybersecurity n. – security relating to computer systems or the Internet, esp. that intended to protect against viruses or fraud.

Protecting yourself is hard. Nothing we do in the real world is ever perfectly safe and secure, and yet for some reason people expect that when information has been digitized, it should be safe. Maybe the thought is that if you can’t see what happens to your credit card information when you buy things online, then no one else can either. Wireless internet is almost like magic – you type things into your computer, click a button, and poof! The information just sort of whizzes through the air from your computer to someone else’s computer! Look, ma, no strings!

… Right?

Obviously not, but that’s the way most of us treat digital information. Since we don’t see what happens to it, we think that no one else can.

There have always been hackers. There will always be hackers. There are measures we can take to protect our personal information, and they are pretty good. They are not, however, perfect. MD5, designed in 1991 by Ron Rivest, was one of the most widely used cryptographic hash functions. A flaw was discovered in 1996, and then in 2004, further analysis revealed that it was much less collision-resistant than it had previously been thought to be. A weakness has also been discovered in SHA-1, the other widely used cryptographic hash function. Because of this, these hash functions are being phased out of use (MD5 is no longer used in protocols like digital signatures, which rely on collision resistance), and will be replaced with a similar family of functions, SHA-2. Currently, the National Institute of Standards and Technology is in the middle of selecting a new hash function (SHA-3) to replace SHA-2 if – or really, when – its weaknesses are discovered. Similarly, AES (having replaced Triple-DES, which replaced DES) will eventually need to be replaced as its weaknesses are revealed and exploited.
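To make that concrete, here is a minimal Python sketch (standard library only; the messages are invented for illustration) of what a cryptographic hash function does and why collision resistance matters:

    import hashlib

    message = b"Pay Alice $100"

    # The same input hashed with MD5 (now considered broken for collision
    # resistance) and SHA-256 (a member of the SHA-2 family replacing it).
    print("MD5:    ", hashlib.md5(message).hexdigest())
    print("SHA-256:", hashlib.sha256(message).hexdigest())

    # Changing even one character yields a completely different digest.
    # Digital signatures sign the digest rather than the full document,
    # so an attacker who can find two messages with the same digest (a
    # collision) can transfer a signature from one to the other.
    tampered = b"Pay Alice $900"
    print("SHA-256:", hashlib.sha256(tampered).hexdigest())

Run it and the two SHA-256 digests bear no resemblance despite the one-character change; it is collision resistance – the difficulty of finding any two inputs with the same digest – that the 2004 MD5 results undermined.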

Ultimately, there is no perfect technological solution to cybersecurity: we can implement all the provably correct programs we want, but this will not protect us from people standing behind us as we enter our private information. Even so, this doesn’t mean we can’t do anything. Just because someone might take a sledgehammer to your door doesn’t mean you shouldn’t lock it – there is no reason to grant people easy access to your belongings, whether physical or digital. Just because we can find strings that produce collisions doesn’t mean that it’s easy to find a collision for a specific string. Security (read, “cryptographic”) measures exist. Use them. Don’t be stupid. Sometimes bad things happen. Remember that secrecy does not equal security. And enable https on Gmail, lest Brad read your chat logs.
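And since “enable https” is the kind of advice that’s easy to nod at and ignore, here is an equally minimal sketch (again just Python’s standard library; mail.google.com is only an example host) of what that setting actually buys you – a TLS handshake, after which everything on the wire is encrypted:

    import socket
    import ssl

    # Open a TCP connection and upgrade it to TLS. A successful handshake
    # also verifies the server's certificate chain, so you know you're
    # encrypting to Google and not to whoever runs the coffee-shop Wi-Fi.
    hostname = "mail.google.com"  # example host; any HTTPS site works
    context = ssl.create_default_context()
    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print("TLS version:", tls.version())    # e.g. TLSv1.3
            print("Cipher suite:", tls.cipher()[0])

Without that handshake, a chat session sent over plain http is readable by anyone on the same network – which is exactly how Brad would read your chat logs.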

Your Big Brother is Watching You! Actually. – by “Andrew C”


In 1949, George Orwell wrote 1984, a novel describing a nightmarish future in which England, fallen to socialism and renamed Airstrip One, is ruled with an iron fist by an oppressive, authoritarian government. In the novel, Orwell describes how, with adequate technology, a regime could dominate the masses through constant surveillance.  Specifically, in the dystopian nightmare of 1984, “Big Brother” monitors every citizen 24 hours a day through the television set in their living room.  Ultimately, Orwell’s vision never truly came to pass, as it is simply impossible for a government to monitor every individual at all times.  However, as surveillance cameras become increasingly inexpensive and sharing media over the internet becomes increasingly efficient, the actions of everyday individuals have come more and more under the eye of public scrutiny.  In today’s world, any individual walking down the street can reasonably expect to be videotaped by anything ranging from remote surveillance cameras, to roving Google Street View vans, to anybody carrying a cell phone purchased within the last five years – and to have that video posted online.  As such, those who commit shameful, illegal, or simply bizarre acts can and must accept that their actions may be broadcast to and scrutinized by an anonymous, faceless horde of users.

Ironically, England, the setting for Orwell’s 1984, has become one of the single most disturbing examples of the loss of privacy in the modern world.  It is currently estimated that there is one surveillance camera for every fourteen British citizens. Recently, the British government has quite literally set aside £400 million to install 24-hour surveillance cameras in the homes of 20,000 families in order to ensure that children attend school, go to bed on time, and eat proper meals.  Even better, the government plans to hire private security contractors to perform home checks on the families in question.  The irony is beyond overwhelming.

However, it has nevertheless become increasingly obvious that no one government can monitor an entire population at all times.  Thankfully, the anonymous masses of the internet have proven more than willing to fill in the gaps the government leaves behind.  As such, we increasingly see “Big Brother” being replaced with your actual big brother (and your neighbors, co-workers, and complete strangers).  In England, one program in particular, called Internet Eyes, would have sent George Orwell into fatal convulsive seizures.  The program, which will go live in Stratford-upon-Avon in late November, aims to harness the power of Web 2.0 by directly connecting live feeds from surveillance cameras to a vast swarm of users.  The theory behind the program is that with more eyes watching the footage, more crimes can be averted.  The website is being promoted as a sort of game which everyday individuals can play in their spare time.  Users, who will be allowed to register for free, will be able to view real-time random video feeds from participating establishments across the country.  Any viewer who spots suspicious activity on a specific camera can anonymously inform the camera owner.  Users will then receive a certain number of points based upon the quality of the alert.  The user with the most points at the end of the month will receive a £1000 prize.  Fantastically, Tony Morgan, one of the founders of Internet Eyes, claims that it will “give people something better to do than watching Big Brother when everyone is asleep.” Put another way: why watch Big Brother when you can be Big Brother?

However, in the end, this website is not simply another game.  Rather, it is a breach of privacy more distasteful than anything ever imagined by George Orwell.  While Internet Eyes may very well be an effective way of preventing vandalism and shoplifting, it also encourages private citizens to spy on their neighbors.  Furthermore, if previous cases have shown us anything, it is that the scope and potential damage of public scrutiny can be far greater than that of a government investigation or surveillance.  While Internet Eyes connects users only to random video feeds, it is impossible to eliminate the possibility that it could reveal potentially damaging private information (e.g., sexual orientation or political affiliation) to one’s loved ones, co-workers, or neighbors.  How could an individual stage a protest knowing that his or her boss might not only disapprove, but be watching? As such, it is nearly impossible to deny that websites like Internet Eyes, and the general trend towards increased scrutiny in the Web 2.0 world, are having a chilling effect upon free speech and expression.

However, the question is not as straightforward as it might seem.  The same features that make constant surveillance and public scrutiny such an insidious problem in today’s world have proven to be an incredibly powerful and valuable tool in fighting oppression across the globe.  Easy access to video cameras and internet connections, and the rise of the generative Web 2.0, have ensured that, in the modern world, the oppressed are finding it increasingly easy to record and disseminate their messages.  In the end, we need to find some way to strike a balance between the benefits of a free and open internet and the dangers of the constant threat of surveillance.

How can we accomplish that? Quite frankly, I’m not sure, but this seems like a fantastic bet.

Cybersecurity Regulations on the U.S. Power Grid – by “Yingqi G”

A 60 Minutes report last week described the possibility that hackers were behind massive power outages in Brazil in 2007 and 2005. The report also describes how U.S. infrastructure is likewise vulnerable to hacking. As we’ve seen with incidents like the 2008 attacks on Georgia, hackers are now often politically motivated professionals serving national governments. This leads to cyber-warfare, in which a government employs hacking to cripple functionality or steal intelligence from other governments. And if one government does it, then every other government establishes its own cyber-units, and we have an arms race on the internet. The New York Times has a good article on the issue.

Instead of discussing cyber-warfare at a more general level, I’d like to talk about cybersecurity concerns in the U.S. electrical infrastructure. To understand the issue, we need to look at the key players in the industry, who they are, and how the government is trying to regulate them.

The U.S. power industry is made up of at least two levels of players: utilities and infrastructure providers. Utilities generally deploy and maintain physical infrastructure purchased from the infrastructure providers. In some cases, they also buy and sell energy from other power companies to meet local demand. Examples of such companies include Connecticut Light and Power and Southern California Edison. You pay these guys for your electricity.

The much bigger players are the infrastructure providers like General Electric, Siemens, and Areva. These corporations manufacture the power plants, control systems, and other infrastructure that utilities purchase and install. GE is U.S.-based, but Siemens and Areva are German and French respectively, both with sizable stakes in the U.S. market. This raises fairly obvious national security concerns about foreign companies manufacturing critical U.S. infrastructure.

Over the past several decades, these power infrastructure corporations have made their monitoring and control systems Internet-enabled, so their software now runs over the same networks as the rest of Internet traffic. This simplifies the network infrastructure supporting the power grid’s management systems, but it also makes the grid vulnerable to traditional Internet-based attacks.

Fortunately, the U.S. government and international agencies are aware of the power grid cybersecurity problem, and are actively legislating and setting standards for power grid security. In the US, we have the Federal Energy Regulatory Commission (FERC), an independent agency within the Department of Energy. FERC is requiring compliance with the North American Electric Reliability Corporation’s Critical Infrastructure Protection standard (NERC CIP) in 2010. NERC CIP details requirements and implementation plans for North American power utilities to harden themselves against physical and, more importantly, digital intrusion. Additionally, CIP requires mitigation plans in the event of actual attacks.

The National Institute of Standards and Technology also has draft Smart Grid cybersecurity standards, addressing an emerging technology that power companies are introducing to increase the grid’s efficiency and robustness. Internationally, we have the International Council on Large Electric Systems (CIGRE), which specifies reference architectures, including the next-generation Energy Management System Reference Architecture, which includes cybersecurity as a key component.

Going back to the national level, FERC is requiring compliance according to NERC’s schedule, which culminates in a 2010 deadline. NERC CIP includes not only security features, but also auditing, incident reporting, and incident response and mitigation plans in the event of a security breach. NERC specifies timelines for implementation and auditing, as well as non-compliance fines of up to $1 million per day. These deadlines put the utility companies on a very tight schedule, one on which total compliance seems fairly unlikely for a significant number of them.

Unfortunately, NERC CIP is just a step forward, not the ultimate solution. Even if all utilities comply with its requirements, we’re still left with the axiom that no piece of complex software is entirely secure. Acknowledging this, NERC CIP requires mitigation plans to control damage when an actual breach happens. NERC is also in the process of revising CIP, with the revisions set for publication in 2010 or 2011.

As the 60 Minutes video also reports, we already know that foreign agencies have penetrated the U.S. power grid’s networks and systems. In the event of a military conflict, there is almost no question that foreign governments would consider using these keys to damage U.S. electrical infrastructure. Considering how little the government used to regulate the power industry with regard to cybersecurity, I’m glad that it is now locking down the power grid as part of its wider effort to secure critical infrastructure. In the end, NERC CIP is probably not enough, but it’s a good start.

Google unveils “unprecedented” privacy dashboard – by “Samuel D”

Much has been made of the dangers of trusting all your private data to Google. Not only does the search giant host your emails and contact lists, but also your entire search history, your blog posts, your documents, your YouTube videos, and even your phone records. In response to growing concerns about what it might do with all that data, Google released the Google Privacy Dashboard this week, claiming to be the “first Internet company” to offer such a product. The official blog post explains:

Over the past 11 years, Google has focused on building innovative products for our users. Today, with hundreds of millions of people using those products around the world, we are very aware of the trust that you have placed in us, and our responsibility to protect your privacy and data. Transparency, choice and control have become a key part of Google’s philosophy, and today, we’re happy to announce that we’re doing even more.

The Dashboard aims to give users greater transparency and control over their data. Users log in to their account and can view exactly what data Google hosts from over twenty products. For each product, the Dashboard provides direct links to the privacy settings for that service. Google concludes, “The scale and level of detail of the Dashboard is unprecedented, and we’re delighted to be the first Internet company to offer this — and we hope it will become the standard.”

Given Google’s grand proclamations about the groundbreaking Dashboard, response to the announcement has been subdued at best. Advocacy group Consumer Watchdog has been one of the most vocal critics of Google’s privacy policies. The organization said the Dashboard was a step in the right direction, but wanted Google to give users “the ability to stop being tracked by the company and to delete information associated with their computer’s IP address from the Google servers.” One advocate added, “If Google really wanted to give users control over their privacy it would give consumers the ability to be anonymous from the company and its advertisers in crucial areas such as search data and online behavior.” The group suggested that Google add a “Make Me Anonymous” or “Don’t Track” button to each service listed in the Dashboard.

Outside of advocacy groups, response to the Dashboard was mostly negative. Tech blog Mashable wrote, “Sure, it’s nice to have all these in one place, should you ever want to review all your private information stored at Google at once, but there’s nothing really new about this list; you could even call it a privacy-related compilation. Unfortunately, it’s also an unpleasant reminder of just how much data you’re giving out to Google (and other online services).” Valleywag noted, “But, really, it just scares the crap out of you. Google knows all.”

The Dashboard clearly was not received as Google anticipated – at best, it is seen as a first step in the right direction. Will Google allow users to remain anonymous and prevent data from being attached to their IP addresses? Will it allow users to instantly delete all their data from Google’s servers? Would it allow the police to subpoena access to a user’s Dashboard? Only time will tell how Google will live up to its promise of “choice and control.”

– Google Dashboard explained on YouTube.

– Check out what Google knows about you here: www.google.com/dashboard

Positive Aspects of Information Brokers – by “Christian C”

Garindan - An Information Broker

In looking at the threat to privacy posed by information brokers indexing information and making it easily available, one might wonder whether there are any positive aspects to the business. Since Zabasearch, one of these information brokers, claims to be just a search engine that searches public records, one could first turn to Google to try to learn about the “positive aspects of information brokers.” Unfortunately, one would be disappointed: searching with quotes yields zero results (presumably more now that this post uses that phrase as its title), and searching without quotes yields around one million. Looking through the first group of results, however, one finds no entries that actually appear to be about the business of selling other people’s personal information. There are pages about real estate brokers, about software to help search libraries, and even information about information brokers, but nothing about why they are a good thing.

One might next examine the website of such an information broker – again, say, Zabasearch. There one finds a nice frequently-asked-questions page asserting that Zabasearch is simply a search engine with no power to adjust the information in its index, as it all comes from public records (which ignores the fact that, for a fee, it offers the expedited service of blocking records – and if a record can be blocked, it could equally be modified). There is also a link to a review at findlaw.com, which examines the regrettable fact that Zabasearch appears to be legal (given the character of the link, the editors of findlaw.com found it necessary to make clear that they are not associated with Zabasearch, nor do they endorse it in any way).

At this point, the immediately obvious places to look for the positive aspects of these businesses have been exhausted, so one must attempt to discover them for oneself. If one leaves aside free-market arguments that all information should be as easy to find as possible in order to let the market function, and assumes that privacy is generally something that should be protected, then there should still be some benefit to these services. The most obvious is that by making public records easily searchable, the public is made aware of the vast quantities of information available as part of the public record – information one might not necessarily want to be publicly available, or at least might want restrictions placed on aggregating. However, this is not a very good reason, since the information brokers are a substantial cause of the problem in the first place. Another justification is that by making information easier and cheaper to find, those who cannot afford the services of private investigators, or the time to search records themselves, can access the information to pursue things like child support payments that the public sector does not necessarily have the time and resources to pursue. The counterpoint is that this also makes it easier for less sophisticated stalkers or abusive former spouses to track down others.

Overall it seems that there are few positive aspects to the existence of information brokers.

Privacy? There’s an app for that – by “Stephen D”

As evidenced by the recent series of Apple commercials proclaiming that “There’s an app for that,” the market for applications is skyrocketing.  Recently, the iPhone app store approved its 100,000th application.  While these applications may increase the functionality of these services, they also create a huge increase in breaches of privacy.  Recently, the iPhone game developer Storm8, responsible for the two popular games Vampires Live and iMobsters, was sued for allegedly collecting the phone numbers of iPhone gamers without their consent.  The lawsuit states that the games included “malicious software code” that transmits the phone number of anyone who plays the game back to Storm8.  The company claims that this was a “bug.”  The most concerning part of this incident is not that phone numbers were stolen (though that is a valid concern), but that Storm8’s developers were able to get “malicious software code” past Apple’s app store approval process.  While phone numbers are relatively benign, it’s possible that a more deviant software designer could get more malicious code past this approval process; for example, many customers store email addresses and passwords on their iPhones.  A hacker with access to these, through a corrupt app, could easily break into someone’s email account and steal their identity.

Similarly to the iPhone, Facebook also has a sizeable number of applications, and these applications also create gaps in privacy.  For instance, many of Facebook’s online quizzes are popular.  However, in order to take these quizzes, users must allow the applications to access their profiles, which contain lots of personal information.  Additionally, some of these online quizzes try to lure users into giving away valuable data: one quiz asked not only for a person’s middle name, but also for their mother’s maiden name.  All of these new apps, on both Facebook and the iPhone, come at the cost of security.

So who is responsible for these apps?  Since Facebook and Apple both closely regulate their approval processes, it seems obvious that they should be responsible.  However, that does not necessarily mean they should be held liable.  Facebook has the stronger legal stance in this respect, as the first time a customer wants to use any Facebook application, they have to agree to a privacy policy stating that the application is authorized to access their data.  I would recommend that Apple adopt a comparable agreement when users download apps, in order to put the responsibility on the developers.  Either way, Apple and Facebook should keep a close eye on this issue, to avoid bad publicity.

Researching online subjects: a few conflicts – by “Chuen-Yee C”

As the use of Web 2.0 technology continues to grow, researchers and developers who decide to study MMOGs, forums, and social networking spaces must face a new strain of ethical dilemma. These new environments are ripe for research on human interactions, social structures, and the nature of fandoms and group generativity; but do the usual rules of research ethics apply to non-traditional spaces?

Rules of ethics exist to protect human subjects from harm. People online are just as human, but how research should be conducted and informed consent obtained remains a somewhat undefined area. Informed consent is required for research, as is the assurance that the researcher will not let the subject come to harm or cause harm. In an online setting, if a researcher is going to be interacting with or studying people, those people have the right to be told, given the chance to opt out, and assured that they will not meet with any harm. But how do you make sure you don’t hurt someone when you’ll probably never know if it happens? Interacting online means you may not know how something really affects a person. People get depressed and can commit suicide because of things that happen to them online or in virtual spaces; real psychological harm can be inflicted.

The experimental model for research doesn’t exactly work when studying existing spaces; researchers are more likely to take a field research approach. But what happens when someone in the “field” doesn’t want to participate? Or when the “field” is perceived to belong to the users who were there before the researcher came along? This obviously causes problems for the researcher’s desire to study the space, and there’s no clear answer as to what form of reconciliation should take place.

The anonymity that comes with being a research subject should not be conflated with the default anonymity that most people assume online. In real life, one glance at a subject, or at least some related background information (untied to the subject’s name), can reveal whether they are part of a “vulnerable population”–children, the disabled, the mentally unstable, and so on. Online, however, it is hard to determine who falls into one of these categories and who doesn’t. To borrow a concept from Rawls, in dealing with the online populace we have to assume a stance behind a “veil of ignorance” and afford everyone the protection given to “vulnerable populations.” By assuming that everybody is vulnerable, we can avoid ethical liability.

Confidentiality is another issue. Most of the time, nobody knows anything about anybody beyond what they choose to reveal, but as demonstrated in the Scalia situation, publicly available information can be readily compiled and trends inferred. When dealing with social media, a great deal of personal information may be within the researcher’s grasp. Screennames are just as much a part of people’s identities as anything else, and can leave a (somewhat incriminating) paper trail if researched thoroughly enough. Gamer tags and forum screennames may go back for years; personally speaking, I’ve carried the same screenname for over ten years.

And what about the researchers themselves? There are myriad perceptions of the Internet, social networking sites, forums, online games, and the people who use or play them. If researchers can’t approach the subjects or subject matter with the unbiased position we expect in lab studies, they probably shouldn’t be studying them. Conflicts of interest may also emerge – say, if a researcher’s relative is a marketer at a firm that uses social networking apps to market its products.

Online spaces are used more and more for social, political, recreational, and economic purposes, and have great potential to reveal a lot about ourselves as individuals and about the larger social picture. However, research on online spaces must be carefully conducted, or else we may just end up perpetuating previous attitudes or gathering inaccurate data. The common definition of an experiment doesn’t work well in online spaces; because the space is different, new ethical guidelines need to be laid out.