Your Big Brother is Watching You! Actually. – by “Andrew C”


In 1949, George Orwell wrote 1984, a novel describing a nightmarish future in which England, fallen to socialism and renamed Airstrip One, is ruled with an iron fist by an oppressive, authoritarian government. In the novel, Orwell describes how, with adequate technology, a regime could dominate the masses through constant surveillance. Specifically, in the dystopian nightmare of 1984, “Big Brother” monitors every citizen 24 hours a day through the television set in their living room. Ultimately, Orwell’s vision never truly came to pass, as it is simply impossible for a government to monitor every individual at all times. However, as surveillance cameras become increasingly inexpensive and sharing media over the internet becomes increasingly efficient, the actions of everyday individuals have come more and more under the eye of public scrutiny. In today’s world, any individual walking down the street can reasonably expect to be videotaped by anything ranging from remote surveillance cameras, to roving Google Street View vans, to anybody carrying a cell phone purchased within the last five years, and to have that video posted online. As such, those who commit shameful, illegal, or simply bizarre acts can and must accept that their actions may be broadcast to and scrutinized by an anonymous, faceless horde of users.

Ironically, England, the setting for Orwell’s 1984, has become one of the single most disturbing examples of the loss of privacy in the modern world. It is currently estimated that there is one surveillance camera for every fourteen British citizens. Recently, the British government quite literally set aside £400 million to install 24-hour surveillance cameras in the homes of 20,000 families in order to ensure that children attend school, go to bed on time, and eat proper meals. Even better, the government plans to hire private security contractors to perform home checks on the families in question. The irony is beyond overwhelming.

However, it has nevertheless become increasingly obvious that no government can monitor an entire population at all times. Thankfully, the anonymous masses of the internet have proven more than willing to fill in the gaps the government leaves behind. As such, we increasingly see “Big Brother” being replaced with your actual big brother (and your neighbors, co-workers, and complete strangers). In England, one program in particular, called Internet Eyes, would have sent George Orwell into fatal convulsive seizures. The program, which will go live in Stratford-upon-Avon in late November, aims to harness the power of Web 2.0 by directly connecting live feeds from surveillance cameras to a vast swarm of users. The theory behind the program is that with more eyes watching the footage, more crimes can be averted. The website is being promoted as a sort of game which everyday individuals can play in their spare time. Users, who will be allowed to register for free, will be able to view real-time, random video feeds from participating establishments across the country. Any viewer who spots suspicious activity on a specific camera can anonymously inform the camera owner. Users then receive a certain number of points based upon the quality of the alert. The user with the most points at the end of the month will receive a £1,000 prize. Fantastically, Tony Morgan, one of the founders of Internet Eyes, claims that it will “give people something better to do than watching Big Brother when everyone is asleep.” Put another way: why watch Big Brother when you can be Big Brother?
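The mechanics described above amount to nothing more than a points-and-leaderboard game. A minimal sketch of that game loop is below; note that the specific point values per alert outcome are invented for illustration, since Internet Eyes has only said that points depend on the quality of the alert.

```python
from collections import defaultdict

# Hypothetical point values per alert outcome -- the real scoring
# rubric was never published, only that "quality" determines points.
ALERT_POINTS = {"crime_confirmed": 10, "useful_tip": 3, "false_alarm": 0}
MONTHLY_PRIZE_GBP = 1000


def month_winner(alerts):
    """alerts: list of (username, outcome) tuples reported during the month.

    Returns the top-scoring user, their score, and the prize they collect.
    """
    scores = defaultdict(int)
    for user, outcome in alerts:
        scores[user] += ALERT_POINTS[outcome]
    winner = max(scores, key=scores.get)  # highest score takes the prize
    return winner, scores[winner], MONTHLY_PRIZE_GBP


winner, points, prize = month_winner([
    ("alice", "crime_confirmed"),
    ("bob", "useful_tip"),
    ("alice", "false_alarm"),
])
# alice wins the month with 10 points and takes the £1,000 prize
```

The gamification is the whole point: by paying out only one monthly prize against an unbounded pool of free labor, the operator gets thousands of hours of monitoring essentially for nothing.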

However, in the end, this website is not simply another game. Rather, it is a breach of privacy more distasteful than anything ever imagined by George Orwell. While Internet Eyes may very well be an effective way of preventing vandalism and shoplifting, it also encourages private citizens to spy on their neighbors. Furthermore, if previous cases have shown us anything, it is that the scope and potential damage of public scrutiny can be far greater than that of a government investigation or surveillance. While Internet Eyes will in fact connect users only to random video feeds, it is impossible to eliminate the possibility that the service could reveal damaging private information (e.g., sexual orientation or political affiliation) to one’s loved ones, co-workers, or neighbors. How could an individual stage a protest knowing that his or her boss might not only disapprove, but be watching? As such, it is nearly impossible to deny that websites like Internet Eyes, and the general trend towards increased scrutiny in the Web 2.0 world, are having a chilling effect upon free speech and expression.

However, in the end the question is not straightforward at all. The same features that make constant surveillance and public scrutiny such an insidious problem in today’s world have proven to be an incredibly powerful and valuable tool in fighting oppression across the globe. Easy access to video cameras and the internet, and the rise of the generative Web 2.0, have ensured that, in the modern world, those who are oppressed find it increasingly easy to record and disseminate their messages. As such, in the end, we need to find some way to strike a balance between the benefits of a free and open internet and the dangers of the constant threat of surveillance.

How can we accomplish that? Quite frankly, I’m not sure, but this seems like a fantastic bet.

Cybersecurity Regulations on the U.S. Power Grid – by “Yingqi G”

A 60 Minutes report last week described the possibility that hackers were behind massive power outages in Brazil in 2005 and 2007. The report also describes how U.S. infrastructure is likewise vulnerable to hacking. As we’ve seen with incidents like the attacks on Georgia, hackers are now often politically motivated professionals serving national governments. This leads to cyber-warfare, in which a government employs hacking to cripple functionality or steal intelligence from other governments. And if one government does it, then every other government establishes its own cyber-units, and we have an arms race on the internet. The New York Times has a good article on the issue.

Instead of discussing cyber-warfare at a more general level, I’d like to talk about cybersecurity concerns in the U.S. electrical infrastructure. To understand the issue, we need to look at the key players in the industry, who they are, and how the government is trying to regulate them.

The U.S. power industry is made up of at least two levels of players: utilities and infrastructure providers. Utilities generally purchase, deploy, and maintain physical infrastructure bought from the infrastructure providers. In some cases, they also buy and sell energy from other power companies to meet local demand. Examples of such companies include Connecticut Light and Power and Southern California Edison. You pay these guys for your electricity.

The much bigger players are the infrastructure providers like General Electric, Siemens, and Areva. These corporations manufacture the power plants, control systems, and other infrastructure that utilities purchase and install. GE is U.S.-based, but Siemens and Areva are German and French respectively, both with sizable stakes in the U.S. market. This raises fairly obvious national security concerns about foreign companies manufacturing critical U.S. infrastructure.

Over the past several decades, these power infrastructure corporations have made their monitoring and control systems Internet enabled, so their software now runs over the same networks as the rest of Internet traffic. This is an advantage because it simplifies network infrastructure for supporting the power grid’s management systems, but it also makes the power grid vulnerable to traditional Internet based attacks.

Fortunately, the U.S. government and international agencies are aware of the power grid cybersecurity problem, and are actively legislating and setting standards for power grid security. In the US, we have the Federal Energy Regulatory Commission (FERC), an independent agency within the Department of Energy. FERC is requiring compliance with the North American Electric Reliability Corporation’s Critical Infrastructure Protection standard (NERC CIP) in 2010. NERC CIP details requirements and implementation plans for North American power utilities to harden themselves against physical, and more importantly, digital intrusion. Additionally, CIP requires mitigation plans in the event of actual attacks.

The National Institute of Standards and Technology also has draft Smart Grid cybersecurity standards, which address Smart Grids, an emerging technology power companies are introducing to increase the efficiency and robustness of the grid. Internationally, we have the International Council on Large Electrical Systems (CIGRE), which specifies reference architectures, including the next-generation Energy Management System Reference Architecture, which includes cybersecurity as a key component.

Going back to the national level, FERC is requiring compliance according to NERC’s schedule of compliance requirements, culminating in a deadline in 2010. NERC CIP includes not only security features, but also auditing, incident reporting, and incident response and mitigation plans in the event of a security breach. NERC specifies timelines for implementation and auditing, as well as non-compliance fines of up to $1 million per day. These deadlines put utility companies on a very tight schedule, and total compliance seems fairly unlikely for a significant number of them.
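To get a feel for why that schedule has teeth, consider the worst-case arithmetic implied by the penalty cap above. This is a back-of-the-envelope sketch, not an official formula; the helper name and the per-violation multiplier are illustrative assumptions.

```python
def max_cip_exposure(days_noncompliant, violations=1, daily_cap=1_000_000):
    """Worst-case NERC CIP exposure, assuming every day of
    non-compliance drew the full $1M-per-day cap for each violation."""
    return days_noncompliant * violations * daily_cap


# A utility that misses the 2010 deadline by a single 30-day
# audit cycle could, in the worst case, be looking at $30 million.
exposure = max_cip_exposure(30)
```

In practice, assessed penalties are negotiated well below the cap, but the ceiling alone is enough to make compliance budgets a board-level concern.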

Unfortunately, in the end NERC CIP is just a step forward, not the ultimate solution. Even if all utilities comply with NERC CIP requirements, we’re still left with the axiom that no piece of complex software is entirely secure. Acknowledging this, NERC CIP requires mitigation plans to control damage when an actual breach happens. NERC is also in the process of revising CIP, with the revisions set for publication in 2010 or 2011.

As the 60 Minutes video also reports, we already know that foreign agencies have penetrated the U.S. power grid’s networks and systems. In the event of a military action, there is almost no question that foreign governments will consider using these keys to damage U.S. electrical infrastructure. Considering how little the government has historically regulated the power industry with regard to cybersecurity, I’m glad that it is now locking down the power grid as part of its wider effort to secure critical infrastructure. In the end, NERC CIP is probably not enough, but it’s a good start.

Google unveils “unprecedented” privacy dashboard – by “Samuel D”

Much has been made of the dangers of trusting all your private data to Google. Not only does the search giant host your emails and contact lists, but also your entire search history, your blog posts, your documents, your YouTube videos, and even your phone records. In response to growing concerns about what it might do with all that data, Google released the Google Privacy Dashboard this week–claiming to be the “first Internet company” to offer such a product. The official blog post explains:

Over the past 11 years, Google has focused on building innovative products for our users. Today, with hundreds of millions of people using those products around the world, we are very aware of the trust that you have placed in us, and our responsibility to protect your privacy and data. Transparency, choice and control have become a key part of Google’s philosophy, and today, we’re happy to announce that we’re doing even more.

The Dashboard aims to give users greater transparency and control over their data. Users log in to their account and can view exactly what data Google hosts from over twenty products. For each product, the Dashboard provides direct links to the privacy settings for that service. Google concludes, “The scale and level of detail of the Dashboard is unprecedented, and we’re delighted to be the first Internet company to offer this — and we hope it will become the standard.”

Given Google’s grand proclamations about the groundbreaking Dashboard, response to the announcement has been subdued at best. Advocacy group Consumer Watchdog has been one of the most vocal opponents of Google’s privacy policies. The organization said the Dashboard was a step in the right direction, but wanted Google to give users “the ability to stop being tracked by the company and to delete information associated with their computer’s IP address from the Google servers.” One advocate added, “If Google really wanted to give users control over their privacy it would give consumers the ability to be anonymous from the company and its advertisers in crucial areas such as search data and online behavior.” The group suggested that Google add a “Make Me Anonymous” or “Don’t Track” button to each service listed in the Dashboard.

Outside of advocacy groups, response to the Dashboard was mostly negative. Tech blog Mashable wrote, “Sure, it’s nice to have all these in one place, should you ever want to review all your private information stored at Google at once, but there’s nothing really new about this list; you could even call it a privacy-related compilation. Unfortunately, it’s also an unpleasant reminder of just how much data you’re giving out to Google (and other online services).” Valleywag noted, “But, really, it just scares the crap out of you. Google knows all.”

The Dashboard clearly was not received as Google anticipated–at best, it is seen as a first step in the right direction. Will Google allow users to remain anonymous and prevent data from being attached to their IP addresses? Will it allow users to instantly delete all their data from Google’s servers? Would it allow the police to subpoena access to a user’s Dashboard? Only time will tell how Google will live up to its promise of “choice and control.”

– Google Dashboard explained on YouTube.

– Check out what Google knows about you here: www.google.com/dashboard

Positive Aspects of Information Brokers – by “Christian C”

Garindan - An Information Broker

In looking at the threat to privacy posed by information brokers indexing information and making it easily available, one might wonder whether there are any positive aspects to the business. Since Zabasearch, one such information broker, claims to be just a search engine that searches public records, one could first turn to Google to try to learn about the “positive aspects of information brokers.” Unfortunately, one would be disappointed: searching with quotes yields zero results (presumably more now that this post uses that phrase as a title), and without quotes, around one million. Looking through the first group of results, however, one finds no entries that actually appear to be about the business of selling other people’s personal information. There are pages about real estate brokers, about creating software to help search libraries, and even information about information brokers, but nothing about why they are a good thing.

One next might examine the website of such an information broker; again, one might choose Zabasearch. There one will find a nice frequently-asked-questions page discussing the fact that they are simply a search engine without the ability to adjust the information in their index, as it is all from public records (which ignores the fact that, for a fee, they offer the expedited service of blocking records, and if a record can be blocked it could equally be modified). There is also a link to findlaw.com, where the site claims to have been reviewed; the linked piece examines the regrettable fact that Zabasearch appears to be legal (given the character of the link, the editors of findlaw.com found it necessary to make clear that they are not associated with Zabasearch, nor do they endorse it in any way).

At this point the immediately obvious places to look for the positive aspects of these businesses have been exhausted, so one must attempt to discover them for oneself. If one leaves aside free-market arguments that all information should be as easy to find as possible in order to let the market function, and assumes that privacy is generally something that should be protected, then there should still be some benefit to these services. The most obvious benefit is that by making public records easily searchable, the public is made aware of the vast quantities of information available as part of the public record that one might not necessarily want to be publicly available, or at least might want restrictions placed on aggregating. However, this is not generally a good reason, since the information brokers are themselves a substantial cause of the problem. Another justification is that by making information easier and cheaper to find, those who cannot afford the services of private investigators, or the time to search records themselves, are able to access the information to pursue things like child support payments, which the public sector does not necessarily have the time and resources to pursue. The counterpoint is that this also makes it easier for less sophisticated stalkers or abusive former spouses to track down others.

Overall it seems that there are few positive aspects to the existence of information brokers.

Privacy? There’s an app for that – by “Stephen D”

As evidenced by the recent series of Apple commercials stating that “There’s an app for that,” the market for applications is skyrocketing. Recently, the iPhone App Store approved its 100,000th application. While these applications may increase the functionality of these services, they also create a huge increase in breaches of privacy. Recently, the iPhone game developer Storm8, which is responsible for the two popular games Vampires Live and iMobsters, was sued for allegedly collecting the phone numbers of iPhone gamers without their consent. The lawsuit states that the games included “malicious software code” that transmits the phone number of anyone who plays the game back to Storm8. The company claims that this was a “bug.” The most concerning part of this incident is not that phone numbers were stolen (though that is a valid concern), but that Storm8’s developers were able to get “malicious software code” past Apple’s App Store approval process. While phone numbers are relatively benign, it’s possible that a more devious software designer could get more malicious code past this approval process; for example, many customers store email addresses and passwords on their iPhones. A hacker with access to these, through a corrupt app, could easily break into someone’s email account and steal their identity. Similarly to the iPhone, Facebook also has a sizeable number of applications, and these applications also create gaps in privacy. For instance, many of Facebook’s online quizzes are popular. However, in order to take these quizzes, users must allow the applications to access their profiles, which contain lots of personal information. Additionally, some of these online quizzes try to lure users into giving away valuable data. One online quiz asked not only for a person’s middle name, but also for their mother’s maiden name. All of these new apps, on both Facebook and the iPhone, come at the cost of security.

So who is responsible for these apps? Since Facebook and Apple both closely regulate their approval processes, it seems obvious that they should be responsible. However, that does not necessarily mean that they should be held liable. Facebook has the stronger legal stance in this respect: the first time a customer wants to use any Facebook application, they have to agree to a privacy policy, which states that the application is authorized to access all their data. I would recommend that Apple adopt a comparable agreement for downloading apps, in order to put the responsibility on the developers. In any case, Apple and Facebook should keep a close eye on this issue to avoid bad publicity.

Researching online subjects: a few conflicts – by “Chuen-Yee C”

As the use of web 2.0 tech continues to grow, researchers and developers who decide to study the effect of MMOGs, forums, and social networking spaces must face a new strain of ethical dilemma. These new environments are ripe for research on human interactions, social structures, and the nature of fandoms and group generativity; but do the usual rules of research ethics apply to non-traditional spaces?

Rules of ethics exist to protect human subjects from harm. People online are just as human, but the ways in which research is conducted and informed consent obtained remain a somewhat undefined area. Informed consent is required for research, as is the assurance that the researcher will not let subjects come to harm or cause harm. In an online setting, if a researcher is going to be interacting with or studying people, those people have the right to be told, to be given the chance to opt out, and to be assured that they will not meet with any harm. But how do you make sure you don’t hurt someone when you’ll probably never know if it happens? Interacting online means that you may not know how something really affects a person. People get depressed and can commit suicide because of things that happen to them online or in virtual spaces; real psychological harm can be inflicted.

The experimental model for research doesn’t exactly work when studying existing spaces; researchers are more likely to take a field research approach. But what happens when someone in the “field” doesn’t want to participate, or when the “field” is perceived to belong to the users who were there before the researcher came along? This obviously causes problems for the researcher’s desire to study the space, and there’s no clear answer as to what form of reconciliation should take place.

The anonymity that comes with being a research subject should not be conflated with the default anonymity that most people assume online. In real life, one glance at a subject, or at least some related background information (untied to the subject’s name), can reveal whether they are part of a “vulnerable population”—children, the disabled, the mentally unstable, and so on. Online, however, it is hard to determine who falls into one of these categories and who doesn’t. To borrow a concept from Rawls, in dealing with the online populace we have to assume a stance behind a “veil of ignorance” and afford everyone the protection given to “vulnerable populations.” In assuming that everybody is vulnerable, we can avoid ethical liability.

Confidentiality is another issue. Most of the time, nobody knows anything about anybody beyond what they choose to reveal, but as demonstrated in the Scalia situation, publicly available information can be readily compiled and trends inferred. When dealing with social media, there may be a great deal of personal information within the researcher’s grasp. Screen names are just as much a part of people’s identities as anything else, and can leave a (somewhat incriminating) paper trail if researched thoroughly enough. Gamer tags and forum screen names may go back for years; personally speaking, I’ve carried the same screen name for over ten years.

And what about the researchers themselves? There are myriad perceptions of the Internet, social networking sites, forums, online games, and the people who use or play them. If researchers can’t approach their subjects or subject matter with the unbiased position we expect in lab studies, they probably shouldn’t be studying them. Conflicts of interest may also emerge, say, if a researcher’s relative is a marketer at a firm that uses social networking apps to market its products.

Online spaces are used more and more for social, political, recreational, and economic purposes, and have great potential to reveal a lot about ourselves as individuals and about the larger social picture. However, research on online spaces must be carefully conducted, or else we may just end up perpetuating previous attitudes or gathering inaccurate data. The common definition of an experiment doesn’t work well in online spaces; because the space is different, new ethical guidelines need to be laid out.

Internet Passport? – by “Shirley B”

In an interview last month with a reporter from ZDNet Asia, a technology website offering product reviews, news, and opinion, Eugene Kaspersky, CEO of the top computer security company Kaspersky Lab, said that he thought the greatest problem facing not only the internet but computers in general was anonymity on the web. His frustration with anonymity stems from what he sees as a basic design flaw in the Internet. His point is that the Internet was designed for a small, select, easily identifiable group, i.e., the Department of Defense, and that when it was released to the general public its intended audience changed but its design did not. He does not propose that we limit the use of the Internet to a select few scientists once again, but that we instead take the chance to redesign the Internet. His first idea is a sort of Internet police force – an online Interpol. His second is the idea of driver’s licenses for the World Wide Web. Kaspersky said he felt that access to the Internet should be limited in the same way that we limit and control access to airplanes or driving – we should require something similar to driver’s licenses or passports to access the Internet. ISPs, he says, are not enough. He likens them to license plates: the plates are but one part of identification; to positively identify someone in the car, they need to have their license. Besides the obvious practical problems with this proposal, there are a few philosophical problems I have with both his arguments and with his idea of ending anonymity on the Internet.
Simply because anonymity was not an original aspect of the design of the Internet does not mean that it is somehow bad for the Internet; this is simply a flawed argument. Why is anonymity a negative aspect of the Internet simply because it’s being used in a way it was not originally designed for? Take Ikea furniture, for example. There is quite the following for a site called Ikea Hacker, a blog that posts people’s various re-uses of their Ikea furniture, and instructions on how you can do the same. (This is, of course, not the strongest analogy in the world – I mostly just think that this is a great site – but you get my point: his conclusion simply doesn’t follow from what he says about the original state of the Internet.)
Of course his greater point is that anonymity and the popular voice somehow hurt the Internet. However, in my opinion, the benefits of anonymity far outweigh the negatives. Even if, in the United States, we don’t legally curb free speech, socially we certainly do, and some people even self-censor and would be far more hesitant to speak if anonymity were not an option. As we read last week in “The coming-out stories of anonymous bloggers:”

“There are things that you know, or that you feel sort of in your heart of hearts, that you might not want to put out there in a public way” with your name attached, she said. “If people always spoke without filters, we’d learn a lot more.”

Her point is not that it makes people happy to be able to anonymously post mean things about others online, or that it’s useful for criminals to be anonymous. It’s that it is socially and politically useful to have anonymity. We need the kind of fearless honesty that is possible in most cases only with anonymity. The amount of information, both facts and opinions, available to us would drop dramatically if people could no longer be guaranteed a safeguard from general public opinion.
The idea of having “driver’s licenses” for the Internet is also a wholly invasive and paranoid notion. The implication behind this suggestion is that not only could you not have pseudonyms, so to speak, but you would also have to plainly identify yourself while on the Internet, or be forbidden from using it. The ease with which Kaspersky’s new Interpol could identify you presents not only a problem for anonymity but also an invasion of privacy. While the argument can be made that increased surveillance helps catch criminals, I would counter that it also catches innocent people and makes them feel like criminals for simply doing something controversial or perhaps personally embarrassing; there is the law, and then there are social mores. Many argue that if you’re acting completely within the law you should have no problem letting the police check you out. But while I think that some preemptive measures against crime are justified, I would argue that if I’m acting within my legal limits, as it should be assumed that I am, the police should have no need to check me out, even at a cursory level.
Further, perhaps it isn’t even so bad that criminals have this anonymity. After all, it seems to me that Mr. Kaspersky might be out of a job without all of these anonymous criminals emailing viruses around.

We Are Anonymous, We Are Legion – by “Scott S”

“We are Anonymous.  We are Legion.  We do not forgive.  We do not forget.  Expect us.”

No longer just an adjective in the English language, “Anonymous” now also refers to the countless members of Internet subculture who choose to protect their identities from the world. Just about everyone has felt the effect of Anonymous’ presence, online and off. From the net’s ubiquitous lolcats that can has cheezburger to Rick Astley at the Macy’s Thanksgiving Day Parade, the mass of nameless, faceless computer users known collectively as Anonymous is responsible for these social phenomena. More interestingly, Anonymous has also been responsible for raids, invasions, activism, protests, and public defamation, via both legal and illegal means, with both good and malevolent intentions.

The idea of remaining anonymous on the Internet has been a topic of debate for nearly two decades, but in recent years, imageboards like 4chan and Futaba Channel, alongside wikis and forums like Encyclopedia Dramatica, have proven themselves breeding grounds for users with a dangerous sense of unity and a willingness to make a difference in the world around them. Often, Anonymous’ goal is to cause as much mischief as possible without causing real harm. Cases include mass [vulgar] posting on sites such as the anti-profanity “No Cussing Club” and inciting fear on Oprah’s talk show. For the most part, these practical jokes are not much different from your 8th-grade friends prank-calling the teacher, just on a much larger scale.

But there is great power in numbers. Under the blanket of modern technology, it seems that members of Anonymous consider themselves masked anti-heroes. There are countless examples of how Anonymous has pulled together to cause real harm. For one, Anonymous was credited with an attack on Vice-Presidential candidate Sarah Palin’s Yahoo email account last year. After discovering the password, a user posted it, along with screenshots of personal photos and emails, to 4chan. Since then, they have been posted and re-posted to the point that they are publicly available online and can be found with a simple Google image search. The culprit, David Kernell, a.k.a. Rubico, was easily discovered because of several mistakes he made when bragging to posters on 4chan.

Luckily, Anonymous has shown that power in numbers does not necessarily mean a digital Ku Klux Klan. In addition to terrorizing Sarah Palin, Anonymous has done some good in bringing the common man awareness of topics the media might otherwise gloss over. Project Chanology, led by members of Anonymous, is a group that set out to spread knowledge of the dangers of the Church of Scientology, a religion formed in the US in 1954. After an interview with Tom Cruise, an avid Scientologist, leaked onto YouTube in 2008, the Church attempted to remove it, claiming it was unfairly edited to misrepresent the religion and was intended only for current members of the Church. Once 4chan discovered the video, it was immediately reposted, and it remains available to this day. Subsequently, Anonymous began to band together in public protest of Scientology and was called to arms via a series of videos. Perhaps most interesting about these videos is that they alert the public that Anonymous is not an elite group of hackers, but rather everyday men and women who choose to take action through the force of a faceless army.

Unfortunately, some members of Anonymous decided to take it upon themselves to take criminal action in the form of denial-of-service attacks on Scientology-owned sites.  After the case was investigated and some identities were uncovered in October of 2008, Dmitriy Guzner, 18, of Verona, New Jersey pled guilty to computer hacking for his role in the attacks, which took place during January of the same year.  According to the information filed in United States District Court in Los Angeles, Guzner participated because he “considered himself a member of an underground group called ‘Anonymous.’”

I personally feel that Anonymous was coming one way or another.  Although the intentions of your average 4chan user might be less than ideal, change comes about when groups of people get passionate about something.  If lolcats are what thousands and thousands of users are passionate about, I can’t complain.  If protests and public awareness are what these people are all about, even better.  Even if it seems a waste to see such masses of people passionate about pranking the rest of the world, it is good to know that there are ways for people out there to make a difference because others agree with their ideals.  Although not necessarily attributable to 4chan and Anonymous, the recent Iran election protests were a largely Anon-style digital protest.  When I heard of friends doing their part to wreck Ahmadinejad’s web infrastructure, I was interested to say the least.  If something vile like a rigged election occurs in the United States, it’s basically guaranteed that Anon will be on top of it and that they (we?) have the power to do something about it.  That kind of power and freedom is exhilarating.

Since joining this course, I have done a fair bit of lurking around Anonymous safe havens.  Most of what I see is creepily dark humor mixed with a lot of pornography, profanity, and an occasional glint of useful, interesting, or heartwarming info.  While doing a little bit of research for this post, I found a rather amazing forum thread regarding Anon.  For some people, Anonymous is not just an adjective or even a mass noun, but rather a way of life.  With anonymity becoming easier and easier, it’s no wonder that people all around the world are taking part in this underground revolution.  Expect us.

Kyllo: Do guilty people have too many rights? – by “Sabrina A”

In class we talk a lot about safeguarding rights and protecting individuals from the government, but sometimes, these rights go too far. As (generally) liberal college students, it’s easy to talk about principles, but at some point, the pragmatic applications of these principles have to kick in, and if some rights just protect the guilty instead of the innocent, perhaps they shouldn’t be rights at all. Specifically in the case of search and seizure, we want to make sure that the government cannot unnecessarily invade people’s lives. But honestly, it’s probably okay for the government to invade your life if, in most cases, doing so protects the community’s right to safety. I know it’s very PATRIOT Act, but I don’t think this issue has to polarize down party lines.

Also, we act like we want to make sure that no innocent individual is screwed over in the quest to catch the guilty. I’m not sure that our justice system really works that way, though. We do value efficiency and practicality, even if accuracy in outcomes decreases. For example, we have plea bargains, and we all know that some innocent people do plead guilty. I think we’re OK with this because in most cases, the outcome is correct, and plea bargains serve justice by freeing up the system to prosecute the most important cases. If we didn’t have plea bargains, there would simply be too many cases, and pragmatically, the system would not function.

Basically, I think practicality and efficiency matter more than people give them credit for, and for those reasons, I think the decision in Kyllo v. US was incorrect. In Kyllo, the Supreme Court ruled that the use of a thermal imaging device placed at a public vantage point, used to catch a man who was growing a lot of marijuana in his home, was unconstitutional. There are several reasons why this seems like a bad decision.

First, the search method is narrowly tailored. The thermal imaging device didn’t pick up lots of scandalous details about the man’s life or anything that an innocent person would want to hide. It simply detected enormous amounts of heat. There really aren’t many uses of large amounts of heat that aren’t suspicious. It sounds like the type of device used was advanced enough that it wouldn’t catch cases like the man taking a hot bath – it was specific to uses of so much heat that a policeman should rightfully be suspicious. And the Court ruled that the use of the device itself wasn’t OK, so it seems the Court didn’t just have a problem with the lack of a warrant. It objected to the use of the device to identify the pot farm, the initial step – even if the police should have gotten a warrant before intervening, the device should have still been permissible.

And more importantly, we can envision a circumstance where technology advances so much that a device could practically be a litmus test of guilt or innocence with minimal physical intrusion: i.e., a device that could detect only whether large amounts of marijuana were being grown in a house. The Court seems to have held that even in these types of cases, people deserve protections; however, if we had a magical machine that would only catch the guilty, we’d probably be OK with it, since it doesn’t take any secrets from the innocent.

Using this device doesn’t hurt innocent people – it doesn’t take secrets from them and doesn’t impose on them. It also doesn’t create a culture of fear and suspicion, which was what the 4th amendment was supposed to avoid. This is an unobtrusive device with a specific use. And we force people to go through metal detectors (we force kids to go to school and jurors, witnesses, and defendants to go to court, and we often put metal detectors in those places) without thinking twice. Metal detectors are external technology that catches the guilty without harming innocents. No warrant needed.

Also, the heat was leaked outside of the home. Cases that are analogous in my mind: 1. A cop happens to be walking by a house and smells opium. He should be allowed to intervene. 2. A cop is walking by a home and hears a gunshot, or the sound of someone being assaulted. He should be allowed to intervene. In this case, the cops used technology to enhance their sensory capacities. But why should cops have to hinder themselves? Should we not use technology because the Constitution leaves scientific evolution as a gray area in terms of rights protections? Ultimately, the Constitution is only a broad set of principles, and our understanding of it should evolve with technological progress. Also, Kyllo should have had a reasonable expectation of that type of technology: it doesn’t take much to detect that much heat. I know it’s hard to draw a line ascertaining what type of technology people should “expect,” but I think someone growing a pot farm in his house could expect the heat to be detectable in some way.

I guess one could say that we have to be careful because the police might use this technology against an innocent person. Well…so what? I don’t care if the police measure the heat radiating from my home. We can say that innocent people deserve protections, but this manifestation of the 4th amendment does not help innocents. And the police have no incentive to misuse or overuse the technology – they’re rewarded for catching guilty people, so they have the incentive to go after the people who have actually done something wrong.

Ultimately, it’s cases like Kyllo that make it seem like the guilty have too many protections in our society. Our system is about fairness, but it’s also about catching guilty people, and we have to look at pragmatic outcomes, not just principles, to maintain a balance of rights that best advances our conception of fairness.

Search, Seizure, and Email – by “Benjamin S”


Just last week, District Judge Mosman in Oregon ruled that, according to the language of the SCA, the government has no requirement to notify an email subscriber when their email is searched with a warrant.  The story was broken by the Volokh Conspiracy blog, though after some review it was realized that Judge Mosman’s ruling is not nearly as threatening as at first presumed.  Judge Mosman’s interpretation of the SCA does not obviate the need for a warrant entirely; it simply clarifies that the warrant must be served to the email provider, rather than the subscriber whose material is being searched.  Judge Mosman’s opinion is here.  At the bottom of page 13, Judge Mosman makes a compelling analogy to explain his ruling:

If a suspect leaves private documents at his mother’s house and the police obtain a warrant to search his mother’s house, they need only provide a copy of the warrant and a receipt to the mother, even though she is not the “owner” of the documents.

Now, Judge Mosman is not acting in a particularly high court, nor does he have an inordinate amount of sway in the legal world; one could convincingly argue that the scholars of the Volokh Conspiracy are more influential than he is, and Orin Kerr, writing the blog post, disagrees with him.  His ruling is nonetheless troubling.

I also disagree with Judge Mosman.  I think the essential question (as in many cases covered in the reading) is where one can maintain a reasonable expectation of privacy.  When you’ve left important documents at your mother’s house, you have, intentionally or not, left private documents in a location owned and controlled by another person.  A commenter on the blog named Gabriel McCall makes a counter-analogy: “In that case, the bank safe-deposit box analogy comes fully into play. Would anyone argue that warrants and notice for the bank but not the customer would be sufficient to search a box?”  A safe-deposit box is a space in the bank’s keeping, but you maintain control over it; were the bank to access it, they would need to obtain your key or a copy thereof, just as an ISP would need to obtain your password or bypass its protection.

When I think of my email inbox, I think of it as a space belonging to me; it may be stored on someone else’s server, but the distinction is incidental.  The safe-deposit box analogy is an apt one.  Rather than your email being a space on someone else’s system, wholly within their jurisdiction, one can think of it as being more akin to a rented apartment.  You exercise control over the space; in the normal course of living, you and you alone undertake operations in it.  And it is an intensely private portion of your life, where you store, possibly permanently, private correspondence that you would never have released to others (have you ever written a nasty email about someone?).  I maintain that any modern individual would in fact maintain a reasonable expectation of privacy in their email inbox, and as such, the government has an obligation to provide notice to a subscriber upon a warranted search.

To quote Justice Scalia in the Kyllo opinion, “With few exceptions, the question whether a warrantless search of a home is reasonable and hence constitutional must be answered no.”  The principle of skepticism towards warrantless searches is necessary and fundamental to our view of liberty.  The notice doctrine is a small portion of the legal language surrounding unreasonable search and seizure, but a crucial one; at the very least, any citizen is surely entitled to know when government agents have violated their privacy.  And so I think I have to disagree with Judge Mosman on this one; I hope he gets overturned on appeal.