Privacy Matters; will it always? – by “Ben L”

In recent history, we have seen a plethora of companies arise based on aggregating and selling personal information: Spokeo, ChoicePoint, Intelius, ZabaSearch, and Acxiom are just a few. Spokeo, the most recent of them, provides the most information for free, and the cheapest prices if you do decide to pay. The concern is that since all of these sites draw on essentially the same underlying information, there is no way for a user to prevent its dissemination. This has led to a number of calls for congressional restriction. A good start might be to extend the Fair Credit Reporting Act to cover other kinds of data collection and sale.

In the meantime, what does this mean for society? Are we going to undergo a privacy-based cultural revolution? I do not think this will happen anytime soon. Currently, the information available on these websites is horribly inaccurate. Generally, you can only tell whether the person you’ve found is correct based on name and address, and many people are not searchable at all. Once you have found the correct person, the further information is generally not helpful. Spokeo describes my father, the only family member who shows up, as having several interests and lifestyle facts: he “has children” and “enjoys entertainment.” Now I wonder who doesn’t enjoy entertainment. The rest, while inaccurate, do reveal the potential for extensive information: the only reason I can think of for them to suspect my Dad enjoys home decorating and home improvement is the time we spent remodeling, which was thoroughly not enjoyed by anyone at all. Does that mean that Spokeo has some way of knowing what we are buying? It is not getting Dad’s interests from linked Facebook pages, though I don’t doubt such websites will soon be mining those, so where is it coming from? Spokeo hasn’t disclosed its sources, so it will be interesting to find out. Spokeo also claims my Dad is not interested in politics, when in actuality he votes in and follows every election. He does not run a home business, as advertised. There are also personality descriptions like “self-driven,” which, without knowing the sources, and given the general inaccuracy, seem dubious.

I haven’t found any sites or testimonials claiming these aggregators are particularly useful or accurate. Given that, it seems hard to believe they represent a real disruption. Even if they are somewhat right, what good is that to a stranger, who cannot tell which facts fall in the right category and which in the wrong one? It seems to me that if privacy deteriorates for the majority, it will have nothing to do with people invading it, but rather with information being freely disclosed, or allowed to be easily accessed. The current generation is growing more accustomed to sharing everything with everyone. This class in general knows more about Facebook and internet privacy than all but a few handfuls of people. Yet how many of us keep Facebook? More than that, how many of us keep “liking” things and forming public “connections” that describe ourselves and our tastes? If everyone is going to keep doing this, then one of two things will happen: people will either get used to presenting their “public face” on the internet, or we will learn not to care so much whether another’s interests (supposed, self-disclosed, reported, whatever) disagree with our own. The inaccuracy of current databases will contribute to a distrust and dismissal of information found online. When it does get more accurate, we will be so used to not caring that we won’t start.

There have been a number of criminal investigations based on wrong information, and law enforcement is where inaccuracy is especially dangerous. Governments should not be trusting these sites to do their investigations for them; rather than pass laws mandating higher accountability, the solution is for criminal investigations to be investigatory rather than a matter of buying the information. It is, and should be, the responsibility of the police to find and apprehend the correct person. If we do pass laws mandating that information databases be more transparent and correct, this will hasten the future in which we are more tolerant. But there is no sign of this happening amidst the current crises.

Lastly, I leave you with an illustrative graphic of the progression in disclosure. How long before other websites share information like Facebook does, or are all connected, or these stop being the default settings and become the only ones? Will we really give up our social networks?

Playing Clue 2.0: “It was Olivia in the Lab with a Candlestick” becomes “You’re 78% sure that guy in your English class is gay.” – by “Evin M”

If you understand this commercial, you know how to play the game, too.

In the age of Facebook, myriad popular sites offer user-friendly experiences online to willing participants of all demographics. The moderately tech-literate have become habituated to handing over personal information about themselves in exchange for access to internet services. That expository act is rarely mulled over by users, often because the alternative would shut them out from activities which are increasingly becoming socially requisite. Social networking platforms add social value to displaying personal information publicly, thereby making it widely available to other users, the platform itself, and any privy third parties. For the most part, these high-profile identification games have stayed in the academic and commercial arenas, fueling research and product advertising. In February of 2009, however, 4chan users demonstrated similar gumshoe prowess with a more benevolent application: 4channers, outraged by a video of a child abusing his cat, identified him and contacted local authorities. The suspect was arrested, the feline rescued, and the high-stakes game of Clue formerly played by big business and universities yielded unforeseen consequences. From these examples, it is easy to extrapolate the more sinister and invasive uses of data mining and brokering – an emerging frontier novel and amorphous enough to evade regulation to date.

Recent data mining experiments have further exposed the prevalence of persistent identity, a consequence of the public’s robustly developed online personalities, which has tied real-world individuals to their internet personas more closely than ever before. These projects sift through nameless banks of detailed information about users’ internet behavior, using their habits to reverse engineer their real-life counterparts. At MIT, a pair of students Facebook-stalked over 4,000 profiles, analyzing details like friend circles and tracking identifying traits until they were able to predict whether a man was gay from the information displayed on his page. Their final result – 78% accuracy – is an undeniable indicator of the trail of breadcrumbs that leads from our online identities to our real ones. A similar project allowed Carnegie Mellon researchers to dig up people’s place and date of birth and use that information to uncover their Social Security numbers.
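The MIT students’ actual statistical method isn’t described here, but the intuition behind this kind of inference, homophily (guessing an undisclosed trait from what a user’s friends disclose), can be sketched in a few lines. Everything below (the names, the traits, the majority-vote rule) is an illustrative assumption, not the study’s real model:

```python
from collections import Counter

def predict_trait(person, friends_traits):
    """Guess an undisclosed trait of `person` by majority vote over
    the traits their friends publicly disclose (homophily)."""
    disclosed = [t for t in friends_traits.values() if t is not None]
    if not disclosed:
        return None  # no friends disclose anything: nothing to infer
    # The trait most common among friends becomes the prediction.
    return Counter(disclosed).most_common(1)[0][0]

# A user who discloses nothing, but whose friends do:
friends = {"al": "A", "bo": "A", "cy": "B", "di": "A", "ed": None}
print(predict_trait("target", friends))  # prints A
```

The unsettling part is exactly what the 78% figure suggests: the target never has to disclose anything themselves for the guess to be fairly reliable.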

Social networking tools have become the most powerful and comprehensive information aggregators ever, encouraging users to submit and disseminate every intimate detail of their lives. Supreme Court Justice Antonin Scalia was taken aback by the extent to which data collection could produce comprehensive infocaches, a gasping response that I found amusing. Dedicated online services, like Netflix, are able to draw accurate, specific conclusions about their users from the information that those users inevitably share as part of using the service. Federal discomfort and private litigation brought Netflix’s crowdsourced research into predicting its users’ movie tastes to a screeching halt. The F.T.C. and Congress are squirming in their seats, making statements about how third parties, like advertisers, have access to far too much information about internet users’ habits. I find their surprised tone humorous – how did they not see this coming? Their shock has produced more jaw-dropping than concrete action, illustrated by flimsy mitigation suggestions like a “do not track” list similar to the “do not call” list. Wise up, guys. If you share information on the internet, odds are that you’re sharing it with the world. The architecture of the internet won’t allow anything less. More importantly, the users of the internet won’t allow anything less. We just need to get a little better at playing this new Clue, and leave fewer identifying footprints in our digital wake if we don’t like the implications of persistent identity.

Stealth Mode is Stupid: Why Your Ideas Don’t Matter – by “Michael W”

It’s a longstanding cliche in the world of tech start-ups. “I’d love to chat about my company, but we’re in stealth mode.” The concern is that sharing the idea is more dangerous than not sharing it. In my experience I have found the exact opposite to be true. Stealth mode is stupid for at least three reasons: 1) ideas are overrated, 2) execution is infinitely more important, and 3) freely sharing ideas can aid in their execution. This is an essential lesson for tech start-ups, but its implications reach far beyond Silicon Valley.

Ideas are Overrated

To start with, ideas are painfully overvalued, both anecdotally, by aspiring entrepreneurs, and formally, by our legal system. Right now thousands of people are contemplating the same next big idea. But what separates these faceless masses from the one that will emerge as the next Google? In a word: execution. Ideas are everywhere, but great implementation is rare. New entrepreneurs, who have not yet gone through the most critical stage of a young company — its execution — are prone to undervaluing its importance.

The US patent system, meanwhile, similarly overvalues ideas. It protects inventions that are both “novel” and “non-obvious,” but realistically, in the digital age, how long do new ideas remain “non-obvious”? In the Twitter age, ideas spread nearly instantly. And because of our abundant access to information, the process of trends converging to form new ideas plays out in plain view for almost anyone to see. Furthermore, the ideas that underlie the most successful tech companies of the past decade — Google, YouTube, and Facebook — were neither novel nor non-obvious when they made their marks.

The Story of Facebook

Facebook, in particular, provides an excellent case study. The idea of social networking first emerged in the late 90’s: LiveJournal started in 1999, Friendster in 2002, and Tribe.net in 2003. Mark Zuckerberg didn’t launch Facebook until the spring of 2004. At that point it should have been unthinkable to label social networking a new idea. But labeled it was. In fact, two separate groups claimed that Zuckerberg had stolen the idea from them. Facebook had to settle one of the cases out of court (due to pressures stemming from contract law and public relations, not any valid IP concerns), but the very occurrence of the lawsuit — that someone could even think the idea of social networking was somehow novel or non-obvious in 2003 — underscores our societal misunderstanding of ideas.

Why, then, did Facebook garner 400 million users even though it wasn’t a new idea? Because of its execution. It was part luck, part skill, but regardless, it was the actualization of Facebook, not the idea of a social network (or even the idea of a college-centric social network), that created so much value. The same goes for every success story. Search was old news by the time Google entered the scene in 1997, but Google implemented it much, much better than the competition. Hundreds of streaming video sites were sprouting up in 2004, but YouTube executed the idea better than anyone else.

And why were so many people working on these ideas in the first place? Because there were highly visible trends that were converging to create obvious new opportunities: the growth of the internet made search a necessity; increasing broadband penetration made internet video feasible; and in the wake of the success of the blogosphere, social media was emerging as the next major frontier on the web.

“Ideas are Just a Multiplier of Execution”

As Derek Sivers, the founder of CD Baby, put it, “ideas are just a multiplier of execution.” He explains that varying degrees of execution are worth roughly between $1 and $10,000,000, while ideas are only worth between -1 and 20. Therefore, a weak idea with flawless execution can be worth $10,000,000, but the best idea in the world with poor execution is worth just $20. These numbers are obviously metaphorical proxies, but the concept is spot-on. And Sivers of all people would know: he took a relatively boring idea (selling independently produced CDs on the Internet) and turned it into a $20 million company.
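Because Sivers’ metaphor is literally a multiplication, his two headline cases are easy to sanity-check (the numbers are his illustrative proxies, not real valuations):

```python
def venture_value(idea_score, execution_value):
    """Sivers' metaphor: a venture is worth idea x execution.
    idea_score runs roughly from -1 (awful) to 20 (brilliant);
    execution_value from $1 (none) to $10,000,000 (flawless)."""
    return idea_score * execution_value

print(venture_value(1, 10_000_000))  # weak idea, flawless execution: 10000000
print(venture_value(20, 1))          # brilliant idea, no execution: 20
```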

If stealth mode were merely unhelpful, that would be one thing, but it is actively harmful to new ventures. The people who appear most threatening in the stealth-mode worldview — industry peers, talented coders, angel investors, etc. — are actually the people who could provide the most help. By closing themselves off from these potential resources, stealth-mode companies become their own worst enemies.

What about Apple?

One common retort to this critique of stealth mode is, “what about Apple?” This of course refers to the fact that Apple, Inc., the fifth-largest company in the US, uses intense secrecy as part of its unquestionably successful product development and marketing efforts. The short answer is: you’re not Apple. Apple is a thirty-five-year-old company with hundreds of retail locations, tens of thousands of employees, and tens of billions of dollars in the bank. Its sophisticated use of secrecy has no bearing whatsoever on a small start-up. [Note: this isn’t to suggest that Apple has a healthy attitude towards intellectual property, because I don’t think it does, but that is for a different blog post.]

Fear of Sharing: Broader Implications

The concept that overprotecting ideas can actively hurt companies applies to all firms, not just start-ups. Media conglomerates, for instance, closely guard their content because, like rookie entrepreneurs, they think not sharing it is less dangerous than sharing it. But they’re wrong.

This mistake is perhaps best illustrated by the band OK Go, whose lead singer wrote a scathing op-ed in the NY Times this past weekend chronicling his band’s tumultuous experience with a major record label. OK Go was signed by EMI in 2000. They floundered for years, until in 2005 the band used its own funds to make a low-budget music video — without the aid or the permission of its label — that went on to become a YouTube sensation. The label, though, viewed the video as illegal, despite the fact that it singlehandedly propelled the band to international stardom, resulted in millions of legally sold records (most of the profits from which went to the label), and even earned the band a Grammy. Recently EMI disabled embedding on the video so that it can no longer be shared across the Internet, even though being shared in the first place is precisely what proved such a boon for both the band and the label. In doing so, EMI is preventing the next OK Go from ever emerging. Consumers lose, bands lose, and EMI loses. Why is it doing this? It’s really unclear.

Conclusion

Whether you’re a lone hacker or a Fortune 500 media company: your ideas don’t really matter. So stop trying to protect them, and start trying to implement them better.

Insurance hikes, privacy risks, for social media users – by “Jacob A”

The Huffington Post recently reported on a prediction made by the website confused.com, which helps insurance buyers navigate and compare rates, about a probable rise in insurance premiums for social media users. Why would social media users see home insurance hikes? Because the status updates and other information they furnish on services such as Twitter or Foursquare alert burglars to when they’re home, and when they’re not.

http://www.csmonitor.com/var/ezflow_site/storage/images/media/images/0217_pleaserobme/7409270-1-eng-US/0217_PleaseRobMe_full_380.jpg

Tweeting something as benign as “great tilapia tacos @ Drew’s Taco Shack” is potentially unsafe, as it alerts burglars to the fact that whoever is currently eating a taco with Drew is also not home. A new website, pleaserobme.com, hopes to increase awareness about the dangers of publicly providing too much information, so it collects tweets and Facebook status updates and displays them for anyone to see.

Pleaserobme.com means to make people realize the dangers of constantly broadcasting their location, but it does so by letting robbers know when you’re not home. That is an admittedly pretty funny way of getting thoughtless social media users to think twice before tweeting “I’m at Cali Yogurt,” but it is also a lawsuit waiting to happen.

That said, it really is easy enough to find out where a sizeable chunk of the population lives by using Google’s phone-number lookup on a number or address. Consider also the enormous amounts of information provided by Google Earth or Street View, and the extent to which digital technologies empower house robbers (or identity thieves or other poorly intentioned individuals) becomes abundantly clear.

But even granting that “criminals are becoming increasingly sophisticated in their information gathering… to plan their burglaries with military precision,” as Darren Black, the head of home insurance at confused.com, has pointed out, does this justify insurance hikes? What standards of burglar-sensitive stupidity (e.g. “Oh no! In Mexico for three days & think I forgot to turn the heat off at home!”) will insurance providers use? How does one gauge burglary (or other) risks from a tweet or Facebook status update? And isn’t the very purpose and function of social media to disseminate opinions, constantly updated personal information, and microstatements about daily life and whereabouts? Insurance hikes might make sense if there is an actual increase in risk because of social media use, but they also go against the nature of these services. The Huffington Post article refers to a news clipping from 1983 warning telephone users about the dangers of answering machines: “If you have an answering machine that tells callers you are not at home it could alert potential burglars, advises Family Circle magazine.”

http://images.huffingtonpost.com/gen/143131/PHONE-BURGLARS.jpg

Isn’t the fear over the disclosure of too much information via Twitter et al. unsubstantiated, given that a great portion of tweets are sent via mobile (and hence away from home) anyway? Wouldn’t home insurance hikes for social media users be just as silly as hikes applied to phone owners who didn’t change their voicemail?

But the kinds of information we propagate online through Foursquare and Facebook and Twitter also point to the kind of information ecology we would like to live in. Sure, it’s easy enough to make your entire Facebook page private, but was privacy ever the point of social media? Is privacy, as Mark Zuckerberg (in)famously stated recently, “no longer a social norm”?

But, then, where does one draw the line between stupidity and paranoia? There are undeniable dangers to giving away too much of our privacy, but what might those be? Is it more reasonable to be worried about burglars robbing your home, or about the larger privacy and security implications of geolocative (social) media? Shouldn’t we be more concerned about the national security compromises that arise when heads of national intelligence disclose too much information online? For the average person, disclosing too much information may not be a security risk, but it certainly remains a privacy risk. Forget robbers: what about data trawlers, or hostile intelligence networks, or government agencies, or corporate interests, who amass our geolocative (and all our other) social media information?

By default, Facebook makes you publicly searchable by everyone, and publicly visible to everyone in your networks. Default settings more often go unchanged than not. Privacy is not a default setting. The question is whether it still remains a social standard.

Social media is still a new technology. It will have direct implications for things such as hikes in home insurance premiums, as well as much larger cultural consequences. Is a social media universe where it is considered unsafe to post birthdates, pets’ names, phone numbers, or photos a friendly one? A social one? We may have to weigh privacy and friendliness against each other.

ACTA: Globalizing the DMCA – by “Elie C”

Arrr, YouTube! Where be my videos??

Although the public has been denied access to negotiations (note: the RIAA and MPAA don’t count as ‘the public’), drafts of ACTA’s internet enforcement section leaked online last week, providing a chilling glimpse into the covert negotiations among world leaders and the possible future of global internet policing. Introduced by the US Administration in 2007, the Anti-Counterfeiting Trade Agreement (ACTA) proposes making ISPs liable for content that subscribers transfer using their networks, forcing ISPs to operate “automatic technical processes” to detect copyright-infringing activities. While section 512 of the DMCA already establishes third-party liability in the US, ACTA extends the liability of intermediaries beyond notice-and-takedown to possible Deep Packet Inspection (bye bye net neutrality?) and the contentious three-strikes rule (which France passed in 2009, banning three-time accused file-sharers from the internet), thus exacerbating and spreading the shortcomings of the DMCA internationally – namely to the E.U., Canada, Mexico, Australia, New Zealand, South Korea, Singapore, Jordan, Morocco and the United Arab Emirates.

The current digital gatekeeping model established by the DMCA promotes a ‘shoot now, ask questions later’ approach to handling allegations of copyright infringement. While the DMCA imposes the burden of proof on copyright holders and outlines the necessary elements of a notification of copyright infringement, the court found in ALS Scan, Inc. v. RemarQ Communities, Inc. that copyright owners do not have to identify all infringing material (“imperfect notice”), thus shifting this responsibility to service providers. Although a complaint of infringement does not prove that infringement took place, the DMCA allows ISPs to take down content without first investigating whether the material was truly infringing, thereby shifting the burden of proof onto subscribers.

The only recourse subscribers have is filing a counter-notice expressing a “good faith belief that the material was removed or disabled as a result of mistake or misidentification…” Intermediaries risk losing their safe harbor if they do not take down infringing works, whereas the consequences of wrongful takedowns amount to little more than disgruntled bloggers. In practice, there is no enforcement of fair use considerations prior to takedowns; section 512 of the DMCA explicitly imposes the burden of proof on copyright holders, but its third-party provisions ultimately shift the burden to subscribers.

Need to brush up on copyright law? Check out this video before it’s taken down…

As the amount of information online far exceeds copyright holders’ ability to monitor the dissemination of their works, they are becoming increasingly reliant on targeting internet hubs managed by intermediaries. In turn, intermediaries like YouTube are going “well above and beyond our legal responsibilities” by turning to automated technologies to keep up with this inundation of user-generated and user-uploaded content; YouTube’s ContentID system enables copyright owners to automatically identify their works in YouTube-hosted videos, and subsequently to monetize, track or block the content. Technology that automatically screens for the presence of copyrighted material inevitably steamrolls over fair uses in favor of ease for copyright holders and intermediaries. As a standard rather than a rule, fair use cannot be identified by an algorithm (real lawyers have enough trouble accomplishing that as it is), and technologies such as ContentID will further contribute to mass takedowns, leaving subscribers guilty until they prove themselves innocent.
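ContentID’s real matching relies on proprietary perceptual fingerprinting that survives re-encoding, and none of it is public. But the general shape of automated screening, fingerprinting a registered work and flagging any upload whose fingerprints overlap it past some threshold, can be sketched with plain chunk hashing as a toy stand-in:

```python
import hashlib

def fingerprint(data: bytes, chunk: int = 1024) -> set:
    """Hash fixed-size chunks of a file into a set of fingerprints."""
    return {hashlib.sha256(data[i:i + chunk]).hexdigest()
            for i in range(0, len(data), chunk)}

def match_fraction(upload: bytes, registry: set) -> float:
    """Fraction of an upload's chunks found in the registry of
    claimed works; a threshold then decides block/monetize/ignore."""
    prints = fingerprint(upload)
    return len(prints & registry) / len(prints) if prints else 0.0

claimed = fingerprint(b"A" * 4096)        # rights holder registers a work
upload = b"A" * 2048 + b"B" * 2048        # an upload that half-copies it
print(match_fraction(upload, claimed))    # prints 0.5
```

Note what is absent: nothing in this pipeline can ask whether a 50% overlap is a parody, a review, or a classroom excerpt, which is precisely the fair-use blindness described above.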

Can the Government Really Save You From Yourself? – by “Thad D”


Court Photo
The Plaintiffs Appear in Court, Represented by the ACLU

On January 15th, 2010, the case of Miller v. Skumanick continued in the Third Circuit Court of Appeals.  For those of you unfamiliar with the case: in late October 2008, officials in Pennsylvania’s Tunkhannock School District discovered cell phone photos of high school girls posing “provocatively.”  The two photos discussed in Miller’s suit are one depicting a teenager with a towel wrapped around her waist and her breasts exposed (as if she had just taken a shower), and another depicting two girls in training bras making “peace signs” for the camera.  School officials handed the cell phones over to the office of the District Attorney, George Skumanick, who decided to meet with all the students involved and offer them a deal: either they would take an education course or face prosecution on charges of distributing child pornography.  Parents would have to pay the course enrollment fee, and the course itself would discuss, among other things, “what it meant to be a girl in today’s society” and would require each student to write an essay describing why what she did was wrong.

[Notice that, in last month’s proceedings, the defense refers to the admission of guilt as having to write an essay about why it was “unwise to send those photographs” – the word ‘wrong’ being intentionally omitted].

At any rate, the plaintiffs, feeling that their constitutional rights were being infringed upon, filed an injunction to prevent District Attorney Skumanick from taking prosecutorial action.

Although this case touches on a host of today’s hot issues, what is interesting to note is the concept, consistently raised by the defense, that Skumanick’s office was “protecting the children from themselves”.  This post is a mixture of a response to and an analysis of an article from Reason.com (aptly titled “Ruining Kids in Order to Save Them”).

So, what do the defendants mean in describing their prosecutorial discretion as ‘saving the kids from themselves’?  In this, the defense is very clear: “Children are immature, children are vulnerable.  The entire basis of the juvenile code is to protect children from themselves…When kids disseminate pictures of themselves through the Internet they are putting themselves and other children at risk.” (Miller – Oral Arguments, pp. 15-16).

The defense is adamant and clear in its stance.  What is more important, though, is to ask whether this line of thinking is applicable.  What are we really trying to accomplish here?  Certainly, children who are abused and exploited become victims the moment the act takes place.  Disseminating images of such child pornography does create a huge problem in our society, and provokes and encourages this socially unacceptable behavior.

But what if a teenager posted a picture to Facebook of herself posing in a bikini, or of himself shirtless on the beach, or some other scenario where the youth was, simply put, not dressed in accordance with Sharia law?  We may laugh at the ridiculousness of calling such a photo pornography, but I should remind you that two of the girls facing prosecution had opaque training bras on, with no genitalia or breasts exposed, in the “pornographic” photo.

From the reaction of the girls in this case, it’s clear that they felt no more victimized by these photos than if their parents had taken pictures of them on the beach in two-piece bathing suits.  In fact, the District Attorney, in his meeting with the children, had commented that he could prosecute teenagers who appeared in pictures wearing bikinis, based on his reading of the Pennsylvania statute on child pornography.

Given what I’ve said, it’s easy to dismiss Mr. Skumanick as a reactionary zealot, and these photos as incredibly innocent and, more importantly, victimless.  But perhaps the plaintiffs truly are too immature to understand whether or not they are being victimized.  Whether one buys the argument that people under 18 are not mature enough to make serious decisions is an entirely different issue, one too lengthy to delve into here.

However, if these children are indeed too immature, are their parents wrong in deciding that the photos of their children are not obscene?  Do public officials have the right to enforce their judgment over the heads of parents? And if so, where do we draw the line on which issues those officials have the right to meddle in?

In this case, at the very least, it would seem that the prosecution is making a mountain out of a molehill.  And honestly, since Skumanick pursued prosecution against the teenagers at the beginning of an election year, it’s pretty transparent what he was trying to accomplish (hint: it rhymes with “me-election”).  However, in noting all the shortcomings of our legal system, and the loopholes that a few avaricious public officials will claw their way through to reach the national spotlight, we should also reflect on the importance of laws that occasionally harm our citizens.

Stockholm syndrome, a serious psychological effect, occurs when a hostage develops an emotional bond with, or even falls in love with, their captor.  Related, but slightly different, is when a victim falls in love with their tormentor.   Hypothetically, if a parent or guardian took photos of a child in which the child appeared fully clothed but in provocative poses, and the child willingly disseminated those photos at the parent’s request (because the child still loved and obeyed him or her), the state would have no recourse against said parent.  Although contrived, the case still applies: no physical abuse would take place, yet inappropriate photos of a juvenile would be distributed, and the crime would not be “victimless”.

The issue of government intervention in parenting has always been contentious.  But as kids get access to newer, more powerful technological tools at earlier and earlier stages of life, the issue of morality, parenting, and technology will more often rear its ugly head.

Google and its endless list of privacy issues – by “Aditya K”

(There should be a video below. For some reason, embedding it isn’t working. Hopefully it’ll be fixed soon. In the meantime, click here to watch it!)

“At Google, we make privacy a priority in everything we do.” Except when it comes to rushing to launch a new product.

Not too long ago, Google launched a new product called Buzz, as I’m sure you all know. If Twitter and Flickr and WordPress and maybe even Facebook had a lovechild that was raised by Google (custody issues…), you’d have Buzz. As of now, Buzz is kind of a mess. People have had the typical knee-jerk “I’m-going-to-hate-this-because-it’s-new” reaction. Most people who use it simply have their Twitter posts feed directly into it. People are using it as a kind of public pseudo-Facebook wall, amassing people who don’t know each other into conversations that are unrelated to the original post (or the original poster). Ahem. Case in point. (There are potential privacy issues in that too, although many are brought upon oneself: exposing email addresses, sensitive information in a public setting, etc.)

When Google unveiled this new feature (on everybody at once), it overlooked a pressing privacy issue. Essentially, the service made you automatically follow your most frequent contacts, and vice versa. People had access to your feed and information without your consent. This leads to instances like this one (via TechCrunch):

I use my private Gmail account to email my boyfriend and my mother.

There’s a BIG drop-off between them and my other “most frequent” contacts.

You know who my third most frequent contact is?

My abusive ex-husband.

Which is why it’s SO EXCITING, Google, that you AUTOMATICALLY allowed all my most frequent contacts access to my Reader, including all the comments I’ve made on Reader items, usually shared with my boyfriend, who I had NO REASON to hide my current location or workplace from, and never did.

This privacy breach let folks determine who their friends, employees, spouses, and others were contacting frequently. As Nicholas Carlson of Business Insider discovered, Buzz could expose marital infidelity, anonymous sources, private emails in the workplace, and more. The defaults guessed at who you wanted to expose your information to; Google assumed it knew your friend circle. Opting out was difficult, if not impossible.
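Google never published the exact heuristic, but the reported behavior, a public follow list seeded from your most-emailed contacts, amounts to something this simple (the addresses are invented for illustration):

```python
from collections import Counter

def auto_follow(sent_log, n=3):
    """Buzz-style default: publicly follow the n contacts you have
    emailed most often, without asking first."""
    return [addr for addr, _ in Counter(sent_log).most_common(n)]

log = (["boyfriend@example.com"] * 40 +
       ["mom@example.com"] * 25 +
       ["abusive-ex@example.com"] * 6 +
       ["coworker@example.com"] * 2)
print(auto_follow(log))  # the ex makes the public list
```

Frequency is a decent proxy for “people I email,” but a terrible proxy for “people I want the world to know I email,” which is the whole failure.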

Even though Google has fixed most of these issues, the launch was still a case of awful foresight on its part. Computers are getting smarter and are perhaps decent at choosing who your friends are or whom you’re most interested in, but even if this technology were flawless, making these lists public and available to those around you is just stupid. Gmail, to which Buzz is loosely connected, is often what people use for private communication. By tacking on this very public feature without allowing people to opt in, Google crossed some lines that should not have been crossed.

And now for a loose but related connection:

James Grimmelmann, professor at New York Law School and all-out smart guy, posted an awesome recap and analysis of today’s Google Books Settlement fairness hearing, where a number of parties presented their cases to the judge either supporting or opposing the settlement. Many of the arguments raised against the settlement (from the EFF and EPIC and other digital rights groups) dealt with privacy. Google Books would in essence have access to a large corpus of data—”It can track not just what books you read, but which pages, and what you scribble in the margins”—that would perhaps allow Google to tailor ads or sales to an unheard-of degree. This would also raise questions about law enforcement and how this data would be handled by a single entity.

Privacy issues will always be a concern, especially when it comes to Google, and there is good reason why they should be a top concern. Hadrian Katz, who spoke for the Internet Archive against the settlement, ended his argument with this point: Google claims to have taken privacy into account, but perhaps the recent Buzz fiasco is a good example of how seriously Google actually takes the issue.

Update: Technology Review has a cool article on how Google Buzz has changed since its release. Check out the multimedia timeline too!