With increasing volume, “creepy” has snuck its way into the privacy lexicon and become a mainstay in conversations around online sharing and social networking. How is it possible that we use the same word to describe Frankenstein and Facebook?
In November, I gave a LinkedIn Tech Talk (video, slides) where I shared my thoughts on the impact of privacy vertigo on both government regulation and societal norms. Thanks to LinkedIn’s Daniel Tunkelang, who posted a thoughtful review of the talk.
I use the term privacy vertigo to characterize the queasiness we feel over the rupture of our anonymity bubble — a bubble that has grown along with US population density over the past 150 years.
How’d we get here? In the days of rural small towns, when everyone knew everyone’s business, anonymity was low and so, too, were privacy expectations. Our privacy expectations have grown along with dense urban cities. But what goes up has abruptly come down amidst the sudden mass adoption of social media.
This free-fall is the result of downward pressure from social media and big data that’s returning our dense, anonymous cities back to small towns. And without reasonable and agreed upon expectations for what’s public and private, we have a privacy vertigo pandemic on our hands. And under privacy vertigo, lots of stuff seems creepy.
To cope, I’m suggesting we reframe our thinking about privacy around the places, the players, and the perils of social media and data uses:
- The Places: Are we operating in public or private? Some places, like your home and body, are clearly private and protected by law. Your car is currently a gray area, at least until US v. Jones is decided by the US Supreme Court.
- The Players: Who are the players and how much relative power do they have in their relationship? I’ve created this infographic to show how privacy rights should grow with the power disparity of the players involved.
- The Perils: Regardless of the availability of data, we should be concerned with how that data might be used against us. I’ll be exploring the benefits and perils of data use at Strata Conference 2012 with NYU’s Solon Barocas and O’Reilly’s Alex Howard. Update: Here’s a recap of our Strata Session: Is Privacy a Big Data Prison?
This places-players-perils framework is the start of a toolset to determine when a data practice is truly creepy. By the way, I’ve previously used spaces-players-consequences to describe the same thing, but I’ve come to like this places-players-perils alliteration better (hey, I’m making this up as I go, so feel free to suggest something else).
Here are a couple of examples of how this framework might be used:
- News of the World Phone Hacking Scandal: To fuel its news scoops, this 168-year-old UK newspaper was accused of hacking into the phones of citizens, celebrities, and even government officials. Phone and voicemail messages are clearly private places, with sometimes dire perils if they’re breached. But what sets this case apart is the abuse of power by the press — the Fourth Estate, endowed with nearly governmental power. With great power comes great responsibility, and News of the World grossly abandoned its own. Verdict: Creepy.
- Carrier IQ Phone Rootkit Discovery: The company was sniffing keystrokes and other information off cell phones to fulfill contracted services for its carrier customers. As with the News of the World case, phones are private places where high-powered players (like governments) could snoop. Certainly, the PR could have been handled better, but the customer data use was in accordance with delivering your cell phone service. As Forbes’ Kashmir Hill rightly points out, Carrier IQ is Not Evil.
Verdict: Not Creepy.
Privacy concerns boil over when there’s confusion or conspiracy (which are often indistinguishable) along any of these axes — when we think we’re in private but we’re not; when we’re dealing with powerful players; and/or when the perils could be dire.
Recently, the most common problem has been the shifting sharing line that divides public places from private. Jeff Jarvis recently argued in his book, Public Parts, that we should make our own decisions around the risks and rewards of being more public or being public at all (BTW, you can blame Jarvis for sensitizing me to this catch-all use of “creepy”). Gordon Crovitz’s review of Public Parts sums it up well:
> Privacy is notoriously difficult to define legally. Mr. Jarvis says we should think about privacy as a matter of ethics instead. We should respect what others intend to keep private, but publicness reflects the choices “made by the creator of one’s own information.”
I agree, but I think this privacy ethic needs an actionable framework, like places-players-perils, so we can responsibly innovate, cope with privacy vertigo, and call out the truly creepy doers.
Update: Thanks to Paul Corriveau for an excellent comment on changing pitfalls to perils in the framework. Not only is perils easier on the tongue, but it’s a more accurate description of the dire consequences that can come from privacy abuses.