Much has been written recently about Facebook’s new facial recognition feature. By many accounts, the feature was rolled out responsibly, even though some reflexively lodge privacy charges over most any new Facebook feature. From all appearances, the Facebook facial recognition feature is restricted to helping you tag your friends in your pictures — more convenient than creepy.
So, what’s fueling the outrage? I think it’s another symptom of the privacy vertigo I discussed recently at the Privacy, Identity, and Innovation conference and at Drexel University. We’re understandably uneasy about technology that pops the anonymity bubble we’ve been living in over the past century — an anonymity bubble that’s been inflating along with the population density of our cities. Social media and big data are turning our dense, anonymous cities back into small towns. And it’s freaking us out.
But this is not new territory. As I discussed in February (Did the Internet Kill Privacy? Insert Bad Policy Here), we need to regulate how public data is used, not whether it’s available. In a recent post, Tim O’Reilly sounds a similar note:
We need to move away from a Maginot-line like approach where we try to put up walls to keep information from leaking out, and instead assume that most things that used to be private are now knowable via various forms of data mining. Once we do that, we start to engage in a question of what uses are permitted, and what uses are not.
Tim makes the analogy to the criminal use of insider information when trading stock. It’s not illegal to have the information. It is illegal to trade on it.
Similarly, Section 604 of the Fair Credit Reporting Act (FCRA) limits the use of credit report information to clearly defined permissible purposes like credit, insurance, and employment. The FCRA doesn’t restrict the availability of the data; it strictly restricts its use.
Humanity has a long history of adjusting social behavior to respect privacy when faced with too much public information. At the recent Personal Democracy Forum, danah boyd (she prefers the cummingsian capitalization) gave several examples: locker room culture, teens hiding in plain sight from their parents, and New Yorkers ignoring you until you need their help.
Sure, new technology has a history of freaking us out for many reasons, and it certainly can be used for good or ill. I’d argue that some of the privacy conservatism expressed in the European Union is steeped in the memory of fascist abuses and the fear that tyrannical governments will return to terrorize their citizens with public data — again, data abuse, not data access. In the US, we have constitutional protections restricting our government from abusing information to exploit the citizenry (not that we always get it right).
We humans are social, clever creatures — now engaged in an innovative era where age-old social traditions are being mapped onto a revolution in online media and big data. Facebook is the latest dominant player to drive that era forward with features like facial recognition. We owe it to ourselves to criticize these innovators when they get it wrong and praise them when they get it right. And, like the FCRA and insider trading laws, regulate the abuses.