The Real Privacy Paradox

Two stories in the news today about online privacy suggest a paradox about user attitudes. But not the one everyone always talks about, in increasingly urgent terms.

One story from CNET’s Don Reisinger reports on a study conducted by an Australian security firm. The company created two phony Facebook users and tried to “friend” 100 random Facebook users. Between 41 and 46 percent of those users “blindly accepted” (to quote the firm) the requests, giving the fake accounts access to their birth date, email address, and other personal information.

“This is worrying,” the company’s blog reported, “because these details make an excellent starting point for scammers and social engineers.”

The other story, reported by the New York Times’ Stephanie Clifford, involves the raucous start today of a Federal Trade Commission conference on privacy and technology. The conference began with a full day of anxious hand-wringing. Quotes from two academics caught my eye. Penn’s Joseph Turow told a panel, “Generally speaking, [consumers] know very, very little about what goes on online, under the screen, under the hood. The kinds of things they don’t know would surprise many people around here.”

Then there were even more ominous words from Columbia’s Alan Westin. Speaking of the bargain between users and Internet giants such as Yahoo, Google, Facebook, and Twitter, in which access to personal information (and therefore to targeted advertising) is a precondition of using “free” services, Westin reported “that bargain is now long gone, and people are not willing to trade privacy for the freebies on the Internet.”

As I write in Law Two of The Laws of Disruption (“Personal Information”), researchers, advocacy groups and their colleagues in the mainstream media have for years been describing what they call “the privacy paradox.” User surveys consistently find that consumers are concerned (even “very concerned”) about their privacy online, and yet do nothing to protect it. They don’t read privacy policies, they don’t protect their information even when given the tools to do so, and they merrily click on targeted advertisements and even buy things that online merchants deduce they might want to buy.

Oh, the humanity.

I see no paradox here. Much of the research conducted about consumer concerns over privacy is of extremely poor quality—surveys or experiments conducted by interested parties (security companies) or by legal scholars with little to no appreciation for the science of polling. Of course consumers are concerned about privacy and are uncomfortable with concepts like “behavioral” or “targeted” advertising. No one ever asks if they understand what those terms really mean, or if they’d be willing to give up free services to avoid them. And consumers, when they’re being surveyed, are very likely to think differently about their “attitudes” than when they are busily transacting and navigating their information pathways.

What, for example, is the basis for Prof. Westin’s claim that people are no longer willing to make the trade of information for service? The 350 million users now reported by Facebook, perhaps, or the zillion Tweets a day?

And where does the Australian security firm get the idea that scammers are sophisticated enough to use birthdates and other personal data to fashion personalized scams? The completely unspecific Nigerian variations seem to work just fine, thank you. How’s this for a series of non sequiturs, again from the Australian experimenters: “10 years ago, getting access to this sort of detail would probably have taken a con-artist or an identity thief several weeks, and have required the on-the-spot services of a private investigator.”

Huh? To get someone’s email address, birthday, and the name of the city they lived in? Most of that data is freely accessible in public records. Yes, even in the innocent bygone days of ten years ago.

The real paradox—and a dangerous one at that—is between the imminent privacy apocalypse preached with increasing hysteria by a coalition of legal scholars, security companies, journalists, and a small fringe of paranoid privacy crazies (not necessarily separate groups, by the way) and the reality of a much more modest set of problems, most of which pose little or no difficulty for most users. Which is to say, as CNET’s Matt Asay put it, “It’s not that we don’t value our privacy. It’s just that in many contexts, we value other things as much or more. We weigh the risks versus the benefits, and often the benefits trump the privacy risks.”

That is not to say there is no privacy problem. It is a brave new world, where new applications create startling new ways of interacting, not all of them pleasant or instantly comfortable. Consider some recent examples:

    – Photo applications can now use pattern-matching algorithms to take “tagged” faces from one set of photos and find matches across very large collections of photos.
    – Facebook is in the process of settling a series of lawsuits over its ill-fated Beacon service, which reported back to Facebook, for posting on users’ pages, actions those users took elsewhere in the Infoverse.
    – A recent survey found that a significant number of companies have not made compliance with the Payment Card Industry’s Data Security Standard a priority.
    – Loopt, which makes use of GPS data to tell cell phone users where their friends are, introduced a new service, Pulse, to provide real-time information about businesses and services based on a user’s physical location.
    – The EU recently adopted stricter rules requiring affirmative opt-in for cookies.

What these and other examples suggest is that, as so often happens, the capacity for information technology to connect the dots in interesting and potentially valuable (and potentially embarrassing) ways regularly outpaces our ability to adjust to the possibility. It is only after the fact that we can decide if, how, and when we want to take advantage of these tools.

There are real privacy issues to be considered, but they are far more subtle and far more ambiguous than the frenzied attendees of the FTC’s conference would have us—or themselves, more likely—believe.

It’s not, in other words, as if we need to militarize consumers so that their doggedly contrary online behavior reflects their privacy “attitudes.” Rather, we need to study the behavior, as only a few researchers (notably UC Riverside marketing professors Donna Hoffman and Tom Novak) actually bother to do. It is, after all, much easier to design self-congratulatory surveys and pontificate about abstract privacy theory than it is to study consumer behavior at scale. (More fun, too.)

Until we can begin to talk sanely and sensibly about the costs and benefits of information generation, collection, and use, regulators are well-advised to do very little by way of remedies for the wrong set of problems. (So far, the FTC and other U.S. agencies have, thankfully, done very little privacy legislating and rulemaking.) Businesses would be smart to adopt information security practices that should have been standard a generation ago, and educate their customers about their commitment to doing so.

As for consumers—well, consumers will do what they always do—vote with their wallets.

And please, pay no attention to the frantic man behind the screen. Even if he insists on giving you his name, email address, and, heaven forbid, his birthday.