Category Archives: Privacy

The Other Side of Privacy

After attending last week’s Federal Trade Commission online privacy roundtable, I struggled for several days to make sense of my notes and of my own response to calls for new legislation to protect consumer privacy. The result was a 5,000-word article—too long for nearly anyone to read. More on that later.

Even as the issue of privacy continues to confound people much brighter than me, the related problem of securing the Internet has also been getting a great deal of attention. This is due in part to the widely reported announcement from Google that its servers and the Gmail accounts of Chinese dissidents had been hacked, leading the company to threaten to leave China altogether if its government continues to censor search results.

Both John Markoff of the New York Times and Declan McCullagh of CBS Interactive have also been back on the beat, publishing some important stories on the state of American preparedness for cyberattacks (not well prepared, they conclude) and on the continued tension between privacy and law enforcement. See in particular Markoff’s stories on Jan. 26 and Feb. 4 and McCullagh’s post on Feb. 3.

Markoff reports a consensus view that the U.S. does not have adequate defensive and deterrent capabilities to protect government and critical infrastructure from cyberattacks. Even worse, after years of studies and planning, the author of the most recent effort to craft a national strategy told him, “We didn’t even come close.”

Markoff also reports that Google has now asked the National Security Agency to investigate the attacks that led to its China announcement and the subsequent exchange of hostile diplomacy between the U.S. and China. Dennis C. Blair, the director of national intelligence, told Congress earlier this week that “Sensitive information is stolen daily from both government and private-sector networks….”

That assessment seems to be buttressed by the findings of a new study sponsored by McAfee. As Elinor Mills of CNET reported, 90% of survey respondents from critical infrastructure providers in 14 countries acknowledged that their enterprises had been victims of some kind of malware attack. Over half had experienced denial-of-service attacks.

These attacks and the lack of adequate defenses are leading companies and law enforcement agencies to work more closely, if only after the fact. But privacy advocates, including the Electronic Frontier Foundation and the Electronic Privacy Information Center, are concerned about increasingly cozy relations between major Internet service providers and law enforcement agencies including the NSA.

They are likely to become apoplectic, however, when they read McCullagh’s post. He reports that a federal task force is about to release survey results suggesting law enforcement agencies would like an easier interface for requesting customer data from cell phone carriers, along with rules requiring Internet companies to retain user data “for up to five years.” The interface would replace the time-consuming and expensive paper warrant process now necessary for investigators to gain access to customer records.

Privacy advocates and law enforcement agencies are simply arguing past each other, with Internet companies trapped in the middle. Unmentioned at the FTC hearing—largely because law enforcement falls outside the agency’s jurisdiction—is the legal whipsaw Internet companies currently face. On the one hand, privacy and consumer regulators in the U.S., Europe and elsewhere are demanding that information collectors, including communications providers, search engines and social networking sites, purge personally identifiable user data from their servers within twelve or even six months.

At the same time, law enforcement agencies of the very same governments are asking the very same providers to retain the very same data in the interest of criminal investigations. Frank Kardasz, who conducted the law enforcement survey, wrote in 2009 that ISPs who do not keep records long enough “are the unwitting facilitators of Internet crimes against children.” Kardasz wants laws that “mandate data preservation and reporting,” with retention perhaps as long as five years.

ISPs and other Internet companies are caught between a rock and a hard place. If they retain user data they are accused of violating the privacy interests of their consumers. If they purge it, they are accused of facilitating the worst kinds of crime. This privacy/security schizophrenia has led leading Internet companies to the unusual position of asking for new regulations, if only to make clear what it is governments want them to do.

The conflict becomes clear just by considering one lurid example (the favorite variety of advocates on both sides) that was raised repeatedly at the FTC hearing last week. As long as service providers retain data, the audience was told, there is the potential for the perpetrators of domestic violence to piece together bits of that information to locate and continue to terrorize their victims. Complete anonymization and deletion, therefore, must be mandated.

But turn the same example around and you reach the opposite conclusion. While the victim of the crime is best protected by purging, capturing and prosecuting the perpetrator is easiest when all the information about his or her activities has been preserved. Permanent retention, therefore, must be mandated.

This paradox would be easily resolved, of course, if we knew in advance who was the victim and who was the perpetrator. But what to do in the real world?

For the most part, these and other sticky privacy-related problems are avoided by compartmentalizing the conversation—that is, by talking only about victims or only about perpetrators. As Homer Simpson once said, it’s easy to criticize, and fun too.

Unfortunately it doesn’t solve any problem, nor does it advance the discussion.

The Real Privacy Paradox

Two stories in the news today about online privacy suggest a paradox about user attitudes. But not the one everyone always talks about, in increasingly urgent terms.

One story, from CNET’s Don Reisinger, reports on a study conducted by an Australian security firm. The company created two phony Facebook profiles and tried to “friend” 100 random Facebook users. Between 41% and 46% of the users “blindly accepted” (to quote the firm) the requests, giving the fake profiles access to those users’ birth dates, email addresses, and other personal information.

“This is worrying,” the company’s blog reported, “because these details make an excellent starting point for scammers and social engineers.”

The other story, reported by the New York Times’ Stephanie Clifford, involves the raucous start today of a Federal Trade Commission conference on privacy and technology. The conference began with a full day of anxious hand-wringing. Quotes from two academics caught my eye. Penn’s Joseph Turow told a panel, “Generally speaking, [consumers] know very very little about what goes on online, under the screen, under the hood. The kinds of things they don’t know would surprise many people around here.”

Then there were even more ominous words from Columbia’s Alan Westin. Speaking of the bargain by which users of Yahoo, Google, Facebook, Twitter and other Internet giants grant access to their information (and therefore submit to targeted advertising) as a pre-condition of using “free” services, Westin reported that “that bargain is now long gone, and people are not willing to trade privacy for the freebies on the Internet.”

As I write in Law Two of The Laws of Disruption (“Personal Information”), researchers, advocacy groups and their colleagues in the mainstream media have for years been describing what they call “the privacy paradox.” User surveys consistently find that consumers are concerned (even “very concerned”) about their privacy online, and yet do nothing to protect it. They don’t read privacy policies, they don’t protect their information even when given the tools to do so, and they merrily click on targeted advertisements and even buy things that online merchants deduce they might want to buy.

Oh, the humanity.

I see no paradox here. Much of the research conducted about consumer concerns over privacy is of extremely poor quality—surveys or experiments conducted by interested parties (security companies) or by legal scholars with little to no appreciation for the science of polling. Of course consumers are concerned about privacy and are uncomfortable with concepts like “behavioral” or “targeted” advertising. No one ever asks whether they understand what those terms really mean, or whether they’d be willing to give up free services to avoid them. And consumers, when they’re being surveyed, are very likely to think differently about their “attitudes” than when they are busily transacting and navigating their information pathways.

What, for example, is the basis for Prof. Westin’s claim that people are no longer willing to make the trade of information for service? The 350,000,000 users now reported by Facebook, perhaps, or the zillion Tweets a day?

And where does the Australian security firm get the idea that scammers are sophisticated enough to use birthdates and other personal data to fashion personalized scams? The completely unspecific Nigerian variations seem to work just fine, thank you. How’s this for a series of non sequiturs, again from the Australian experimenters: “10 years ago, getting access to this sort of detail would probably have taken a con-artist or an identity thief several weeks, and have required the on-the-spot services of a private investigator.”

Huh? To get someone’s email address, birthday, and the name of the city they lived in? Most of that data is freely accessible in public records. Yes, even in the innocent bygone days of ten years ago.

The real paradox—and a dangerous one at that—is between the imminent privacy apocalypse preached with increasing hysteria by a coalition of legal scholars, security companies, journalists and a small fringe of paranoid privacy crazies (not necessarily separate groups, by the way) and the reality of a much more modest set of problems, most of which pose little or no harm to most users. Which is to say, as CNET’s Matt Asay put it, “It’s not that we don’t value our privacy. It’s just that in many contexts, we value other things as much or more. We weigh the risks versus the benefits, and often the benefits trump the privacy risks.”

That is not to say there is no privacy problem. It is a brave new world, where new applications create startling new ways of interacting, not all of them pleasant or instantly comfortable. Consider some recent examples:

    – Photo applications can now use pattern-matching algorithms to take “tagged” faces from one set of photos and find matches across very large photo datasets (a minimal sketch of the idea follows this list).
    – Facebook is in the process of settling a series of lawsuits over its ill-fated Beacon service, which reported actions its users took elsewhere in the Infoverse back to Facebook for posting on their profile pages.
    – A recent survey found that a significant number of companies have not made compliance with the Payment Card Industry’s Data Security Standard a priority.
    – Loopt, which makes use of GPS data to tell cell phone users where their friends are, introduced a new service, Pulse, to provide real-time information about businesses and services based on a user’s physical location.
    – The EU recently adopted stricter rules requiring affirmative opt-in for cookies.
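
Here is that sketch of the photo-matching item, hedged accordingly: real services derive face “embeddings” with trained recognition models, and the names, toy vectors, and similarity threshold below are invented purely for illustration.

    import math

    def cosine_similarity(a, b):
        # Two face "embeddings" match when their vectors point the same way.
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    # Vectors computed once from photos where someone tagged the face.
    tagged_faces = {
        "alice": [0.91, 0.10, 0.33],
        "bob":   [0.12, 0.88, 0.41],
    }

    def identify(unknown_face, threshold=0.95):
        """Return the best-matching tagged name, or None if nothing is close."""
        best_name, best_score = None, threshold
        for name, vector in tagged_faces.items():
            score = cosine_similarity(unknown_face, vector)
            if score > best_score:
                best_name, best_score = name, score
        return best_name

    # A face found in a stranger's photo can now be linked back to a tag.
    print(identify([0.89, 0.12, 0.35]))  # -> alice

Even the toy version shows why tagging worries people: one tag anywhere makes every other photo of that face linkable to a name.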

What these and other examples suggest is that, as so often happens, the capacity of information technology to connect the dots in interesting, potentially valuable (and potentially embarrassing) ways regularly outpaces our ability to adjust. It is only after the fact that we can decide if, how, and when we want to take advantage of these tools.

There are real privacy issues to be considered, but they are far more subtle and far more ambiguous than the frenzied attendees of the FTC’s conference would have us—or themselves, more likely—believe.

It’s not, in other words, that we need to retrain consumers until their doggedly contrary online behavior matches their stated privacy “attitudes.” Rather, we need to study the behavior itself, as only a few researchers (notably UC Riverside marketing professors Donna Hoffman and Tom Novak) actually bother to do. It is, after all, much easier to design self-congratulatory surveys and pontificate on abstract privacy theory than to study consumer behavior at scale. (More fun, too.)

Until we can begin to talk sanely and sensibly about the costs and benefits of information generation, collection, and use, regulators are well-advised to do very little by way of remedies for the wrong set of problems. (So far, the FTC and other U.S. agencies have, thankfully, done very little privacy legislating and rulemaking.) Businesses would be smart to adopt information security practices that should have been standard a generation ago, and to educate their customers about their commitment to doing so.

As for consumers—well, consumers will do what they always do—vote with their wallets.

And please, pay no attention to the frantic man behind the screen. Even if he insists on giving you his name, email address, and, heaven forbid, his birthday.

The Persistent Myths of Identity Theft

Law Six of The Laws of Disruption deals with the myths and realities of Internet crime.  It’s a subject that’s bothered me for a long time.  Back in the Stone Age (1995), John Perry Barlow and I wrote a Position Paper for Computer Sciences Corporation titled, “Five Privacy and Security Imperatives for Electronic Trade.”  (It’s so old I can’t even provide a link!)

This was before there was any electronic trade, or what came to be known (when it arrived) as e-commerce.  This was in the era where people were saying things like, “No one will ever give their credit card number out over the Internet.”  (Never start a sentence with “no one will ever,” especially when it relates to technology.)

The problem was that most of the people saying “no one will ever” worked for banks and credit card companies.  Many of them were clients of our research program.  They were overwhelmed by the idea of e-commerce.  Technically, they didn’t know how they would integrate their private networks with the public Internet.  From a business standpoint, they didn’t know how they could make it cost-effective to process what were expected to be smaller-dollar transactions in high volume from a new kind of merchant population.  Not to be unkind, but much of the fear surrounding e-commerce was generated to hold back the flood while these companies looked for ways to build dams.

Eventually these problems were resolved, but the fear-mongering has had a lasting effect.  In 2001, according to the Pew Internet & American Life Project, 87% of Americans said they were concerned about credit card theft online; by 2008 it was down only marginally.  Yet by 2009 over 50% of all American adults had paid online with a credit card anyway.

In the interim, of course, an entire industry has emerged with a strong incentive to keep the fear numbers high.  Companies that make money selling anti-virus software, credit reports, identity theft insurance and alternative payment methods (e.g., PayPal) stoke the fears of users that only a fool would ever type his or her credit card number into a web browser.

Identity theft is real, but for those who have been its victims, the loss of money is generally the least of the damage (banks and credit card companies are legally obliged to return money fraudulently obtained from a customer’s account).  The real injury comes in restoring credit histories and credit scores, and the culprits there are often the consumer’s own financial services providers.

The recent indictment of three men in the theft of 130 million credit card numbers is a good example of the continued obfuscation employed by the industry and their counterparts at the Federal Trade Commission, confusion often left unchallenged by journalists.  The thieves, an American named Albert Gonzalez and his offshore co-conspirators, broke into the corporate networks of payment processors as well as major retailers including 7-Eleven and T.J. Maxx.  When Gonzalez pleaded guilty, the Associated Press described him as “masterminding one of the largest cases of identity theft in U.S. history.”  Reuters called it “one of the largest identity-theft crimes on record.”

Stealing credit card numbers from corporate computers is a serious crime, but it is not “identity theft.”

The problem is that “identity theft” has come to mean many different things, including what we may now think of as the quaint form where consumers give their credit card number online to a scam artist, often in response to a fake email message purporting to be from their bank or another payment processor.  The scammer uses or sells the number to open new accounts, make fraudulent withdrawals or charges, and otherwise pass himself off as if he were the victim.  (See my 2005 article, “If Feds Fail, What Can Stop Identity Theft?”)

But that’s small potatoes compared to the kind of crime Gonzalez and his colleagues committed, where millions of credit card numbers are stolen and then sold.  Most of these, however, don’t actually result in identity theft—the credit card numbers are used to get cash and merchandise, and are quickly disabled by software that recognizes dubious transactions.  Again, the financial losses here are borne by the banks and credit card processors, not the consumers or the merchants.  That’s why the software is good and getting better.  It’s their money at stake.
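
To make the point concrete, here is a minimal, hypothetical sketch of the kind of screening described above. Real issuers use far richer statistical models; the rules, thresholds, and field names here are invented, but the shape of the idea is the same: compare each charge against the card’s recent pattern and hold anything that deviates sharply.

    from datetime import datetime, timedelta

    def looks_dubious(transaction, recent_history):
        """Flag a charge that deviates sharply from the card's recent pattern."""
        reasons = []
        amounts = [t["amount"] for t in recent_history] or [0.0]
        typical = sum(amounts) / len(amounts)
        if transaction["amount"] > 10 * max(typical, 1.0):
            reasons.append("amount far above typical spend")
        countries = {t["country"] for t in recent_history}
        if countries and transaction["country"] not in countries:
            reasons.append("first charge from a new country")
        recent_hour = [t for t in recent_history
                       if transaction["time"] - t["time"] < timedelta(hours=1)]
        if len(recent_hour) >= 5:
            reasons.append("unusually rapid series of charges")
        return reasons  # non-empty means: hold the card, call the customer

    history = [{"amount": 40.0, "country": "US",
                "time": datetime(2010, 2, 1, 12)} for _ in range(3)]
    charge = {"amount": 2500.0, "country": "RO",
              "time": datetime(2010, 2, 1, 13)}
    print(looks_dubious(charge, history))

Nothing here involves anyone’s “identity”; it is pattern-matching on the issuer’s own transaction stream, which is exactly why both the losses and the incentive to improve the software sit with the banks.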

No one’s “identity” is being stolen, but the use of the term to describe every financial fraud involving a computer amps up the terror level of consumers who largely have nothing to fear.  The vast majority of “real” identity theft has nothing to do with computers at all, but rather  begins with a stolen or lost wallet, stolen or simply discarded mail, or inside jobs pulled by clerks and others with legitimate access to the data.

The real problems are on the back end, where credit card systems are left insufficiently secured, or where laptops with sensitive data are left in the back seats of cars and stolen, not for the data but for the hardware.  We keep hearing horror stories of government employees, university officials, and private-sector employees who can’t even be bothered to put password protection on their logins, let alone encrypt their data.  And the continued use of Social Security numbers by private enterprises as both a customer ID and an authentication field is probably the most dangerous practice of all.
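
To make that last point concrete, here is a sketch of the alternative (under invented field names, not any institution’s actual practice): reference customers by a random surrogate key, and keep only a salted, slow hash of the Social Security number for occasional verification, never as an identifier. A real system would also encrypt the record at rest with a vetted cryptography library, since nine-digit SSNs are guessable by brute force.

    import hashlib
    import secrets

    def new_customer_record(ssn: str) -> dict:
        customer_id = secrets.token_hex(16)  # surrogate ID, not the SSN
        salt = secrets.token_bytes(16)
        digest = hashlib.pbkdf2_hmac("sha256", ssn.encode(), salt, 200_000)
        return {"customer_id": customer_id,
                "ssn_salt": salt.hex(),
                "ssn_hash": digest.hex()}

    def ssn_matches(record: dict, claimed_ssn: str) -> bool:
        # Recompute the slow hash and compare in constant time.
        digest = hashlib.pbkdf2_hmac("sha256", claimed_ssn.encode(),
                                     bytes.fromhex(record["ssn_salt"]),
                                     200_000)
        return secrets.compare_digest(digest.hex(), record["ssn_hash"])

    record = new_customer_record("078-05-1120")  # the famous sample SSN
    print(record["customer_id"])                 # what other systems reference
    print(ssn_matches(record, "078-05-1120"))    # True

A stolen laptop full of records like these yields no directly readable SSNs, and no customer is ever asked to “authenticate” with a number that thousands of clerks can already see.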

Oddly enough, these were exactly the problems Barlow and I pointed out in 1995.  The solutions were obvious then, and they’re still obvious now.  But as long as consumers are being misdirected to think it’s their behavior that needs to be controlled, the financial services industry can avoid solving their largely self-made problems.

Meanwhile, electronic commerce doesn’t grow as quickly as it could.

If anyone wants a hardcopy of my 1995 position paper, I’m happy to send it along!

Once More into the Tar Pits of Privacy Policy

No doubt the gooiest problem at the intersection of technology and law continues to be what is unhelpfully referred to as “privacy.” I’ve clipped five articles on the subject in just the last week, including several about Facebook’s efforts to appease users and government regulators in Canada, demands by Switzerland that Google stop its “street view” application, and a report from Information Week about proposals from a coalition of U.S. public interest groups for new legislation to beef up U.S. privacy law.

The term “privacy” is unhelpful because, as I explain in Chapter 3 of “The Laws of Disruption,” it is a very broad term applied to a wide range of problems. “Privacy” is shorthand for problems including government surveillance, criminal misuse of information, concealment of personal information from friends and family, and protection of potentially harmful or embarrassing information from employers and other private parties (e.g., insurance firms) who might use it to change the terms of business interactions (e.g., increasing premiums when your vehicle’s RFID toll tag tracks your tendency to speed).

Unfortunately, all of these problems are included in discussions about privacy, and in many cases it’s clear the different parties in the conversation are actually talking about very different aspects of the problem.

Beyond the definitional issues, there is also what one might think of as the irrational or emotional side of privacy. Instinctively, and for different cultural reasons, most of us have a strong reaction to new products and services that will make use of what we think of as intimate facts about ourselves and our activities, even when the goal of that use is to improve the usefulness of such products or make them available at a lower cost.

In the U.S., the national character of the “frontier” nation stressed the desire of eccentric individuals to get away from the moral, religious, or social strictures of European life.

In Europe, historical events in which private information was abused to horrific ends (the Inquisition, the Holocaust, the oppression of life in the Soviet Union and its satellites such as East Germany, where as many as one in three citizens were paid informants on the others) bubble just below the surface of the “debate” about “privacy.”

At one extreme, a small but vocal group of pseudo-millennialists believes that identification technologies signal the coming of the Antichrist, as prophesied in the Book of Revelation.

Of course the problems faced by policy makers on a day-to-day basis seem modest, even trivial, in isolation. Users of Facebook applications (quizzes and the like) allow outside software developers to use the identities of their friends to pass scores around, or to challenge other users. Google Street View, which aims to enhance Google Maps with real photos of streets and houses, inadvertently and perhaps unavoidably takes photos that show random but identifiable people and vehicles that happen to be present when the photos are taken.

Behavioral advertising aims to take contextual information about what users are doing online to present ads that are more likely to be of interest than the kind of random guessing that has historically been the realm of ads, such as those that might show up during a television program.
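
Stripped of the mystique, the core of contextual targeting can be as simple as scoring an ad inventory against the words on the page. A toy sketch, in which the ads, keywords, and scoring are all invented for illustration:

    import re

    # A hypothetical ad inventory, each ad keyed to a few trigger words.
    ads = {
        "running-shoes": {"marathon", "running", "training", "shoes"},
        "mortgage-refi": {"mortgage", "rates", "home", "refinance"},
        "cloud-backup":  {"backup", "files", "storage", "cloud"},
    }

    def pick_ad(page_text: str) -> str:
        words = set(re.findall(r"[a-z]+", page_text.lower()))
        # Highest keyword overlap wins; with no overlap at all, the first
        # ad is served, which is the old "random guessing" of a TV spot.
        return max(ads, key=lambda name: len(ads[name] & words))

    print(pick_ad("Tips for your first marathon: a 16-week training plan"))
    # -> running-shoes

The matching is mechanical from end to end; no person reads anything, a point worth keeping in mind a few paragraphs from now.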

I have to be honest and say I too have a visceral reaction when a targeted ad pops up in an unexpected context, as happens regularly on Facebook, Google, and other applications where I might be engaged in a variety of personal and business communications.

It always reminds me of the time, in the early 1980s, when I called in to a local cable access program in New York where a hippie astrologer gave consultations on the air, aided by a small, well-dressed man sitting next to her with a laptop computer. There was no tape delay on the show, and when I was on the air, the TV was literally talking directly to me, a true out-of-body experience. (The astrologer also gave profoundly good advice!)

Like most consumers, however, I quickly get over that response and realize that the appearance of intimacy, indeed of inappropriate intimacy, is just that—an appearance. Google isn’t trying to get photos of people who aren’t where they’re supposed to be. Facebook isn’t trying to undermine my personal relationships.

Behavioral ads appear to be personal, but the reality of course is that ALL of the processing is being done by cold, lifeless, uncaring computers. Gmail may “read” the contents of my messages in order to serve me certain ads. But Gmail is not a person, and there is no person or army of persons at Google sitting around reading my mail. No one, sad to say, would care enough to do so. I’m not worth blackmailing. And blackmailing is already a crime.

National governments and public interest groups can and will continue to impose new conditions on Internet products and services. (Europeans, for example, have a powerful right under Directive 95/46 against any use of their personally identifiable information.)

The reality, however, is that such regulations are always straining for a balance between the visceral response to “new” privacy invasions and the benefits to consumers that come from allowing the information to be used. That benefit, in the case of business use of information, is always the goal, even if it’s consumers as a whole who benefit rather than the individual, as in my driving-habits/insurance example. (Companies make lots of mistakes and launch ill-conceived products and services, and some of the abuses have been spectacular and public. Criminal use, such as identity theft, and government surveillance, again, are different problems.)

For the most part, consumers, if only unconsciously, seem to know how to weigh the pluses and minuses of new uses of “private” information and decide which ones to allow. I don’t mean to suggest that the market is always right and never needs external correction. But for every change in privacy law that requires new disclosures, opt-in or opt-out provisions, and other consumer protections, it doesn’t seem to take long for most if not the vast majority of consumers to agree to let the information flow where it may. Information, even private information, wants to be free.