Category Archives: Privacy

Updates to the Media Page

I’ve added about a dozen new posts to the Media Page on my website, reflecting a sampling of articles, media quotes, and radio appearances from the last few months. These include several pieces for CNET News.com and Forbes, as well as links to appearances on NPR’s “Science Friday” (debating Sen. Al Franken on privacy law) and “Marketplace.”

I continue to be called on to help business leaders understand the confusing and dangerous new interest that national, state and local governments are taking in the “management” of the digital economy. I’ve been speaking most recently about Apple’s iPhone privacy flap (which turned out to have nothing to do with privacy), the AT&T/T-Mobile merger, and pending legislation in Congress aimed at curbing online piracy of movies and trademarked goods, the so-called “Protect IP” Act.

Next week, I’ll be making my tenth visit this year to Washington to meet with Congressional staffers and other policy makers to discuss these and other worrisome developments. Increasingly, my role seems to be as an unofficial representative of Silicon Valley helping regulators see the potential damage to innovation from ill-considered laws.

Of course I continue my long-standing work with companies working to introduce new products and services that exploit digital technology. The introduction of “killer apps” only gets faster with time, and more than ten years since the publication of my first book, I’m deeply flattered to hear from entrepreneurs who tell me the book still works as a manual for success in the digital age.

The iPhone flap and the anatomy of a privacy panic

I’ve written a long article this morning for CNET (See “Privacy panic debate:  Whose data is it?”) on the discovery of the iPhone location tracking file and the utterly predictable panic response that followed.  Its life-cycle follows precisely the crisis model Adam Thierer has so frequently and eloquently traced, most recently at the Technology Liberation Front.

In particular, the CNET article takes a close and serious look at Richard Thaler’s column in Saturday’s New York Times, “Show us the data.  (It’s ours, after all.)” Thaler uses the iPhone scare as occasion to propose a regulatory fix to the “problem” of users being unable to access in “computer-friendly form” copies of the information “collected on” them by merchants.

That information, Thaler assumes, is a discrete kind of property and must, since it refers to customer behavior, be the sole property of the customer, “lent” to the merchant and reclaimable at any time.

Information can certainly be treated as if it were property, and often is under law.  Personally, I don’t find the property metaphor to be the most useful in dealing with intangibles, but if you’re going to go there you need to understand the economics of information, which behaves in ways very different from physical property.  (See my chapter on the subject in “The Next Digital Decade.”)

Thaler’s “proposed rule” is wrong on the facts (he doesn’t seem to know how cell phone bills really look, and he certainly doesn’t understand how supermarket club cards operate–and these are his leading examples of the “problem”), wrong on the law, and even wrong on the business and economics.  (Other than that, it’s a pretty good article!)

This kind of intellectual frivolity is par for the course with many academic economists.  Thaler is at the University of Chicago’s business school, and describes himself as an economist and behavioral scientist.  That means instead of throwing around calculus all day, he devises toy experiments with a few subjects–or reads the findings of other behavioral scientists who have done the same.

Not only is the article bad privacy policy, it’s bad economics.  The latter is certainly the more serious concern.  Nearly 70 years after Ronald Coase called on economists to put down their pencil-and-paper methods and do empirical research into how markets actually work, the profession has if anything become more insular.  There are exceptions, of course, but they stand out in a field of mediocrity.

Which is too bad.  We need good economists now, more than ever.

Congress's Tech Agenda: Something Old, Something Older

I reported for CNET yesterday on highlights from the State of the Net 2011 conference, sponsored by the Advisory Committee to the Congressional Internet Caucus.  Though I didn’t attend last year’s event, I suspect much of the conversation hasn’t changed.

For an event that took place nearly a month after the FCC’s “final” vote on net neutrality, the issue seems not to have quieted down in the least.  A fiery speech from Congresswoman Marsha Blackburn promised a “Congressional hurricane” in response to the FCC’s perceived ultra vires decision to regulate where Congress has refused to give it authority, a view supported by House and Senate counsel who spoke later in the day.

There seemed to be agreement from Republicans and Democrats that undoing the Open Internet Report and Order was the Republicans’ top priority on the tech agenda.  Blackburn has already introduced a bill, with at least one Democratic co-sponsor, to make clear (clearer?) that the FCC has no authority to regulate any Internet activity.  And everyone agreed that the Republicans would move forward with a resolution of disapproval under the Congressional Review Act, and that the resolution would pass the House and probably the Senate.  (Such resolutions are filibuster-proof, so Senate Republicans would need only a few Democrats.)

House Energy and Commerce senior counsel Neil Fried had mentioned the CRA resolution at CES a few weeks ago.  But now it’s been upgraded from a possibility to a likelihood.

The disagreement comes over whether President Obama would veto the resolution. Speculating in a vacuum, as many participants did, doesn’t really help.   The answer will ultimately depend on what other horse trading is in progress at the time.  (See:  tax cuts, health care, etc.)  Much as those of us who follow net neutrality may think it’s the center of the political universe, the reality is that it could easily become a bargaining chip.

That’s especially so given that almost no one was happy with the rules as they were finally approved.  Among advocates, opponents, and even among the five FCC Commissioners, only Chairman Genachowski had any enthusiasm for the Order.  (He may be the only enthusiast, full stop.  On a panel on which I participated on the second day, advocates for net neutrality were tepid in their support of the Order or its prospects in court.  I think tepid is being generous.)

And everyone agreed that there would be legal challenges based on the FCC’s dubious statutory authority.  Amy Schatz of the Wall Street Journal said she knew of several lawyers in town shopping for friendly courts, and that pro-regulation advocates may themselves challenge the rule.  Timing could be important, or not.

Beyond net neutrality, which seems likely to dominate the tech agenda for the first six months of the new Congress, bipartisan words were flung over the need to resolve the imminent (arrived?) “spectrum crisis,” and to reform the bloated and creaky Universal Service Fund.  These, it’s worth remembering, were two of the top priorities from last year’s National Broadband Plan, which sadly disappeared into the memory hole soon after publication.

Other possible agenda items I heard over the course of the two-day event, but much farther down the list:  revival of COICA (giving DHS new powers to seize domains used for trademark and copyright violations), privacy, cloud computing, cybersecurity, ECPA reform, retransmission, inter-carrier compensation, and the Comcast/NBC merger.  I missed a few panels, so I’m sure there was more.

What are the chances any of these conversations will actually generate new law?  Anybody?

Updates to the Media Page

The fall has been filled with important developments in the technology world, and I continue to be a regular source for journalists as well as publishing frequent editorials and analyses of my own.  I’ve just posted another ten items to the Media Page of my website, including several articles I’ve written for CNET News.com, an election-day op-ed in Roll Call, legal analysis for The Wall Street Journal and a long review of “The Laws of Disruption” in the International Journal of Communications.  The accidents continue to pile up at the dangerous intersection of innovation and the law, the main theme of The Laws of Disruption.

Some highlights:

The U.S. Supreme Court heard arguments in EMA v. Schwarzenegger, which challenges California’s ban on violent video games on First Amendment grounds.  My article for CNET explained why the timing of the case is significant, with implications for all new media enterprises.

The European Commission is preparing new legislation to guarantee its citizens a “right to be forgotten.”  On CNET, I explain why that well-intentioned initiative could have disastrous consequences for the digital economy.

My election-day op-ed for Roll Call, the leading newspaper of Capitol Hill, urged Congress to stop the FCC’s dangerous plans to “reclassify” broadband Internet access and treat it like a 1930s-style telephone business.

My detailed analysis of Rep. Henry Waxman’s proposed net neutrality bill, a last-minute effort to resolve the long-running conflict before the election, was featured on The Wall Street Journal’s “All Things Digital.”

In the important Vernor decision, the Ninth Circuit Court of Appeals ruled that licensing agreements that deny users a right to resell copies of software are enforceable.  Though many viewed this decision as harmful to consumers, I explain why developments in the software industry have already relegated license agreements to the margins, in a controversial article for CNET News.com.

NextGenWeb, sponsored by the U.S. Telecom Association, interviewed me during one of my many recent visits to Washington.

As the new Congress prepares to convene in January, watch for more important developments.

Europe Reinvents the Memory Hole

Inspired by thoughtful pieces by Mike Masnick on Techdirt and L. Gordon Crovitz’s column yesterday in The Wall Street Journal, I wrote a perspective piece this morning for CNET regarding the European Commission’s recently proposed “right to be forgotten.”

A Nov. 4th report promises new legislation next year “clarifying” this right under EU law, suggesting not only that the Commission thinks it’s a good idea but, even more surprisingly, that it already exists under the landmark 1995 Privacy Directive.

What is the “right to be forgotten”?  The report is cryptic and awkward on this important point, describing “the so-called ‘right to be forgotten’, i.e. the right of individuals to have their data no longer processed and deleted when they [that is, the data] are no longer needed for legitimate purposes.”

The devil, of course, will be in the forthcoming details.  But it’s important to understand that under current EU law, the phrase “their data” doesn’t just mean information a user supplies to a website, social network, or email host.  Any information that refers to or identifies an individual is considered private information under the control of the person to whom it refers.  So “their data” means anyone’s data, even if the individual identified had nothing to do with its collection or storage.

And EU law doesn’t just limit privacy protections to computer data. Users have the right to control information about them appearing in printed and other analog formats as well.

As I say in the piece, the “right to be forgotten” begins to sound like Big Brother’s “memory hole” in Orwell’s classic 1984.  But instead of Winston Smith “rectifying” newspaper articles at the direction of his faceless masters at the Ministry of Truth, a right to be forgotten creates a kind of personal memory hole.  Something you did in the past that you would prefer never happened?  Just issue orders to anyone who knows about it, and force them to destroy any evidence.

Of course such a right would be as impractical to enforce as it is ill-conceived to grant.

Both Masnick and Crovitz, in particular, worry about the free speech implications of such a right, both for the press and for individuals.  And those are indeed potentially catastrophic.  Having the power to rewrite history devalues any information, including information that hasn’t been erased.

The social contract operates on facts and the ability to sort out truth from lie.  A right to be forgotten gives every individual the power to rewrite that contract whenever they feel like it.  So who would sensibly enter into such a relationship in the first place?

My concern, however, is even more metaphysical.  The privacy debate currently going on in public policy circles is disturbing, perhaps most of all because it is being framed as a policy discussion.  Rather than work out what costs and benefits we get from increased information sharing with each other, those who are feeling anxious about the pace of change in digital life are running, as anxious people often do, to regulators, demanding they do something—anything—to alleviate their future shock.  And regulators, who are pretty anxious people themselves, are too often happy to oblige, even when they understand neither the technology nor the implications of their lawmaking.

Beyond the worst possible choice of forum to begin a conversation, the privacy debate in its current form is no debate at all.  It is mostly a bunch of emotional people hurling rhetorical platitudes at each other, trading the worst-case examples of the deadly potential of privacy invasions (teen suicides, evil corporations) with fear-inspiring claims of the risk of keeping information secret (terrorists win).

It’s not really a debate at all when the two “sides” are talking about entirely different subjects.  And when no one’s really listening anyway. All that is happening is that the stress level amps up, and those not participating in the discussion get the distinct impression that the world is about to end.

A starting point for a real conversation about privacy—one that is dangerously absent from any of the current lawmaking efforts—is an understanding about the nature of information.  Privacy in general and a right to be forgotten specifically begins with the false assumption that information (private or otherwise) is a kind of property, a discrete, physical item that can be controlled, owned, traded, used up, and destroyed.  (Both “sides” have fallen into this trap, and can’t seem to get out.)

The fight often breaks down into questions of entitlement—who initially owns the information that refers to me?  The person who found it and translated it into a form that could be accessed by others, or the person to whom it refers, regardless of source?  Under what conditions can it be transferred?  Does the individual maintain a universal and inalienable right of rescission—the ability to take it back later, for any reason, and without compensating the person who now has it?

But these are the wrong questions to be asking in the first place.  Information isn’t property, at least not as understood by our industrial-age legal system or popular metaphors of ownership.  Information, from an economic standpoint, is a virtual good.  It can be “possessed” and used by everyone at the same time.  It can become more valuable in being combined with other information.  It can maintain or improve its value forever.

And, whether the law says so or not, it can’t be repossessed, put back in the safety deposit box, buried at sea, or “devoured by the flames” like the old newspaper articles Winston Smith rewrites when the truth turns out to be inconvenient to the past.  That of course was Orwell’s point.  You can send down the memory hole the newspaper that reported Big Brother’s promise of increased chocolate rations, but people still remember that he said it.  You can try to brainwash them, too, and limit their choice of language to eliminate the possibility of unsanctioned thoughts.  You can destroy the individual who rebels against such efforts.

But it still doesn’t work.  The facts, warts and all, are still there, even when their continued existence is subjectively embarrassing to an individual.  Believe me, I wish sometimes it were otherwise.  I would very much like to “rectify” high school, or my parents, or the recent death of my beloved dog.  The truth often hurts.

But burning all the libraries and erasing all the bits in the world doesn’t change the facts.  It just makes them harder to access.  And that makes it harder to learn anything from them.

Maybe the European Commission was just being sloppy in its choice of words.  Perhaps it has something much more limited in mind for a “right to be forgotten.”  Or perhaps as it begins the ugly process of writing actual directives that must then be implemented in law by member countries, it will see both the impossibility and danger of going down this path.

Perhaps they’ll then pretend they never actually promised to “clarify” such a right in the first place.

But we’ll all know that they did.  For whatever it’s worth.

Meditations in a Privacy Emergency

Emotions ran high at this week’s Privacy Identity and Innovation conference in Seattle.  They usually do when the topic of privacy and technology is raised, and to me that was the real take-away from the event.

As expected, the organizers did an excellent job providing attendees with provocative panels, presentations and keynote talks—in particular a standout presentation from my former UC Berkeley colleague Marc Davis, who has just joined Microsoft.

There were smart ideas from several entrepreneurs working on privacy-related startups, and deep thinking from academics, lawyers and policy analysts.

There were deep dives into new products from Intel, European history and the metaphysics of identity.

But what interested me most was just how emotional everyone gets at the mere mention of private information, or what is known in the legal trade as “personally identifiable” information.  People get worked up just thinking about how it is being generated, collected, distributed and monetized as part of the evolution of digital life.  And pointing out that someone is having an emotional reaction often generates one that is even more primal.

Privacy, like the related problems of copyright, security, and net neutrality, is often seen as a binary issue.  Either you believe governments and corporations are evil entities determined to strip citizens and consumers of all human dignity or you think, as leading tech CEOs have the unfortunate habit of repeating, that privacy is long gone, get over it.

But many of the individual problems that come up are much more subtle than that.  Think of Google Street View, which has generated investigations and litigation around the world, particularly in Germany where, as Jeff Jarvis pointed out, Germans think nothing of naked co-ed saunas.

Or how about targeted or personalized or, depending on your conclusion about it, “behavioral” advertising?  Without it, whether on broadcast TV or the web, we don’t get great free content.  And besides, the more targeted advertising is, the less we have to look at ads for stuff we aren’t the least bit interested in and the more likely that an ad isn’t just an annoyance but is actually helpful.

On the other hand, ads that suggest products and services I might specifically be interested in are “creepy.”  (I find them creepy, but I expect I’ll get used to it, especially when they work.)

And what about governments?  Governments shouldn’t be spying on their citizens, but at the same time we’re furious when bad guys aren’t immediately caught using every ounce of surveillance technology in the arsenal.

Search engines, mobile phone carriers and others are berated for retaining data (most of it not even linked to individuals, or at least not directly) and at the same time are required to retain it for law enforcement purposes.  The only difference is the proposed use of the information (spying vs. public safety), which can only be known after data collection.

As comments from Jeff Jarvis and Andrew Keen in particular got the audience riled up, I found myself having an increasingly familiar but strange response.  The more contentious and emotional the discussion became, the more I found myself agreeing with everything everyone was saying, including those who appeared to be violently disagreeing.

We should divulge absolutely everything about ourselves!  No one should have any information about us without our permission, which governments should oversee because we’re too stupid to know when not to give it!  We need regulators to protect us from corporations; we need civil rights to protect us from regulators.

Logical Systems and Non-Rational Responses

I can think of at least two important explanations for this paradox.  The first is a mismatch of thought systems.  Conferences, panel discussions, essays and regulation are all premised on rational thinking, logic, and reason.  But the more the subject of these conversations turns to information that describes our behavior, our thoughts, and our preferences, the more the natural response is not rational but emotional.

Try having a logical conversation with an infant—or a dog, or a significant other who is upset—about their immediate needs.  Try convincing someone that their religion is wrong.  Try reasoning your way out of or into a sexual preference.  It just doesn’t work.

Which raises at least one interesting problem.  Privacy is not only an emotional subject, it’s also increasingly a profitable one.  According to a recent Wall Street Journal article, venture capitalists are now pouring millions into privacy-related startups.  Intel just offered $8 billion for security service provider McAfee.  Every time Facebook blinks, the blogosphere lights up.

So the mismatch of thought systems will lead to more, not fewer, collisions all the time.

Given that, how does a company develop a strategic plan in the face of unpredictable and emotional response from potential users, the media, and regulators?  Strategic planning, to the extent anyone really does it seriously, is based on cold, hard facts—as far from emotion as its practitioners can possibly get.  The patron saint of management science, after all, is Frederick Winslow Taylor who, among other things, invented time-and-motion studies to achieve maximum efficiency of human “machines.”

But the rational vehicle of planning simply crumples against the brick wall of emotion.

As I wrote in an early chapter of “The Laws of Disruption,” for example, companies experimenting with early prototypes of radio frequency ID tags (still not ready for mass deployment ten years later) could never have predicted the violent protests that accompanied tests of the tags in warehouses and factories.

Much of that protest was led by a woman who believes that RFID tags are literally the technology prophesied by the Book of Revelation as the sign of the Antichrist.  Assuming one is not an agent of the devil, or in any case isn’t aware that one is, how do you plan for that response?

The more that intimacy becomes a feature of products and services, including products and services aimed at managing intimate information, the more the logical religion of management science will need to incorporate non-rational approaches to management, scenario planning and economics.

It won’t be easy—the science of management science isn’t very scientific in the first place and, as I just said, changing someone’s religion doesn’t happen through rational arguments—the kind I’m making right now.

The Bankruptcy of the Property Metaphor for Information

The second problem that kept hitting me over the head during PII 2010 was one of linguistics.  Which is:  the language everyone uses to talk about (or around) privacy.  We speak of ownership, stealing, tracking, hijacking, and controlling.  This is the language of personal property, and it’s an even worse fit for the privacy conversation than is the mental discipline of logic.

In discussions about information of any kind, including creative works as well as privacy and security, the prevailing metaphor is to talk about information as a kind of possession.  What kind?  That’s part of the problem.  Given the youth of digital life and the early evolution of our information economy, most of us really only understand one kind of property, and that is where our minds inevitably and often unintentionally go.

We think of property as the moveable, tangible variety—cattle, collectibles, commodities—that in legal terminology goes by the name “chattels.”

Only now has that metaphor become a serious obstacle.  While there has been a market for information for centuries, the revolutionary feature of digital life is that it has, for the first time in human history, separated information from the physical containers in which it has traditionally been encapsulated, packaged, transported, retailed, and consumed.

A book is not the ideas in the book, but a book can be bought, sold, controlled, and destroyed.  A computer tape containing credit card transactions is not the decision-making process of the buyers and sellers of those transactions, but a tape can be lost, stolen, or sold.

When information could only be used by first reducing it to physical artifacts, the property metaphor more-or-less worked.  Control the means of production, and you controlled the flow of information.  When Gutenberg perfected movable type, the first thing he printed was the Bible, in Latin.  Hand-made manuscripts and a dead language had given the medieval Catholic Church a monopoly on the mystical.  Turn the means of production over to the people, as printing soon did with vernacular translations like Luther’s German Bible, and you get the Protestant Reformation and the beginning of censorship, a legal control on information.

The digital revolution makes the liberation of information all the more potent.  Yet in all conversations about information value, most of us move seamlessly and dangerously between the medium—the artifact—and the message—the information.

But now that information can be used in a variety of productive and destructive ways without ever taking a tangible form, the property metaphor has become bankrupt.  Information is not property the way a barrel of oil is property.  The barrel of oil can only be possessed by one person at a time.  It can be converted, but only once, into lubricants or gasoline, or left in its crude form.  Once the oil is burned, the property is gone.  In the meantime, the barrel of oil can be stolen, tracked, and moved from one jurisdiction to another.

Digital information isn’t like that.  Everyone can use it at the same time.  It exists everywhere and nowhere.  Once it’s used, it’s still there, and often more valuable for having been used.  It can be remixed, modified, and adapted in ways that create new uses, even as the original information remains intact and usable in the original form.

Tangible property obeys the law of supply and demand, as does information forced into tangible containers.  But information set free from the mortal coil obeys only the law of networks, where value is a function of use and not of scarcity.

But once the privacy conversation (as well as the copyright conversation) enters the realm of the property metaphor, the cognitive dissonance of thinking everyone is right (or wrong) begins.  Are users of copyrighted content “pirates”?  Or are copyright holders “hoarders”?  Yes.

(“Intellectual property,” as I’ve come to accept, is an oxymoron.  That’s hard for an IP lawyer to admit!)

It’s true that there are other kinds of property that might better fit our emerging information markets.  Real estate (land) is tangible but immovable.   Use rights (e.g., a ticket to a movie theater, the right to drill under someone’s land or to block their view) are also long established.

But both the legal framework and the economic theory describing these kinds of property are underdeveloped at the very least.  Convincing everyone to shift their property paradigm would be hard when the new location is so barren.

Here are a few examples of the problem from the conference.  What term would make consumers most comfortable with a product that helps them protect their privacy, one speaker asked the audience.  Do we prefer “bank,” “vault,” “dossier,” “account” etc.?

“Shouldn’t consumers own their own information?” an attendee asked, a double misuse of the word “own.”   Do you mean the media on which information may be stored or transferred, or do you mean the inherent value of the bits (which is nothing)?  In what sense is information that describes characteristics or behaviors of an individual that person’s “own” information?

And what does it mean to “own” that information?  Does ownership bring with it the related concepts of being bought, sold, transferred, shared, waived?  What about information that is created by combining information—whether we are talking about Wikipedia or targeted advertising?  Does everyone or no one own it?

And by ownership, do we mean the rights to derive all value from it, even when what makes information valuable is the combining, processing, analyzing and repurposing done by others?  Doesn’t that part of the value generation count for something in divvying up the monetization of the resulting information products and services?  Or perhaps everything?

Human beings need metaphors to discuss intangible concepts like immortality, depression, and information.  But increasingly I believe that the property metaphor applied to information is doing more harm than good.  It makes every conversation about privacy a conversation of generalizations, and generalizations encourage the visceral responses that make it impossible to make any progress.

Perhaps that’s why survey after survey reveals both that consumers care very much about the erosion of a zone of privacy in their increasingly digital lives and, at the same time, give up intimate information the moment a website asks them for it.  (I agree with everything and its opposite.)

There’s also a more insidious use of language and metaphor to steer the conversation toward one view of property or another—privacy as personal property or privacy as community property.  Consider, for example, how the question is asked, e.g.:

“My cell phone tracks where I go”

or

“My cell phone can tell me where I am.”

A recent series of articles in The Wall Street Journal dealing with privacy (I won’t bother linking to it, because the Journal believes the information in those articles is private property and won’t share it unless you pay for a subscription, but here is a “free” transcript of a conversation with the author of the articles on NPR’s “Fresh Air”) made many factual errors in describing current practices in on-line advertising.  But those aside, what made the articles sensational was not so much what they reported but the adjectives and pronouns that went with the facts.

Companies know a lot “about you,” for example, from your web surfing habits (in fact they know nothing about “you,” but rather about your computer, whoever may be using it), cookies are a kind of “surveillance technology” that “track” where “you” go and what “you do,” and often “spawn” themselves without “your” knowledge.

Assumptions about the meaning of loaded terms such as ownership, identity and what it means for information to be private poison the conversation.  But anyone raising that point is immediately accused of shilling for corporations or law enforcement agencies who don’t want the conversation to happen at all.

A User and Use-based Model – Productive and Destructive Uses

So if the property metaphor is failing to advance an important conversation—both of a business and policy nature—what metaphor works better?

As I wrote in “Laws of Disruption,” I think a better way to talk about information as an economic good is to focus on information users and information uses.  “Private” information, for starters, is private only depending on the potential user.  Whether it is our spouse, employer, an advertiser or a law-enforcement agent, in other words, can make all the difference in the world as to whether we consider some information private or not.  Context is nearly everything.

Example:  Is location tracking software on cell phones or embedded chips an invasion of privacy?  It is if a government agency is intercepting the signals, and using them to (fill in the blank).  But ask a parent who is trying to find a missing child, or an adult child trying to find a missing parent with dementia.  It’s not the technology; it’s the user and the use.

Use, likewise, often empties much of the emotional baggage that goes with conversations about privacy in the abstract.  A website asks for my credit card number—is that an invasion of my privacy?  Not if I’m trying to pay for my new television set from Amazon.  On the other hand, if I’m signing up for an email newsletter that is free, there’s certainly something suspicious about the question.

To simplify a long discussion, I prefer to talk about information of all varieties through a lens of “productive” (uses that add value to information, e.g., collaboration) and “destructive” (uses that reduce the value of information, e.g., “identity” “theft”).  Though it may not be a perfect metaphor (many uses can be both productive and destructive, and the metrics for weighing both are undeveloped at best), I find it works much better in conversations about the business and policy of information.

That is, assuming one isn’t simply in the mood to vent and rant, which can also be fun, if not productive.