Category Archives: Information Economics

Europe Reinvents the Memory Hole

Inspired by thoughtful pieces by Mike Masnick on Techdirt and L. Gordon Crovitz’s column yesterday in The Wall Street Journal, I wrote a perspective piece this morning for CNET regarding the European Commission’s recently proposed “right to be forgotten.”

A Nov. 4th report promises new legislation next year “clarifying” this right under EU law, suggesting not only that the Commission thinks it’s a good idea but, even more surprising, that it already exists under the landmark 1995 Privacy Directive.

What is the “right to be forgotten”?  The report is cryptic and awkward on this important point, describing “the so-called ‘right to be forgotten’, i.e. the right of individuals to have their data no longer processed and deleted when they [that is, the data] are no longer needed for legitimate purposes.”

The devil, of course, will be in the forthcoming details.  But it’s important to understand that under current EU law, the phrase “their data” doesn’t just mean information a user supplies to a website, social network, or email host.  Any information that refers to or identifies an individual is considered private information under the control of the person to whom it refers.  So “their data” means anyone’s data, even if the individual identified had nothing to do with its collection or storage.

And EU law doesn’t just limit privacy protections to computer data. Users have the right to control information about them appearing in printed and other analog formats as well.

As I say in the piece, the “right to be forgotten” begins to sound like Big Brother’s “memory hole” in Orwell’s classic 1984.  But instead of Winston Smith “rectifying” newspaper articles at the direction of his faceless masters at the Ministry of Truth, a right to be forgotten creates a kind of personal memory hole.  Something you did in the past that you would prefer never happened?  Just issue orders to anyone who knows about it, and force them to destroy any evidence.

Of course such a right would be as impractical to enforce as it is ill-conceived to grant.

Both Masnick and Crovitz, in particular, worry about the free speech implications of such a right, both for the press and for individuals.  And those are indeed potentially catastrophic.  Having the power to rewrite history devalues any information, including information that hasn’t been erased.

The social contract operates on facts and the ability to sort out truth from lie.  A right to be forgotten gives every individual the power to rewrite that contract whenever they feel like it.  So who would sensibly enter into such a relationship in the first place?

My concern, however, is even more metaphysical.  The privacy debate currently going on in public policy circles is disturbing, perhaps most of all because it is being framed as a policy discussion.  Rather than work out what costs and benefits we get from increased information sharing with each other, those who are feeling anxious about the pace of change in digital life are running, as anxious people often do, to regulators, demanding they do something—anything—to alleviate their future shock.  And regulators, who are pretty anxious people themselves, are too often happy to oblige, even when they understand neither the technology nor the implications of their lawmaking.

Beyond the worst possible choice of forum to begin a conversation, the privacy debate in its current form is no debate at all.  It is mostly a bunch of emotional people hurling rhetorical platitudes at each other, trading the worst-case examples of the deadly potential of privacy invasions (teen suicides, evil corporations) with fear-inspiring claims of the risk of keeping information secret (terrorists win).

It can hardly be a debate when the two “sides” are talking about entirely different subjects, and when no one’s really listening anyway.  All that is happening is that the stress level amps up, and those not participating in the discussion get the distinct impression that the world is about to end.

A starting point for a real conversation about privacy—one that is dangerously absent from any of the current lawmaking efforts—is an understanding about the nature of information.  Privacy in general and a right to be forgotten specifically begin with the false assumption that information (private or otherwise) is a kind of property, a discrete, physical item that can be controlled, owned, traded, used up, and destroyed.  (Both “sides” have fallen into this trap, and can’t seem to get out.)

The fight often breaks down into questions of entitlement—who initially owns the information that refers to me?  The person who found it and translated it into a form that could be accessed by others, or the person to whom it refers, regardless of source?  Under what conditions can it be transferred?  Does the individual maintain a universal and inalienable right of rescission—the ability to take it back later, for any reason, and without compensating the person who now has it?

But these are the wrong questions to be asking in the first place.  Information isn’t property, at least not as understood by our industrial-age legal system or popular metaphors of ownership.  Information, from an economic standpoint, is a virtual good.  It can be “possessed” and used by everyone at the same time.  It can become more valuable in being combined with other information.  It can maintain or improve its value forever.

And, whether the law says so or not, it can’t be repossessed, put back in the safety deposit box, buried at sea, or “devoured by the flames” like the old newspaper articles Winston Smith rewrites when the truth turns out to be inconvenient to the past.  That of course was Orwell’s point.  You can send down the memory hole the newspaper that reported Big Brother’s promise of increased chocolate rations, but people still remember that he said it.  You can try to brainwash them, too, and limit their choice of language to eliminate the possibility of unsanctioned thoughts.  You can destroy the individual who rebels against such efforts.

But it still doesn’t work.  The facts, warts and all, are still there, even when their continued existence is subjectively embarrassing to an individual.  Believe me, I wish sometimes it were otherwise.  I would very much like to “rectify” high school, or my parents, or the recent death of my beloved dog.  The truth often hurts.

But burning all the libraries and erasing all the bits in the world doesn’t change the facts.  It just makes them harder to access.  And that makes it harder to learn anything from them.

Maybe the European Commission was just being sloppy in its choice of words.  Perhaps it has something much more limited in mind for a “right to be forgotten.”  Or perhaps as it begins the ugly process of writing actual directives that must then be implemented in law by member countries, it will see both the impossibility and danger of going down this path.

Perhaps they’ll then pretend they never actually promised to “clarify” such a right in the first place.

But we’ll all know that they did.  For whatever it’s worth.

Violent Video Games and the First Amendment: The Supreme Court Decides

Today, the U.S. Supreme Court will hear arguments in Schwarzenegger v. EMA, a case that challenges California’s 2005 law banning the sale of “violent” video games to minors.  The law has yet to take effect, as rulings by lower federal courts have found the law to be an unconstitutional violation of the First Amendment.

There’s little doubt that banning the sale of nearly any content to adults violates the protections of Free Speech, including, as decided last year, video depictions of cruelty to animals.

But over the years the Court has ruled that minors do not stand equal to adults when it comes to the First Amendment.  The Court has upheld restrictions on the speech of students in and out of the classroom, for example, in the interest of preserving order in public schools.

And in the famous Pacifica case, the Court upheld fines levied against a radio station for airing the famous George Carlin monologue that, not-so-ironically, satirizes the FCC for banning seven particular words from being uttered over the public airwaves.

The basis for that decision was that children could be negatively influenced by hearing such language.  And children had easy access to radio and TV, while parents had no effective way to keep particular broadcasts out of the house.

In today’s argument, California’s legal arguments center largely on another case, the Supreme Court’s 1968 decision in Ginsberg.    There, the Court upheld state restrictions on the sale of pornography to minors, even though the material was protected speech for adult purchasers.

In Schwarzenegger v. EMA, California is urging the Court to extend Ginsberg’s reasoning to include content that meets its definition of violent video games.  The statute defines “violent video games” as those “in which the range of options available to a player includes killing, maiming, dismembering, or sexually assaulting an image of a human being, if those acts are depicted” in a manner that “[a] reasonable person, considering the game as a whole, would find appeals to a deviant or morbid interest of minors,” that is “patently offensive to prevailing standards in the community as to what is suitable for minors,” and that “causes the game, as a whole, to lack serious literary, artistic, political, or scientific value for minors.”

Ginsberg, the state argues in its brief, upheld a ban on the sale of sexual content to minors because such content is dangerous to their development.  So too, they argue here, with violent video games.  (Parents and other adults, of course, could still buy the games for minors if the statute were to go into effect.)

Indeed, the state argues that such material has as much of a negative impact on the development of children as sexual material does, if not more.

That, of course, is a question open to considerable debate.  In support, the state cites a number of academic studies that find a correlation between violent video game exposure (including games, such as Super Mario Brothers, well outside the California definition) and anti-social behavior.  But, as excellent reply briefs from the Entertainment Merchants Association and a joint brief from the Electronic Frontier Foundation and the Progress and Freedom Foundation point out, the methodology in these studies has been roundly criticized.

Moreover, California doesn’t seem to understand that the statistical significance of a correlation does not necessarily translate to real-world behavior—correlation is not the same as causation, no matter how strong the statistics.  And even the authors of the studies most relied on by the state recognize that it isn’t clear in which direction the correlation moves—are children who play violent video games more likely to have violent thoughts because they played the game, or are pre-existing violent thoughts what attracts them to the games?

Why Video Games?  Why Now?

The Court may focus on those studies in its decision, but I have a different question.  Why are California and other states picking on video games, and why now?  That, to me, is the more interesting problem, one that gets little attention in the briefs and, I would guess, in the Court’s eventual decision.

Perhaps the why is obvious:  as EMA’s brief points out, similar attacks have accompanied the rise in popularity of every new form of media to emerge throughout U.S. history.

The California statute … is the latest in a long history of overreactions to new expressive media. In the past, comic books, true-crime novels, movies, rock music, and other new media have all been accused of harming our youth. In each case, the perceived threat later proved unfounded. Video games are no different.

The EFF/PFF brief goes farther, accusing California legislators of succumbing to “moral panic, as lawmakers have so often done when confronted with the media of a new generation.”

Examples as varied as the Greek classics, the Bible, the Brothers Grimm and Star Wars all suggest, EMA points out, that extreme, even gruesome, violence has always been a favorite subject of literature, often aimed specifically at children.  As federal appellate judge Richard A. Posner wrote in rejecting a similar Indiana law, “Self defense, protection of others, dread of the ‘undead,’ fighting against overwhelming odds—these are all age-old themes of literature, and ones particularly appealing to the young.”

But why now?  The answer is, not surprisingly, Moore’s Law.  Laws regulating the content or distribution of video games are a classic example of the conflict I described in The Laws of Disruption.

As technology has made video game graphics more realistic and lifelike, they have captured the attention—and stoked the nightmares—of regulators in the real world who equate what they see on the screen with behaviors that would clearly violate laws and norms of the real world.  They don’t like what they see in games such as Grand Theft Auto and Resident Evil, and their impulse is to find a way, somehow, to stop it, even if it’s only a simulation.

It was not that long ago—in my lifetime, in any case—that video games were still in their Neolithic Era.  Consider Pong, the first home video game from Atari in 1975.  It would take an imagination greater than mine to see the batting of a block of monochrome pixels by a bar of pixels as violent enough to corrupt youth; likewise the breaking of a wall of pixels, one at a time, in the follow-on game Breakout.

But a few years later, consider the commercial (courtesy of YouTube) for Activision’s ice hockey game.

The game promises to be one of the “roughest” video games ever, “battling for the puck” with “fierce body checking” and “ruthless tripping.”  Just watching the players fight it out drives a meek-looking Phil Hartman into a frenzy; within a few seconds he seems ready to attack the clerk who teases him that he’s not yet ready for it.

But despite an ad that explicitly suggests a connection between playing (or even watching the game) and becoming violent, the actual graphical quality of the violence is so disconnected from visual reality that it never occurred to any state legislature to ban or otherwise restrict it.

Now fast-forward just a few short decades to the imminent release of the Xbox 360’s Kinect and one of the games that takes advantage of it, Kinectimals.

Using Microsoft’s new sensor technology, realistically-rendered animals can be controlled simply by issuing voice commands or by mimicking the desired movements by standing in front of the images.  It hardly seems possible that the same beings who invented Pong could have advanced to Kinectimals within the span of one human lifetime.  But we did.

Coupled with new 3D technology and increasingly large, high-fidelity displays, video games have in the course of only a few decades and a few cycles of Moore’s Law, advanced to the point of challenging the cinematic qualities of movies.  Indeed, games and films are converging, and now use much of the same technology to produce and to display.  A new sub-genre of user-produced content involves taking the cinematic interludes within the games and using them to produce original films.  After all, video game users today not only control game play but also lighting, camera angles, and point of view.

Why not?  As Nicholas Negroponte would say, bits are bits.

So now that video games offer fidelity in imagery and movement that is comparable to film, the law has awakened to both their positive and negative impacts on those who interact with them.  Since the First Amendment clearly doesn’t allow interference with the sale of violent content to adults, California focused on children.  But it’s clear from the tone of the state’s brief that they just plain don’t like certain video games, just as they didn’t like certain movies and certain books in an earlier age of mass-market technologies.  As before, they would like, if they could, to turn the clock back.

Of course that is always the response of the law to new technologies that challenge our conceptions of reality.  The only difference between the comic book burnings of the 1950s and the emotional responses of legislators today is the speed with which those new technologies are arriving.  The killer apps come faster all the time.  And with them, the counter-revolutionaries.

Frozen in Time, Lost in Relevance

Which is why the California statute suffers from another common and fatal flaw of laws attempting to hold back new technologies:  early obsolescence.  Even if the Supreme Court upholds the law, its effect will be minimal at best.

Why?  Lost in the legal arguments (and reduced to a mere footnote in the EMA brief) is the impending anachronism of the California statute.  It assumes a world, disappearing almost as quickly as it arrived, in which video games are imported into California as physical media in packages, and sold in retail stores.

Consider, for example, Section 1746.2:

Each violent video game that is imported into or distributed in California for retail sale shall be labeled with a solid white “18” outlined in black. The “18” shall have dimensions of no less than 2 inches by 2 inches. The “18” shall be displayed on the front face of the video game package.

But sales of video games in media form are rapidly declining as broadband connections make it possible for game developers and platform manufacturers to transport the software over the Internet.  So even if the law is ruled constitutional, it will apply to an ever-shrinking portion of the video game market.  There will soon be no “retail sale” and no “front face” of a “package” onto which to put a label in the first place.

These industry changes, of course, aren’t being made to evade laws like California’s.  Digital distribution reduces costs and eliminates middlemen who add little or no value (the retailers, the packagers, the truckers).  More to the point, it allows companies to establish ongoing relationships with their customers, relationships that can be leveraged to sell add-on chapters and levels, online play, and related products and content, including films.

The industry, in other words, is not only evolving in terms of sophistication and realism of the product.  The same technologies are also scrambling its supply chain.  And what is emerging as the new model for “games” is something in which California and other states have almost no regulatory interest.

So it seems an odd time to target legislation at a particular and disappearing version of the industry’s content and retail channels.  Even if the Court upholds the California law, it will likely have little impact on the material at which it is aimed.

But that’s often the case with laws trying to manage the unpleasant social side effects of new technologies just as they become visible to the outside world.  The pace of legal change can’t hope to keep up with the pace of technological change, making this law, like many others, out-of-date even before the ink is dry.

Which is not to say that the Supreme Court’s decision in this case won’t matter.  Another feature of statutes like this, unfortunately, is a high likelihood of unintended consequences.  The opportunities for the Court’s decision—pro or con—to do mischief in the future, to unrelated industries and dissimilar content, are legion.

For example?  As the EFF/PFF brief points out, California and other states may try to extend the ban on sales to minors to online channels.  But it isn’t as easy to determine the age of an online buyer as it is of someone standing in your brick-and-mortar store.  “Applying the law online would likely require mandatory age verification of all online gamers because the law prohibits any sale or rental to a minor,” EFF/PFF argues, “even if the vendor had no evidence that the buyer was a minor.”
That feature of an earlier federal effort to control pornography online was the undoing of the statute.

But in the Supreme Court, and the lower courts who interpret its decisions, anything can happen, and usually does.

Fox-Cablevision and The Net Neutrality Hammer

When the only thing you have is a hammer, as the old cliché goes, everything looks like a nail.

Net neutrality, as I first wrote in 2006, is a complicated issue at the accident-prone intersection of technology and policy.  But some of its most determined—one might say desperate—proponents are increasingly anxious to simplify the problem into political slogans with no melody and sound bites with no nutritional value.  Even as—perhaps precisely because—a “win-win-win” compromise seems imminent, the rhetorical excess is being amplified.  The feedback is deafening.

In one of the most bizarre efforts yet to make everything about net neutrality, Public Knowledge issued several statements this week “condemning” Fox’s decision to prohibit access to its online programming from Cablevision internet users.  In doing so, the organization claims, Fox has committed “the grossest violations of the open Internet committed by a U.S. company.”

This despite the fact that the open Internet rules (pick whatever version you like) apply only to Internet access providers.  Indeed, the rules are understood principally as a protection for content providers.  You know, like Fox.

OK, let’s see how we got here.

The Fox-Cablevision Dispute

In response to a fee dispute between the two companies, Fox on Saturday pulled its programming from the Cablevision system and blocked Cablevision internet users from accessing Fox programming online.  Separately, Hulu.com (minority-owned by Fox) enforced a similar restriction, hoping to stay “neutral” in the dispute.  Despite the fact that “The Simpsons” and “Family Guy” weren’t even on this weekend (pre-empted by some sports-related programming, I guess), the viewing public was incensed, journalists wrote, and Congress expressed alarm.  The blackout, at least on cable, persists.

A wide range of commentators, including Free State Foundation’s Randolph May, view the spat as further evidence of the obsolescence of the existing cable television regulatory regime.  Among other oddities left over from the days when cable was the “community antenna” for areas that couldn’t get over-the-air signals, cable providers are required to carry local programming without offering any competing content.   But local providers are not obliged to make their content available to the cable operator, or even to negotiate.

As cable technology has flourished in both content and services, the relationship between providers and content producers has mutated into something strange and often unpleasant.  Just today, Sen. John Kerry sent draft legislation to the FCC aimed at plugging some of the holes in the dike.  That, however, is a subject for another day.

Somehow, though, Public Knowledge sees the Fox-Cablevision dispute as a failure of net neutrality.  In one post, the organization “condemns” Fox for blocking Internet access to its content.  “Blocking Web sites,” according to the press release, “is totally out of bounds in a dispute like this.”  Another release called out Fox, which was said to have “committed what should be considered one of the grossest violations of the open Internet committed by a U.S. company.”

The Open Internet means everything and nothing

What “open Internet” are they talking about?  The one I’m familiar with, and the one that I thought was at the center of years of debate over federal policy, is one in which anyone who wants to can put up a website, register their domain name, and then can be located and viewed by anyone with an Internet connection.

In the long-running net neutrality debate, the principal straw man involves the potential (it’s never happened so far) for Internet access providers, especially large ones serving customers nationally, to make side deals with the operators of some websites (Google, Amazon, Microsoft, Yahoo, eBay, perhaps) to manipulate Internet traffic at the last mile on their behalf.

Perhaps for a fee, in some alternate future, Microsoft would pay extra to have search results from Bing given priority, making it look “faster” than Google.  That would encourage Google to strike a similar deal and, before you know it, only the largest content providers would appear to be worth visiting.

That would effectively end the model of the web that has worked so well, where anyone with a computer can be a publisher, and the best material has the potential to rise to the top.  Where even entrepreneurs without a garage can launch a product or service on a shoestring and, if enough users like it, catapult themselves into being the next Google, eBay, Facebook or Twitter.

What does any of this have to do with Fox’s activities over the weekend?

As Public Knowledge sees it, any interference with web content is a violation of the open Internet, even if that interference is being done by the content provider itself! Fox has programming content on both its own site and on the Hulu website, content it places there, like every other site operator, on a voluntary basis.

But, having once made that content available for viewing, according to Public Knowledge, it should be a matter of federal law that they keep it there, and not limit access to it in any way for any consumer anywhere at any time.  It’s only consumers who have rights here:  “Consumers should not have their access to Web content threatened because a giant media company has a dispute over cable programming carriage.”  (emphasis added)

On this view, it’s not content owners who have rights (under copyright and otherwise) to determine how and when their content is accessed.  Rather, it is the consumer who has an unfettered right to access any content that happens to reside on any server with an Internet connection.  Here’s the directory to everything on my computer, dear readers.  Have at it.

The “Government’s Policy” Explained

Indeed, according to PK, this remarkable view of the law has long-since been embraced by the FCC.  “We need to remember that the government’s policy is that consumers should have access to lawful content online, and that policy should not be disrupted by a programming dispute.”

Here’s how Public Knowledge retcons that version of “the government’s policy.”

Until this spring, the 2005 Federal Communications Commission (FCC) policy statement held that Internet users had the right to access lawful content of their choice.  There was no exception in that policy for customers who happened to have their Internet provider caught up in a nasty retransmission battle with a broadcaster.

Said policy statement that was struck down [sic] on April 6 by the U.S. Appeals Court, D.C. Circuit, when Comcast challenged the enforcement of the policy against the company for blocking users of the BitTorrent [sic].

The policy statement was based on the assumption that if there were a bad actor in preventing the consumer from seeing online content, it would be an Internet Service Provider (ISP) blocking or otherwise inhibiting access to content.  In this case, of course, it’s the content provider that was doing the blocking.  It’s a moot point now, but it shouldn’t matter who is keeping consumers away from the lawful content. (emphasis added)

Where to begin?  For starters, the policy statement was not “struck down” in the Comcast case.  The court held (courts do that, by the way, not statements of policy) that the FCC failed to identify any provision of the Communications Act that gave them the power to enforce the policy statement against Comcast.

That is all the court held.  The court said nothing about the statement itself, and even left open the possibility that there were provisions that might work but which were not cited by the agency.  (The FCC chose not to ask for a rehearing of the decision, or to appeal it to the U.S. Supreme Court.)

Moreover, there is embedded here an almost willful misuse of the phrase “lawful content.”  Lawful content means any web content other than files that users want to share with each other without license from copyright holders, including commercial software, movies, music, and documents.  None of that activity (much of what BitTorrent is still used for, by the way—the source of the Comcast case in the first place) is “lawful.”  The FCC does not want to discourage ISPs from interfering with, blocking, and otherwise policing access to that unlawful content—and may indeed want to require them to do so.

Here, however, PK reads “lawful content” to mean content that the user has a lawful right to access, which, apparently, is all content—any file on any device connected to the Internet.

But “lawful content” does not somehow confer proprietary rights to consumers to access whatever content they like, whenever and however they like.  The owner of the content, the entity that made it available, can always decide, for any or no reason, to remove it or restrict it.   Lawful content isn’t a right for consumers—it just means something other than unlawful content.

Still, the more remarkable bit of linguistic judo is the last paragraph, in which the 2005 Open Internet policy statement becomes a policy limiting the behavior not of access providers but of absolutely everyone connected to the Internet.

The opposite is utterly clear from reading the policy statement, which addressed itself specifically to “providers of telecommunications for Internet access or Internet Protocol-enabled (IP-enabled) services.”

But that language, according to Public Knowledge, is just an “assumption.”  The FCC actually meant not just ISPs but anyone who can possibly interfere with what content a user can access, which is to say anyone with a website.  When it comes to consumer access to content, it “shouldn’t matter” that the content provider herself decides to limit access.  The content, after all, is “lawful,” and therefore no one can “[keep] consumers away” from it.

The nonsense of this mangling of completely clear language becomes even more apparent if you take it to the next logical step.  On PK’s view, all content that was ever part of the Internet is “lawful content,” and, under the 2005 policy statement, no one is allowed to keep consumers away from it, including, as here, the actual owners of the content.

So does that mean that having put up this website (I presume the content is “lawful”), I can’t at some future date take it down, or remove some of the posts?

Well, maybe the objection is just to selective limitation.  Having agreed to the social contract that comes with creating a website, I’ve agreed to an open principle (enforceable by federal law) that requires my making it freely and permanently available to anyone, anywhere, who wants to view it.  I can’t block users with certain IP addresses, whether that blocking is based on knowledge that those addresses belong to spammers, or to residents of a country with whom I am not legally permitted to do business, or, as here, to customers of a company with whom I am engaged in a dispute over content in another channel.

But of course selective limitation of content access is a feature of every website.  You know, like the kind that comes with requiring a user to register and sign in (eBay), or accept cookies that allow the site to customize the user’s experience (Yahoo!), or pay a subscription fee to access some or all of the information (The Wall Street Journal, The New York Times), or that requires a consumer see not just the “lawful content” they want but also, side-by-side, advertising or other information that helps pay for the creation and upkeep of the site (Google, everyone else).

Or that allows a user to view a file but not to copy and resell copies of it (streaming media).  Or that limits access or use of a web service by geography (banking, gambling and other protected industries).   Or that requires users to grant certain rights to the site provider to use information provided by the user (Facebook, Twitter) in exchange for use of the services.

Paradise Lost by the D.C. Circuit’s Comcast Decision

Or maybe just when Fox does it?

Under PK’s view of net neutrality, the Web is a consumer paradise, where content magically appears for purely altruistic reasons and stays forever to frolic and interact.  Fox can’t limit, even arbitrarily and capriciously, who can and cannot watch its programming on the Web.  It must make it freely available to everyone and anyone, or face condemnation by self-appointed consumer advocates who will, as prosecutor, judge and jury, convict them of having committed “the grossest violations” possible of the FCC’s open Internet policy.

That is, if only the law that PK believes represents longstanding “government policy” was still on the books.  For the real tragedy of the Fox-Cablevision dispute is that the FCC is now powerless to enforce that policy, and indeed, is powerless to stop even the “grossest violations.”

If only the D.C. Circuit hadn’t ruled against the FCC in the Comcast case, then the agency would, on this view, be able to stop Fox and Hulu from restricting access to Fox programming from Cablevision internet customers.  Or anyone else.  Ever.

That of course was never the law, and never will be.   More or less coincidentally, the FCC has limited jurisdiction over Fox as a broadcaster, but that jurisdiction does not extend to requiring Fox to make its programming available on the web, on demand, to everyone who wants to see it.

Fox aside, there is nothing in The Communications Act that could possibly be thought to extend the agency’s power to policing the behavior of all web content providers, which these days includes pretty much every single Internet user.

Nor did the Open Internet policy statement have anything to say about content providers, period.  If it had, it would have represented an ultra vires extension of the FCC’s powers that would have shamed even the most pro-regulatory cheerleader.  It would never have stood up to any legal challenge (First Amendment?  Fifth Amendment?  For starters…)

Not only does it matter but it certainly “should matter who is keeping consumers away from lawful content.”  When the “who” is the owner of the content itself, they have the right and almost certainly the need to restrict access to some or all consumers, now or in the future, without having to ask permission from the FCC.

And thank goodness.  An FCC with the power to order content providers to make content available to anyone and everyone, all the time and with no restrictions, would surely lead to a web with very little content in the first place.

Who would put any content online otherwise?  Government agencies?  Not-for-profits?  Non-U.S. users not subject to the FCC?   (But since their content would be available to U.S. consumers, who on the PK view have all the rights here, perhaps the FCC’s authority, pre-Comcast, extended to non-U.S. content providers, too.)

Not much of a web there.

No One Believes This—Including Public Knowledge

The wistful nostalgia for life before the Comcast decision is beyond misguided.  No proposal before or since would have changed the fundamental principle that open Internet rules apply to Internet access providers only.

Under the detailed net neutrality rules proposed by the FCC in 2009, for example, the Policy Statement would be extended and formalized, but would still apply only to “providers of broadband Internet access service.”  Likewise the Google-Verizon proposed legislative framework.  Likewise even the ill-advised proposal to reclassify broadband Internet access under Title II to give the FCC more authority—it’s still more authority only over access providers, not just anyone with an iPhone.

(Though perhaps PK is hanging its hopes on some worrisome language in the Title II Notice of Inquiry that might extend that authority, see “The Seven Deadly Sins of Title II Reclassification.”)

Public Knowledge has never actually proposed its own version of net neutrality legislation.  So I guess it’s possible that they’ve imagined all along that the rules would apply to content providers as well as ISPs.

Well, but the organization does have a “position” statement on net neutrality.  And guess what?  It doesn’t line up with their new-found understanding of the 2005 FCC Policy statement either.   Public Knowledge’s own position on net neutrality addresses itself solely to limits and restrictions on “network operators.”   (E.g., “Public Knowledge supports a neutral Internet where network operators may offer different levels of access at higher rates as long as that tier is offered on a nondiscriminatory basis to every other provider.”)

So apparently even Public Knowledge is among the sensible group in the net neutrality debate who reject the naïve and foolish idea that “it shouldn’t matter who is keeping consumers away from the lawful content.”

Did the rhetoric just get away from them over there, or are those who support Public Knowledge’s push for net neutrality really supporting something very different–different even from what the organization says it means by that phrase?  Something that would extend federal regulatory authority to every publisher of content on the web, including you?

I’m not sure which answer is more disturbing.

One Cheer for Patent Trolls

“On the whole, the results certainly seem to suggest that patent trolls with software patents do very much view the system as a lottery ticket, and they’re willing to use really weak patents to try to win that prize. That is not at all what the patent system is designed to do, but it’s how the incentives have been structured — and that seems like a pretty big problem that isn’t solved just by showing how many of these lawsuits fail. The amount of time and resources wasted on those lawsuits, as well as the number of companies who pay up without completing a lawsuit, suggest that there is still a major problem to be dealt with.”

So writes the always-thoughtful Mike Masnick at Techdirt.  He is referring here to a newly-published article by John R. Allison, Joshua Walker and Mark Lemley, released as a Stanford Law and Economics Olin Working Paper.  Mike has written frequently about patent trolls—companies that buy up patents from inventors and then make money by litigating or threatening to litigate against potential infringers—and never with much sympathy.

The Stanford Study

I have a less extreme view of patent trolls, about which more in a moment.  First, a few words about the study.

The authors of the Allison/Walker/Lemley paper, working with a couple of different databases of patents and the litigation involving them, ran a number of interesting regressions that revealed some counter-intuitive findings about the current state of patent lawsuits.

The study found that patents litigated most frequently—that is, whose holders bring lawsuits against multiple alleged infringers—are often the least likely to stand up in court.  “Once-litigated patents win in court almost 50% of the time,” the authors found, “while the most-litigated – and putatively most valuable – patents win in court only 10.7% of the time.”

Which is to say that when a patent lawsuit actually goes to trial (few do), the most frequently-asserted patents were nearly always found to be invalid in the first place.  Such patents should never have been granted by the Patent Office, because they are obvious or non-novel, or because they otherwise fail to meet the criteria for a patent.  (Invalidity of the patent is a complete defense to a claim of infringement.)

The worst offenders in the study are software patents (see “Bilski:  Justice Stevens’ Last Tilt at the IP Windmills”), which accounted for almost 94% of the most often asserted patents in the study and yet were upheld as valid less than 10% of the time they actually went to trial.

Yet in most cases these patents are asserted against multiple defendants, most of whom pay settlements to avoid the time, expense, and uncertainty of a trial.  That decision, the study suggests, is a mistake.  Defendants who take these cases all the way through trial usually win; that is, they pay nothing.

Well not exactly nothing.  Even a successful litigant must pay the costs of defending her case, and that cost can run into the millions.   (In some situations, the loser must pay the winner’s costs, but under the Patent Act, fee shifting only occurs in “exceptional” cases.)

As the authors note, “It appears that as a society, we are spending a disproportionate amount of time and money litigating a class of weak patents. Our results may also have implications for our models of patent value and of rational behavior in litigation, since it appears we know quite a bit less than we thought about what makes patents valuable.”

Toward a Modest Defense of Trolling

Masnick and others take this study as further evidence—if any was needed—that patent trolls are a drain on society offering absolutely nothing but headaches, interference with innovation, and enormous wastes of money, both from litigants and the taxpayers, who underwrite the court system.  Patent trolls or “Non-Practicing Entities” (NPEs) as the authors call them, win only 9.2% of their lawsuits that go to trial.  (Only about 10%, however, go to trial, and the terms of settlements are kept confidential by both sides.)  Clearly their patents, especially the ones they assert the most frequently, are junk.

(Why would the most frequently-asserted patents be the most likely to fail a validity challenge at trial?  The broader the patent, the easier to assert it against a wide range of potential infringers, and the more likely they will be, given the breadth, to settle.  But at trial the value of a broad claim shifts—what looks scary to a defendant for the same reasons looks most dubious to the trier-of-fact.  Claims that are too broad are rejected, precisely because they represent the grant of a monopoly over too much otherwise productive economic activity.)

As I wrote in “The Laws of Disruption,” I don’t have much sympathy for patent trolls, but I don’t go quite as far as their harshest critics.  Put another way, I’m not sure I share Masnick’s view that the findings of the study show “there is still a major problem to be dealt with,” or in any case that it ought to be dealt with by reforming trolls out of the system altogether.

(For a spirited defense of trolls, see this multi-part posting.  Unfortunately the author never gives his name!)

Why the hesitation?  Even if every patent troll is a low-life individual or entity, and even if nearly all of the patents they assert are ones the patent office should never have granted in the first place, there’s still a positive benefit to society from the existence of patent trolls.

To understand why, consider how a troll becomes a troll.

Patents are granted to inventors, and the intent in giving them a 20-year monopoly on the use of their invention is to provide a market biased in their favor.  They can either commercialize the invention themselves without fear of competition, or sell or license the invention to others to do the same.

Keeping competitors away, albeit for a limited time, gives the inventor the chance to recover their up-front investment in making the invention.  In some cases, inventors toil at their own expense for years before coming up with anything new (if ever), and even then the potential market for their invention may be small or non-existent.

Granting a patent goes against the otherwise free market orientation of capitalist economies, but is thought to be a necessary evil.  If inventors don’t believe they’ll have protected markets, they may not undertake the risk and cost of inventing.  And if they don’t, important inventions may be delayed or lost.  If that happens, everyone loses.  That, in any case, is the theory behind patents.

But a patent troll, by definition, has done no inventing and has no intention of commercializing the inventions they buy.  They simply sue or threaten to sue companies they believe are using the invention (intentionally or, more likely, unintentionally), extracting tribute in the form of forced licenses or other damages.

So what positive role do they play in the system?

Consider how a troll gets a patent in the first place.  In the simplest case, an inventor finds they cannot afford to commercialize their invention, or doesn’t have the risk or managerial profile necessary to try.  Perhaps they try to sell the invention to a company in the industry who can make use of it, or offer to license the invention to several such companies.  The inventor may be rebuffed or ignored or offered a price too low to keep her in the business of inventing.  Maybe the invention isn’t worth the investment already made, or maybe the company fails to evaluate its potential accurately or even at all.

Or maybe the company, knowing that the inventor lacks the resources not only to commercialize but also to protect her invention, takes the chance of ignoring the patent and continues to operate as before, even if that means infringing the patent.

Well, why not?  The inventor’s claim may be no good, or may not cover the company’s behavior.  But even if there is infringement, the road to proving it is long, expensive, and requires a skill set in litigation, negotiation and the substantive law of patents the inventor almost surely doesn’t have and, perhaps, can’t afford to engage.

So as a last resort, the inventor sells the patent to an NPE.  The NPE may buy up many patents, perhaps for related inventions, in the hopes that the combined pool includes at least some that are both valid and cover some unlicensed behavior in industry—or at least that a threat that they do will be credible.  They assert these patents against whatever defendants will most likely be induced to settle, balancing the potential settlements against the probability of incurring the costs of litigation, perhaps all the way to a trial.

As the Stanford paper suggests, in the vast majority of cases the authors studied the asserted patents were in fact junk, at least as determined at trial (judge and jury may have their own biases, of course).  The inventors shouldn’t have gotten anything for them, either from the defendants or from the patent troll, because the patent never should have been granted in the first place.  Again, the trolls may know better than the study suggests the real value of their holdings, and may be betting that the transaction costs of litigation will encourage defendants to settle anyway.

That bet is a game of chicken, for if the defendant chooses to litigate then both sides must absorb heavy litigation costs no matter who wins—the troll bets that the defendant will simply pay them to go away.
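The economics of that bet can be sketched with a back-of-the-envelope expected-value calculation.  Every figure below is invented for illustration (none comes from the study); only the roughly 10% validity rate echoes the win rates reported for the most-litigated patents:

```python
# Back-of-the-envelope model of the troll's "game of chicken."
# All numbers here are hypothetical, invented purely for illustration.

def expected_litigation_cost(p_valid, damages, defense_cost):
    """Expected cost to a defendant who fights all the way to judgment."""
    return p_valid * damages + defense_cost

p_valid = 0.10        # weak patent: ~10% chance of surviving trial,
                      # echoing the study's most-litigated patents
damages = 5_000_000   # hypothetical damages if the patent holds up
defense = 2_500_000   # litigation costs the defendant pays win or lose

cost_to_fight = expected_litigation_cost(p_valid, damages, defense)
settlement_demand = 2_000_000  # troll prices its demand below that cost

print(round(cost_to_fight))               # 3000000
print(settlement_demand < cost_to_fight)  # True: "rational" to settle
```

On these invented numbers, even a patent with a 90% chance of being invalidated extracts a settlement, which is exactly the arbitrage of litigation costs described above.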

Patent trolls may make most of their money, in other words, from arbitraging the inefficiencies and failings of the current patent system.

But even if this is so, there is still value to the system and to society from the existence (if not the individual behaviors) of patent trolls.  For without them, potential defendants have no incentive to deal with inventors who want to sell or license their inventions, even valid ones.  Absent patent trolls, companies would conclude the inventor can’t litigate regardless of the validity of the claim, a reality the inventor would always know.

Without the existence of patent trolls as a buyer of last resort, there’s no credible threat the inventor can make, and a rational defendant will simply carry on knowing the patent can’t be successfully enforced.  Knowing this set of facts, inventors at the margins may not undertake their research in the first place.

So even if every non-practicing entity is a troll, and even if every troll-asserted patent is garbage, the role in the system played by the existence of trolls is an important one.

Whether it justifies its cost today is another matter, one tied hopelessly to the other weaknesses and dysfunctions of the overall patent system.  Speaking generally, the authors conclude that “it is important to recognize that software patents and patents asserted by NPEs are both taking disproportionate resources in patent litigation, and that the social benefit from those cases appears to be slight.”  But they stop well short of calling for reforms that would eliminate the incentives that keep NPEs in the game.

That caution, for now, seems sensible.

The end of software ownership

My article for CNET this morning, “The end of software ownership…and why to smile,” looks at the important decision a few weeks ago in the Ninth Circuit copyright case, Vernor v. Autodesk.  (See also excellent blog posts on Eric Goldman’s blog. Unfortunately these posts didn’t run until after I’d finished the CNET piece.)

The CNET article took the provocative position that Vernor signals the eventual (perhaps imminent) end to the brief history of users “owning” “copies” of software that they “buy,” replacing the regime of ownership with one of rental.  And, perhaps more controversially still, I try to make the case that such a dramatic change is in fact not, as most commentators on the decision have concluded, a terrible loss for consumers but a liberating victory.

I’ll let the CNET article speak for itself.  Here I want to make a somewhat different point about the case, which is that the “ownership” regime was always an aberration, the result of an unfortunate need to rely on media to distribute code (until the Internet) coupled with a very bad decision back in 1976 to extend copyright protection to software in the first place.

The Vernor Decision, Briefly

First, a little background.

The Vernor decision, in brief, took a big step in an on-going move by the federal courts to allow licensing agreements to trump user rights reserved by the Copyright Act.  In the Vernor case, the most important of those rights was at issue:  the right to resell used copies.

Vernor, an eBay seller of general merchandise, had purchased four used copies of an older version of AutoCAD from a small architectural firm at an “office sale.”

The firm had agreed in the license agreement not to resell the software, and had reaffirmed that agreement when it upgraded its copies to a new version of the application.  Still, the firm sold the media of the old versions to Vernor, who in turn put them up for auction on eBay.

Autodesk tried repeatedly to cancel the auctions, until, when Vernor put the fourth copy up for sale, eBay temporarily suspended his account.  Vernor sued Autodesk, asking the court for a declaratory judgment (essentially a preemptive lawsuit) that as the lawful owner of a copy of AutoCAD, he had the right to resell it.

A lower court agreed with Vernor, but the Ninth Circuit reversed, and held that the so-called “First Sale Doctrine,” codified in the Copyright Act, did not apply because the architectural firm never bought a “copy” of the application.  Instead, the firm had only paid to use the software under a license from Autodesk, a license the firm had clearly violated.  Since the firm never owned the software, Vernor acquired no rights under copyright when he purchased the disks.

The Long Arm of Vernor?

This is an important decision, since all commercial software (and even open source and freeware software) is enabled by the producer only on condition of acceptance by the user of a license agreement.

These days, nearly all licenses purport to restrict the user’s ability to resell the software without permission from the producer.  (In the case of open source software under the GPL, users can redistribute the software so long as they pass along the same license terms, including the requirement that distributed modifications also be licensed under the GPL.)  Thus, if the Vernor decision stands, used markets for software will quickly disappear.

Moreover, as the article points out, there’s no reason to think the decision is restricted just to software.  The three-judge panel suggested that any product—or at least any information-based product—that comes with a license agreement is in fact licensed rather than sold.  Thus, books, movies, music and video games distributed electronically in software-like formats readable by computers and other devices are probably all within the reach of the decision.

Who knows?  Perhaps Vernor could be applied to physical products—books, toasters, cars—that are conveyed via license.  Maybe before long consumers won’t own anything anymore; they’ll just get to use things, like seats at a movie theater (the classic example of a license), subject to limits imposed—and even changed at will—by the licensor.  We’ll become a nation of renters, owning nothing.

Well, not so fast.  First of all, let’s note some institutional limits of the decision.  The Ninth Circuit’s ruling applies only within federal courts of the western states (including California and Washington, where this case originated).  Other circuits facing similar questions of interpretation may reach different or even opposite decisions.

Vernor may also appeal the decision to the full Ninth Circuit or even the U.S. Supreme Court, though in both cases the decision to reconsider would be at the discretion of the respective court.  (My strong intuition is that the Supreme Court would not take an appeal on this case.)

Also, as Eric Goldman notes, the Ninth Circuit already has two other First Sale Doctrine cases in the pipeline.  Other panels of the court may take a different or more limited view.

For example, the Vernor case deals with a license that was granted by a business (Autodesk) to another business (the architectural firm).  But courts are often hesitant to enforce onerous or especially one-sided terms of a contract (a license is a kind of contract) between a business and an individual consumer.  Consumers, more than businesses, are unlikely to be able to understand the terms of an agreement, let alone have any realistic expectation of negotiating over terms they don’t like.

Courts, including the Ninth Circuit, may decline to extend the ruling to other forms of electronic content, let alone to physical goods.

The Joy of Renting

So for now let’s take the decision on its face:  Software licensing agreements that say the user is only licensing the use of software rather than purchasing a copy are enforceable.  Such agreements require only a few “magic words” (to quote the Electronic Frontier Foundation’s derisive view of the opinion) to transform software buyers into software renters.  And it’s a safe bet that any existing End User Licensing Agreements (EULAs) that don’t already recite those magic words will be quickly revised to do so.

(Besides EFF, see scathing critiques of the Vernor decision at Techdirt and Wired.)

So.  You don’t own those copies of software that you thought you purchased.  You just rent them from the vendor, on terms offered on a take-it-or-leave-it basis and subject to revision at will.  All those disks sitting in all those cardboard albums on a shelf in your office are really the property of Microsoft, Intuit, Activision, and Adobe.  You don’t have to return them when the license expires, but you can’t transfer ownership of them to someone else, because you never owned them in the first place.

Well, so what?  Most of those boxes are utterly useless within a very short period of time, which is why there never has been an especially robust market for used software.  What real value is there to a copy of Windows 98, or last year’s TurboTax, or Photoshop Version 1.0?

Why does software get old so quickly, and why is old software worthless?  To answer those questions, I refer in the article to an important 2009 essay by Kevin Kelly.  Kelly, for one, thinks the prospect of renting rather than owning information content is not only wonderful but inevitable, and not because courts are being tricked into saying so.  (Kelly’s article says nothing about the legal aspects of ownership and renting.)

Renting is better for consumers, Kelly says, because ownership of information products introduces significant costs and absolutely no benefits to the consumer.  Once content is transformed into electronic formats, both the media (8-track) and the devices that play them (Betamax) grow quickly obsolete as technology improves under the neutral principle of Moore’s Law.  So if you own the media you have to store it, maintain it, catalog it and, pretty soon, replace it.  If you rent it, just as any tenant, those costs are borne by the landlord.

Consumers who own libraries of media find themselves regularly faced with the need to replace them with new media if they want to take advantage of the new features and functions of new media-interpreting devices.  You’re welcome to keep the 78’s that scratch and pop and hiss, but who really wants to?  Nostalgia only goes so far, and only for a unique subset of consumers.  Most of us like it when things get better, faster, smaller, and cheaper.

In the case of software, there’s the additional and rapid obsolescence of the code itself.  Operating systems have to be rewritten as the hardware improves and platforms proliferate.  Tax preparation software has to be replaced every year to keep up with the tax code.  Image manipulation software gets ever more sophisticated as display devices are radically improved.

Unlike a book or a piece of music, software is only written for the computer to “read” in the first place.  You can always read an old book, whether or not you prefer the convenience of a mass storage device such as a Kindle.  But you could never read the object code for AutoCAD even if you wanted to—the old version (which got old fast, and not just to encourage you to buy new versions) is just taking up space in your closet.

The Real Crime was Extending Copyright to Software in the First Place

Seen that way, it never made any sense to “own” “copies” of software in the first place.  That was only the distribution model for a short time, necessitated by an unfortunate technical limit of computer architecture that has nearly disappeared:  CPUs require machine-readable code to be moved into RAM in order to be executed.

But core memory was expensive.  Code came loaded on cheap tape, which was then copied to more expensive disks, which was then read into even more expensive memory.  In a perfect world with unlimited free memory, the computer would have come pre-loaded with everything.

That wouldn’t have solved the obsolescence problem, however.  The Internet did, by eliminating the need for physical media copies altogether.  Nearly all the software on my computer was downloaded—if I got a disk at all, it was just to initiate the download and installation.  (The user manual, the other component of the software album, is only on the disk or online these days.)

As we move from physical copies to downloaded software, vendors can more easily and more quickly issue new versions, patches, upgrades, and added functionality (new levels of video games, for example).

And, as we move from physical copies to virtual copies residing in the cloud, it becomes increasingly less weird to think that the thing we paid for—the thing that’s sitting right there, in our house or office—isn’t really ours at all, even though we paid for it, bagged it, transported it, and unwrapped it just as we do all the other commodities that we do own.

That’s why the Vernor decision, in the end, isn’t really all that revolutionary.  It just acknowledges in law what has already happened in the market.  We don’t buy software.  We pay for a service—whether by the month, or by the user, or by looking at ads, or by the amount of processing or storage or whatever we do with the service—and regardless of whether the software that implements the service runs on our computer or someone else’s, or, for that matter, everyone else’s.

The crime here, if there is one, isn’t that the courts are taking away the First Sale Doctrine.  It’s not, in other words, that one piece of copyright law no longer applies to software.  The crime is that copyright—any part of it—ever applied to software in the first place.  That’s what led to the culture of software “packages” and “suites” and “owning copies” that was never a good fit, and which now has become more trouble than it’s worth.

Remember that before the 1976 revisions to the Copyright Act, it was pretty clear that software wasn’t protected by copyright.  Until then, vendors (there were very few, and, of course, no consumer market) protected their source code either by delivering only object code and/or by holding users to the terms of contracts based on the law of trade secrets.

That regime worked just fine.  But vendors got greedy, and took the opportunity of the 1976 reforms to lobby for extension of copyright for source code.  Later, they got greedier, and chipped away at bans on applying patent law to software as well.

Not that copyright or patent protection really bought the vendors much.  Efforts to use it to protect the “look and feel” of user interfaces, as if they were novels that read too closely to an original work, fell flat.

Except when it came to stopping the wholesale reproduction and unauthorized sale of programs in other countries, copyright protection hasn’t been of much value to vendors.  And even then the real protection for software was and remains the rapid revision process driven by technological, rather than business or legal, change.

But the metaphor equating software with novels had unintended consequences.  With software protected by copyright, users—especially consumers—became accustomed to the language of copies and ownership and purchase, and to the protections of the law of sales, which applies to physical goods (books) and not to services (accounting).

So, if consumer advocates and legal scholars are enraged by the return to a purely contractual model for software use, in some sense the vendors have only themselves—or rather their predecessors—to blame.

But that doesn’t change the fact that software never fit the model of copyright, including the First Sale Doctrine.  Just because source code kind of sort of looked like it was written in a language readable by a very few humans, the infamous CONTU Committee, in its recommendations to Congress, made the leap to treating software as a work of authorship by (poor) analogy.

With the 1976 Copyright Act, the law treated software as if it were a novel, giving exclusive rights to its “authors” for a period of time that is absurd compared to the short economic lifespan of any piece of code written since the time of Charles Babbage and Ada Lovelace.

The farther away from a traditional “work of authorship” that software evolves (visual programming, object-oriented architecture, interpreted languages such as JavaScript), the more unfortunate that decision looks in retrospect.  Source code is just a convenience, making it easier to write and maintain programs.  But it doesn’t do anything.  It must be compiled or interpreted before the hardware will make a peep or move a pixel.
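That point can be seen in miniature with a toy sketch (the function name and figures are mine, purely illustrative): to the machine, source code is inert text until a translation step turns it into something that can actually do work.

```python
# Illustrative only: source code as inert text until translated.
source = "def area(r):\n    return 3.14159 * r * r"

# As a string, the "program" does nothing at all.  It must be
# compiled (here, by Python's own compiler) before any work happens.
namespace = {}
exec(compile(source, "<sketch>", "exec"), namespace)  # the translation step

print(namespace["area"](2))  # 12.56636
```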

Author John Hersey, one of the CONTU Committee members, got it just right.  In his dissent from the recommendation to extend copyright to software, Hersey wrote, “software utters work.  Work is its only utterance and its only purpose.”

Work doesn’t need the incentives and protections we have afforded to novels and songs.  And consumers can no more resell work than they can take home their seat from the movie theater after the show.