After the deluge, more deluge

If I ever had any hope of “keeping up” with developments in the regulation of information technology—or even the nine specific areas I explored in The Laws of Disruption—that hope was lost long ago.  The last few months I haven’t even been able to keep up with sorting the piles of printouts of stories I’ve “clipped” from a few key sources, including The New York Times, The Wall Street Journal, CNET News.com and The Washington Post.

I’ve just gone through a big pile of clippings that cover April-July.  A few highlights:  In May, YouTube surpassed 2 billion daily hits.  Today, Facebook announced it has more than 500,000,000 members.   Researchers last week demonstrated technology that draws device power from radio waves.

If the size of my stacks is any indication of activity level, the most contentious areas of legal debate are, not surprisingly, privacy (Facebook, Google, Twitter et al.), infrastructure (Net neutrality, Title II and the wireless spectrum crisis), copyright (the secret ACTA treaty, LimeWire, Viacom v. Google), free speech (China, Facebook “hate speech”), and cyberterrorism (Sen. Lieberman’s proposed legislation expanding executive powers).

There was relatively little development in other key topics, notably antitrust (Intel and the Federal Trade Commission appear close to resolution of the pending investigation; Comcast/NBC merger plodding along).  Cyberbullying, identity theft, spam, e-personation and other Internet crimes have also gone eerily, or at least relatively, quiet.

Where Are We?

There’s one thing that all of the high-volume topics have in common—they are all moving increasingly toward a single topic, and that is the appropriate balance between private and public control over the Internet ecosystem.  When I first started researching cyberlaw in the mid-1990’s, that was truly an academic question, one discussed by very few academics.

But in the interim, TCP/IP, with no central authority or corporate owner, has pursued a remarkable and relentless takeover of every other networking standard.  The Internet’s packet-switched architecture has grown from simple data file exchanges to email, the Web, voice, video, social networking and the increasingly hybrid forms of information exchange performed by consumers and businesses.

As its importance to both economic and personal growth has expanded, anxiety over how and by whom that architecture is managed has understandably developed in parallel.

(By the way, as Morgan Stanley analyst Mary Meeker pointed out this spring, consumer computing has overtaken business computing as the dominant use of information technology, with a trajectory certain to open a wider gap in the future.)

The locus of the infrastructure battle today, of course, is in the fundamental questions being asked about the very nature of digital life.  Is the network a piece of private property operated subject to the rules of the free market, the invisible hand, and a wondrous absence of transaction costs?  Or is it a fundamental element of modern citizenship, overseen by national governments following their most basic principles of governance and control?

At one level, that fight is visible in the machinations between governments (U.S. vs. E.U. vs. China, e.g.) over what rules apply to the digital lives of their citizens.  Is the First Amendment, as John Perry Barlow famously said, only a local ordinance in Cyberspace?  Do E.U. privacy rules, being the most expansive, become the default for global corporations?

At another level, the lines have been drawn even more sharply between public and private parties, and in side-battles within those camps.  Who gets to set U.S. telecom policy—the FCC or Congress, federal or state governments, public sector or private sector, access providers or content providers?  What does it really mean to say the network should be “nondiscriminatory,” or to treat all packets anonymously and equally, following a “neutrality” principle?

As individuals, are we consumers or citizens, and in either case how do we voice our view of how these problems should be resolved?  Through our elected representatives?  Voting with our wallets?  Through the media and consumer advocates?

Not to sound too dramatic, but there’s really no way to see these fights as anything less than a struggle for the soul of the Internet.  As its importance has grown, so have the stakes—and the immediacy—in establishing the first principles, the Constitution, and the scriptures that will define its governance structure, even as it continues its rapid evolution.

The Next Wave

Network architecture and regulation aside, the other big problems of the day are not as different as they seem.  Privacy, cybersecurity and copyright are all proxies in that larger struggle, and in some sense they are all looking at the same problem through a slightly different (but equally mis-focused) lens.  There’s a common thread and a common problem:  each of them represents a fight over information usage, access, storage, modification and removal.  And each of them is saddled with terminology and a legal framework developed during the Industrial Revolution.

As more activities of all possible varieties migrate online, for example, very different problems of information economics have converged under the unfortunate heading of “privacy,” a term loaded with 19th and 20th century baggage.

Security is just another view of the same problems.  And here too the debates (or worse) are rendered unintelligible by the application of frameworks developed for a physical world.  Cyberterror, digital warfare, online Pearl Harbor, viruses, Trojan Horses, attacks—the terminology of both sides assumes that information is a tangible asset, to be secured, protected, attacked, destroyed by adverse and identifiable combatants.

In some sense, those same problems are at the heart of struggles over whether to apply the architecture of copyright created during the Enlightenment, when information of necessity had to take physical form to be used widely.  Increasingly, governments and private parties with vested interests are looking to the ISPs and content hosts to act as the police force for so-called “intellectual property” such as copyrights, patents, and trademarks.  (Perhaps because it’s increasingly clear that national governments and their physical police forces are ineffectual or worse.)

Again, the issues are of information usage, access, storage, modification and removal, though the rhetoric adopts the unhelpful language of pirates and property.

So, in some weird and at the same time obvious way, net neutrality = privacy = security = copyright.  They’re all different and equally unhelpful names for the same (growing) set of governance issues.

At the heart of these problems—both of form and substance—is the inescapable fact that information is profoundly different from traditional property.  It is not like a bushel of corn or a barrel of oil.  For one thing, it never has been tangible, though when it needed to be copied onto media to be distributed it was easy enough to confuse the medium with the message.

The information revolution’s revolutionary principle is that information in digital form is at last what it was always meant to be—an intangible good, which follows a very different (for starters, a non-linear) life-cycle.  The ways in which it is created, distributed, experienced, modified and valued don’t follow the same rules that apply to tangible goods, try as we do to force-fit those rules.

Which is not to say there are no rules, or that there can be no governance of information behavior.  And certainly not to say information, because it is intangible, has no value.  Only that for the most part, we have no real understanding of what its unique physics are.  We barely have vocabulary to begin the analysis.

Now What?

Terminology aside, I predict with the confidence of Moore’s Law that businesses and consumers alike will increasingly find themselves more involved than anyone wants to be in the creation of a new body of law better-suited to the realities of digital life.  That law may take the traditional forms of statutes, regulations, and treaties, or follow even older models of standards, creeds, ethics and morals.  Much of it will continue to be engineered, coded directly into the architecture.

Private enterprises in particular can expect to be drawn deeper (kicking and screaming perhaps) into fundamental questions of Internet governance and information rights.

Infrastructure and application providers, as they take on more of the duties historically thought to be the domain of sovereigns, are already being pressured to maintain the environmental conditions for a healthy Internet.  Increasingly, they will be called upon to define and enforce principles of privacy and human rights, to secure the information environment from threats both internal (crime) and external (war), and to protect “property” rights in information on behalf of “owners.”

These problems will continue to be different and the same, and will be joined by new problems as new frontiers of digital life are opened and settled.  Ultimately, we’ll grope our way toward the real question:  what is the true nature of information and how can we best harness its power?

Cynically, it’s lifetime employment for lawyers.  Optimistically, it’s a chance to be a virtual founding father.  Which way you look at it will largely determine the quality of the work you do in the next decade or so.

The Seven Deadly Sins of Title II Reclassification (NOI Remix)

Better late than never, I’ve finally given a close read to the Notice of Inquiry issued by the FCC on June 17th.  (See my earlier comments, “FCC Votes for Reclassification, Dog Bites Man”.)  In some sense the contents held no surprises; the Commission’s legal counsel and Chairman Julius Genachowski had both published comments more than a month before the NOI that laid out the regulatory scheme the Commission now has in mind for broadband Internet access.

Chairman Genachowski’s “Third Way” comments proposed an option that he hoped would satisfy both extremes.  The FCC would abandon efforts to find new ways to meet its regulatory goals using “ancillary jurisdiction” under Title I (an avenue the D.C. Circuit had wounded, but hadn’t actually exterminated, in the Comcast decision), but at the same time would not go as far as some advocates urged and put broadband Internet completely under the telephone rules of Title II.

Instead, the Commission would propose a “lite” version of Title II, based on a few guiding principles:

  • Recognize the transmission component of broadband access service—and only this component—as a telecommunications service;
  • Apply only a handful of provisions of Title II (Sections 201, 202, 208, 222, 254, and 255) that, prior to the Comcast decision, were widely believed to be within the Commission’s purview for broadband;
  • Simultaneously renounce—that is, forbear from—application of the many sections of the Communications Act that are unnecessary and inappropriate for broadband access service; and
  • Put in place up-front forbearance and meaningful boundaries to guard against regulatory overreach.

The NOI pretends not to take a position on any of the three possible options – (1) stick with Title I and find a way to make it work, (2) reclassify broadband and apply the full suite of Title II regulations to Internet access providers, or (3) compromise on the Chairman’s Third Way, applying Title II but forbearing from all but the six sections noted above—at least, for now (see ¶ 98).  It asks for comments on all three options, however, and for a range of extensions and exceptions within each.

I’ve written elsewhere (see “Reality Check on ‘Reclassifying’ Broadband” and “Net Neutrality and the Inconvenient Constitution”) about the dubious legal foundation on which the FCC rests its authority to change the definition of “information services” to suddenly include broadband Internet, after successfully (and correctly) convincing the U.S. Supreme Court that the definition does no such thing.  That discussion will, it seems, have to wait until its next airing in federal court following inevitable litigation over whatever course the FCC takes.

This post deals with something altogether different—a number of startling tidbits that found their way into the June 17th NOI.  As if Title II weren’t dangerous enough, there are hints and echoes throughout the NOI of regulatory dreams to come.  Beyond the hubris of reclassification, here are seven surprises buried in the 116 paragraphs of the NOI—its seven deadly sins.  In many cases the Commission is merely asking questions.  But the questions hint at a much broader—indeed overwhelming—regulatory agenda that goes beyond Net Neutrality and the undoing of the Comcast decision.

Pride:  The folly of defining “facilities-based” provisioning – The FCC is struggling to find a way to apply reclassification only to the largest ISPs – Comcast, AT&T, Verizon, Time Warner, etc.  But the statutory definition of “telecommunications” doesn’t give them much help.  So the NOI invents a new distinction, referred to variously as “facilities-based” providers (¶ 1) or providers of an actual “physical connection,” (¶ 106) or limiting the application of Title II just to the “transmission component” of a provider’s consumer offering (¶ 12).

All the FCC has in mind here is “a commonsense definition of broadband Internet service” (¶ 107) (which it never provides), but in any case the devil is surely in the details.  For one thing, it’s not clear that making that distinction would actually achieve the goal of applying the open Internet rules—network management, good or evil, largely occurs well above the transmission layers in the IP stack.

The sin here, however, is that of unintentional over-inclusion.  If Title II is applied to “facilities-based” providers, it could sweep in application providers who increasingly offer connectivity as a way to promote usage of their products.

Limiting the scope of reclassification just to “facilities-based” providers who sell directly to consumers doesn’t eliminate the risk of over-inclusion.  Some application providers, for example, offer a physical connection in partnership with an ISP (think Yahoo and Covad DSL service) and many large application providers own a good deal of fiber optic cable that could be used to connect directly with consumers.  (Think of Google’s promise to build gigabit test beds for select communities.)  Municipalities are still working to provide WiFi and WiMax connections, again in cooperation with existing ISPs.  (EarthLink planned several of these before running into financial and, in some cities, political trouble.)

There are other services, including Internet backbone provisioning, that could also fall into the Title II trap (see ¶ 64).  Would companies, such as Akamai, which offer caching services, suddenly find themselves subject to some or all of Title II?  (See ¶ 58)  How about Internet peering agreements (unmentioned in the NOI)?  Would these private contracts be subject to Title II as well?  (See ¶ 107)

Lust:  The lure of privacy, terrorism, crime, copyright – Though the express purpose of the NOI is to find a way to apply Title II to broadband, the Commission just can’t help lusting after additional powers it might claim for itself.  Though the Commissioners who voted for the NOI are adamant that the goal of reclassification is not to regulate “the Internet” but merely broadband access, the siren call of other issues on the minds of consumers and lawmakers may prove impossible to resist.

Recognizing, for example, that the Federal Trade Commission has been holding hearings all year on the problems of information privacy, the FCC now asks for comments about how it can use Title II authority to get into the game (¶ 39, 52, 82, 83, 96), promising of course to “complement” whatever actions the FTC is planning to take.

Cyberattacks and other forms of terrorism are also on the Commission’s mind.  In his separate statement, for example, Chairman Genachowski argues that the Comcast decision “raises questions about the right framework for the Commission to help protect against cyber-attacks.”

The NOI includes several references to homeland security and national defense—this in the wake of publicity surrounding Sen. Lieberman’s proposed law to give the President extensive emergency powers over the Internet.  (See Declan McCullagh, “Lieberman Defends Emergency Net Authority Plan.”)  Lieberman’s bill puts the power squarely in the Department of Homeland Security—is the FCC hoping to use Title II to capture some of that power for itself?

And beyond shocking acts of terrorism, does the FCC see Title II as a license to require ISPs to help police other, lesser offenses, including copyright infringement, libel, bullying and cyberstalking, e-personation—and the rest?  Would Title II give the agency the ability to extend its content “decency” rules, limited today to broadcast television and radio, to Internet content, as Congress has unsuccessfully tried to help the Commission do on three separate occasions?

(Just as I wrote that sentence, the U.S. Court of Appeals for the Second Circuit ruled that the FCC’s recent effort to craft more aggressive indecency rules for “fleeting expletives” violates the First Amendment.  The Commission is having quite a bad year in the courts!)

Anger:  Sharing the pain of CALEA – That last paragraph is admittedly speculation.  The NOI contains no references to copyright, crime, or indecency.  But here’s a law enforcement sin that isn’t speculative.  The NOI reminds us that separate from Title II, the FCC is required by law to enforce the Communications Assistance for Law Enforcement Act (CALEA). (¶ 89) CALEA is part of the rich tapestry of federal wiretap law, and requires “telecommunications carriers” to implement technical “back doors” that make it easier for federal law enforcement agencies to execute wiretapping orders.  Since 2005, the FCC has held that all facilities-based providers are subject to CALEA.

Here, the Commission assumes that reclassification would do nothing to change the broader application of CALEA already in place, and seeks comment on “this analysis.”  (¶ 89)  The Commission wonders how that analysis impacts its forbearance decisions, but I have a different question.  Assuming the definition of “facilities-based” Internet access providers is as muddled as it appears (see above), is the Commission intentionally or unintentionally extending the coverage of CALEA to anyone selling Internet “connectivity” to consumers, even those for whom that service is simply in the interest of promoting applications?

Again, would residents of communities participating in Google’s fiber optic test bed awake to discover that all of that wonderful data they are now pumping through the fiber is subject to capture and analysis by any law enforcement officer holding a wiretapping order?  Oops?

Gluttony:  The Insatiable Appetite of State and Local Regulators – Just when you think the worst is over, there’s a nasty surprise waiting at the end of the NOI.  Under Title II, the Commission reminds us, many aspects of telephone regulation are not exclusive to the FCC but are shared with state and even local regulatory agencies. 

Fortunately, to avoid the catastrophic effects of imposing perhaps hundreds of different and conflicting regulatory schemes on broadband Internet access, the FCC has the authority to preempt state and local regulations that conflict with FCC “decisions,” and to preempt the application of those parts of Title II from which the FCC chooses to forbear.

But here’s the billion dollar question, which the NOI saves for the very last (¶ 109):  “Under each of the three approaches, what would be the limits on the states’ or localities’ authority to impose requirements on broadband Internet service and broadband Internet connectivity service?”

What indeed?  One of the provisions the FCC would not apply under the Third Way, for example, is § 253, which gives the Commission the authority to “preempt state regulations that prohibit the provision of telecommunications services.” (¶ 87)  So does the Third Way taketh away federal authority only to giveth it to state and local regulators?  Is the only way to avoid state and local regulation—oh, well, if you insist—to go to full Title II?  And might the FCC decide in any case to exercise its discretion, now or in the future, to allow local regulation of Internet connectivity?

What might those regulations look like?  One need only review the history of local telephone service to recall the rate-setting labyrinths, taxes, micromanagement of facilities investment and deployment decisions—not to mention the scourge of corruption, graft and other government crimes that have long accompanied the franchise process.  Want to upgrade your cable service?  Change your broadband provider?  Please file the appropriate forms with your state or local utility commission, and please be patient.

Fear-mongering?  Well, consider a proposal that will be voted on this summer at the annual meeting of the National Association of Regulatory Utility Commissioners.  (TC-1 at page 30)  The Commissioners will decide whether to urge the FCC to adopt what the association calls a “fourth way” to fix the Net Neutrality problem.  Their description of the fourth way speaks for itself.  It would consist of:

“bi-jurisdictional regulatory oversight for broadband Internet connectivity service and broadband Internet service which recognizes the particular expertise of States in: managing front-line consumer education, protection and services programs; ensuring public safety; ensuring network service quality and reliability; collecting and mapping broadband service infrastructure and adoption data; designing and promoting broadband service availability and adoption programs; and implementing  competitively neutral pole attachment, rights-of-way and tower siting rules and programs.”

The proposal also asks the FCC, should it stick to the Third Way approach, to add in several other provisions left out of Chairman Genachowski’s list, including one (again, § 253) that would preserve the states’ ability to help out.

Or consider a proposal currently being debated by the California Public Utilities Commission.  California, likewise, would like to use reclassification as the key that unlocks the door to “cooperative federalism,” and has its own list of provisions the FCC ought not to forbear under the Third Way proposal.

Among other things, the CPUC’s general counsel is unhappy with the definition the FCC proposes for just who and what would be covered by Title II reclassification.  The CPUC proposal argues for a revised definition that “should be flexible enough to cover unforeseen technological [sic] in both the short- and long-term.”

The CPUC also proposes that the FCC add Voice over Internet Protocol telephony to the list of services regulated under Title II, even though VoIP is often a software application riding well above the “transmission” component of broadband access.

California is just the first (tax-starved) state I looked for.  I’m sure there are and will be others who will respond hungrily to the Commission’s invitation to “comment” on the appropriate role of state and local regulators under either a full or partial Title II regime.  (¶ 109, 110)

Sloth:  The sleeping giant of basic web functions – browsers, DNS lookup, and more – The NOI admits that the FCC is a bit behind the times when it comes to technical expertise, and it would like commenters to help build a fuller record.  Specifically, ¶ 58 asks for help “to develop a current record on the technical and functional characteristics of broadband Internet service, and whether those characteristics have changed materially in the last decade.”

In particular, the NOI wants to know more about the current state of web browsers, DNS lookup services, web caching, and “other basic consumer Internet activities.”

Sounds innocent enough, but those are very loaded questions.  In the Brand X case, in which the U.S. Supreme Court agreed with the FCC that broadband Internet access over cable fit the definition of a Title I “information service” and not a Title II “telecommunications service,” browsers, DNS lookup and other “basic consumer Internet activities” were crucial to the analysis of the majority.  Because cable (and, later, it was decided, DSL) providers offered not simply a physical connection but also supporting or “enhanced” services to go with it—including DNS lookup, home pages, email support and the like—their offering to consumers was not simple common carriage.

Justice Scalia disagreed, and in dissent made the argument that cable Internet was in fact two separable offerings – the physical connection (the packet-switched network) and a set of information services that ran on top of that connection.  Consumers used some information services from the carrier, and some from other content providers (other web sites, e.g.).  Those information services were rightly left unregulated under Title I, but Congress intended the transmission component, according to Justice Scalia, to be treated as a common carrier “telecommunications service” under Title II.

The Third Way proposal in large part adopts the Scalia view of the Communications Act (see ¶ 20, 106), despite the fact that it was the FCC that argued vigorously against that view all along, and despite the fact that a majority of the Court agreed with it.

By asking these innocent questions about technical architecture, the FCC appears to be hedging its bets for a certain court challenge.   Any effort to reclassify broadband Internet access will generate long, complicated, and expensive litigation.  What, the courts will ask, has driven the FCC to make such an abrupt change in its interpretation of terms like “information service” whose statutory definitions haven’t changed since 1996?

We know the real reason is little more than the Chairman’s desire to undo the Comcast decision, of course, and thereafter to complete the process of adopting the open Internet rules proposed in October.  But in the event that proves an unavailing argument, it would be nice to be able to argue that the nature of the Internet and Internet access have fundamentally changed since 2005, when Brand X was decided.  If it’s clear that basic Internet services have become more distinct from the underlying physical connection, at least in the eyes of consumers, so much the better.

Or perhaps something bigger is lumbering lazily through the NOI.  Perhaps the FCC is considering whether “basic Internet activities” (browsing, searching, caching, etc.) have now become part of the definition of basic connectivity.  Perhaps Title II, in whole or in part, will apply not only to facilities-based providers, but to those who offer basic Internet services essential for web access.  (Why extend Title II to providers of “basic” information service?  See below, “Greed.”)  If so, the exception will swallow the rule, and just about everything else that makes the Internet ecosystem work.

Vanity:  The fading beauty of the cellular ingénue – Perhaps the most worrisome feature of the proposed open Internet rules is that they would apply with equal force to wired and wireless Internet access.  As any consumer knows, however, those two types of access couldn’t be more different. 

Infrastructure providers have made enormous progress in innovating improvements to existing infrastructure—especially the cable and copper networks.  New forms of access have also emerged, including fiber optic cable, satellite, WiFi/WiMax, and the nascent provisioning of broadband over power lines, which has particular promise in remote areas which may have no other option for access.

Broadband speeds are increasing, and there’s every expectation that, given current technology and current investment plans, the National Broadband Plan’s goal of 100 million Americans with access to 100 Mbps Internet speeds by 2020 will be reached without any public spending.

The wireless world, however, is a different place.  After years of underutilization of 3G networks by consumers who saw no compelling or “killer” apps worth using, the latest generation of portable computing devices (iPhone, Android, Blackberry, Windows) has reached the tipping point and well beyond.  Existing networks in many locations are overcommitted, and political resistance to additional cell tower and other facilities deployment is exacerbating the problem.

Just last week, a front page story in the San Francisco Chronicle reported on growing tensions between cell phone providers and residents who want new towers located anywhere but near where they live, go to school, shop, or work.  CTIA-The Wireless Association announced that it would no longer hold events in San Francisco after the Board of Supervisors, with the support of Mayor Gavin Newsom, passed a “Cell Phone Right to Know” ordinance that requires retail disclosure of a phone’s specific absorption rate of emitted radiation.

Given the likely continued lagging of cellular deployment, it seems prudent to consider less stringent restrictions on network management for wireless than for wireline: allowing providers, for example, to limit or even ban outright certain high-bandwidth data services, notably video services and peer-to-peer file sharing, that the network may simply be unable to support.  But the proposed open Internet rules will have none of that.

The NOI does note some of the significant differences between wired and wireless (¶ 102), but also reminds us that the limited spectrum available for wireless signals affords the Commission special powers to regulate the business practices of providers. (¶ 103)  Under Title III of the Communications Act, which applies to wireless, the FCC has and makes use of the power to ensure spectrum uses are serving a broad “public interest.”

In some ways, then, Title III gives the Commission powers to regulate wireless broadband access beyond what they would get from a reclassification to Title II.  So even if the FCC were to choose the first option and leave the current classification scheme alone, wireless broadband providers might still be subject to open Internet rules under Title III.  It would be ironic if the only broadband providers whose network management practices were to be scrutinized were those who needed the most flexibility.  But irony is nothing new in communications law.

One power, however, might elude the FCC, and therefore might give further weight to a scheme that would regulate wireless broadband under both Title III and Title II.  Title III does not include the power to extend Universal Service to wireless broadband (¶ 103).  This is a particular concern given the increased reliance of under-served and at-risk communities on cellular technologies for all their communications needs.  (See the recent Pew Internet & American Life Project study for details.)

While the NOI asks for comment on whether and to what extent the FCC ought to treat wireless broadband differently from wired services, or on a later timetable, the thrust of this section makes clear the Commission is thinking of more, not less, regulation for the struggling cellular industry.

Greed:  Universal Service taxes – So what about Universal Service?  In an effort to justify the Title II reclassification as something more than just a fix to the Comcast case, the FCC has (with some hedging) suggested that the D.C. Circuit’s ruling also calls into question the Commission’s ability to implement the National Broadband Plan, published only a few weeks prior to the decision in Comcast.

At a conference sponsored by the Stanford Institute for Economic Policy Research that I attended, Chairman Genachowski was emphatic that nothing in Comcast constrained the FCC’s ability to execute the plan.

But in the run-up to the NOI, the rhetoric has changed.  Here the Chairman in his separate statement says only that “the recent court decision did not opine on the initiatives and policies that we have laid out transparently in the National Broadband Plan and elsewhere.”

Still, it’s clear that whether out of genuine concern or just for more political and legal cover, the Commission is trying to make the case that Comcast casts serious doubt on the Plan, and in particular the FCC’s recommendations for reform of the Universal Service Fund (USF).  (¶¶ 32-38).

Though the NOI politely recites the legal theories posed by several analysts for how USF reform could be done without any reclassification, the FCC is skeptical.  For the first and only time in the NOI, the FCC asks not for general comments on its existing authority to reform Universal Service but for the kind of evidence that would be “needed to successfully defend against a legal challenge to implementation of the theory.”

There is, of course, a great deal at stake.  The USF is fed by taxes paid by consumers as part of their telephone bills, and is used to subsidize telephone service to those who cannot otherwise afford it.  Some part of the fund is also used for the “E-Rate” program, which subsidizes Internet access for schools and libraries.

Like other parts of the fund, E-Rate has been the subject of considerable corruption.  As I noted in Law Four of “The Laws of Disruption,” a 2005 Congressional oversight committee labeled the then $2 billion E-Rate program, which had already spawned numerous criminal convictions for fraud, a disgrace, “completely [lacking] tangible measures of either effectiveness or impact.”

Today the USF collects $8 billion annually in consumer taxes, and there’s little doubt that the money is not being spent in a particularly efficient or useful way.  (See, for example, Cecilia Kang’s Washington Post article this week, “AT&T, Verizon get most federal aid for phone service.”)  The FCC is right to call for USF reform in the National Broadband Plan, and to propose repurposing the USF to subsidize basic Internet access as well as dial tone.  The needs for universal Internet access—employment, education, health care, government services, etc.—are obvious.

But what has this to do with Title II reclassification?  There’s no mention in the NOI of plans to extend the class of services or service providers obliged to collect the USF tax, which is to say there’s nothing to suggest a new tax on Internet access.  But Recommendation 8.10 of the NBP encourages just that.  The Plan recommends that Congress “broaden the USF contributions base” by finding some method of taxing broadband Internet customers.  (Congress has so far steadfastly resisted and preempted efforts to introduce any taxes on Internet access at the federal and state level.)

If Congress agreed with the FCC, broadband Internet access would someday be subject to taxes to help fund a reformed USF.  The bigger the category of providers included under Title II (the most likely collectors of such a tax), the bigger the USF.  The temptation to broaden the definition of affected companies from “facilities based” to something, as the California Public Utilities Commission put it, more “flexible,” would be tantalizing.

***

But other than these minor quibbles, the NOI offers nothing to worry about!

Bilski: Justice Stevens’ Last Tilt at the IP Windmills

I dashed off a quick analysis of the Bilski decision for CNET yesterday (see “Supreme Court Hedges on Business Method Patents”), a follow-up to a piece I wrote for The Big Money when the case was argued last fall.  (See “Not with my Digital Economy, You Don’t.”)

The decision was a surprise for me.  I had fully expected the Court to reject outright the experiment in granting patents to paper-and-pencil business methods launched by the Federal Circuit in 1998 with the State Street decision.  Especially since the Federal Circuit itself, in its rejection of Bilski’s application, had all but dismissed State Street as the disaster most businesses—even businesses that have benefited from business method patents—know it to be.

Indeed, as an experiment (in hubris, perhaps), I actually drafted my article over the weekend, even making up quotes I thought might appear in the majority opinion, which I presumed would be written by retiring Justice John Paul Stevens.

Here’s the lede from the piece, which I headlined “Supreme Court Ends Era of Business Method Patents”:

“In a dramatic change in U.S. law, the U.S. Supreme Court today rejected the patenting of business methods, casting doubt on the viability of [XX,XXX] such patents granted by the U.S. Patent Office since 1998.  The sprawling opinion by a divided Court also cast doubts on the long-term viability of patents for most software products.  (The Court’s XXX hundred page opinions are available here [link].)”

Needless to say, I got it wrong, and when the actual decision was released yesterday morning at 11 AM Eastern time, I had to start over.

In the end, the majority opinion was a mere 16 pages.  It basically did nothing to change patent law or to settle enormous and mushrooming uncertainties, both for business methods and, more generally, for software applications.

Justice Kennedy’s opinion explicitly refused to endorse or reject State Street, nor did it foreclose future efforts by the Federal Circuit to find some way to rein in the madness of patents for reserving office bathrooms, exercising cats and, my favorite, for the process of obtaining a patent—madness for which the Federal Circuit itself is fully to blame.

Justice Stevens, joined by Justices Breyer, Sotomayor and Ginsburg, would have gone much farther, as evidenced by his much-longer concurring opinion, which had all indications of having started life as the majority opinion.  Stevens has made no secret of his disdain for the judicial expansion of patent protection over the years.  Had his opinion been the majority I would have had to make very few changes to the earlier version of my article.

Stevens Loses His Majority

So what happened?

I think it’s pretty clear reading all the opinions together that Stevens lost his majority when he and Justice Kennedy couldn’t agree on the breadth of Stevens’ rejection of recent judicial expansions of patentability.  At that point the other Justices who wanted to deny Bilski his patent but didn’t want to go as far as Stevens had a majority.  As the swing vote, Kennedy was asked to write the new majority opinion, such as it is.

With the loss of Kennedy, Stevens lost his last chance to have a big impact on the Court’s intellectual property jurisprudence.  As Timothy B. Lee lovingly details in an Ars Technica article updated yesterday, Stevens had a long history of writing important decisions that protected nascent technology industries from the excesses of patent and copyright maximalists.

Perhaps most important among those cases was Betamax, in which Stevens stretched the doctrine of fair use to hold that Sony was not responsible for widespread unauthorized time-shifting of television programming by users of the VCR devices it sold.  The Betamax fight was a highlight of a battle between content owners and technology providers that is perhaps 100 years old or more.  The VCR, much as every innovation since in digital encoding has done, sent Hollywood into apoplexy.  Echoing ongoing hysteria by content owners over the continued advance of Moore’s Law, the MPAA’s Jack Valenti famously said in 1982 that “the VCR is to the American film producer and the American public as the Boston strangler is to the woman home alone.”

As with the Viacom v. YouTube case I wrote about the other day, one way of looking at the result in Betamax is that it highlights the institutional limits of the judicial branch, particularly in crafting, supervising, and enforcing remedies.  The studios in Betamax, and Viacom today, implicitly or explicitly want the offending technology banned.  But with millions of Betamaxes already in American homes by the time the case reached the Supreme Court, how exactly would such a remedy have been operationalized?  How could YouTube, likewise, comply with an a priori rule of no infringing content without simply shutting down?

Where Kennedy Feared to Tread

Of course I don’t have any inside knowledge that Kennedy bolted from Stevens’ Bilski opinion, but I’m pretty sure the truth is something close to that.  It would also explain why it took so long to issue such a short opinion—for Bilski was argued in the fall, and the decision was released only on the very last day of the term.  If Stevens initially had a majority that fell apart, of course, that would have left Kennedy starting later in the game than if he had known all along he was writing for the majority.

There are also some interesting clues in those portions of Justice Kennedy’s decision that Justice Scalia refused to join (those parts only got four votes, so they don’t stand as binding precedent).  It’s been clear since 2006 that Kennedy was one of the Justices skeptical of business method patents.  In his concurrence in eBay Inc. v. MercExchange, L. L. C., 547 U. S. 388, 397 (2006), a case dealing with patent injunctions, Kennedy noted that many patents on business methods are of “suspect validity,” a concern he repeats in Bilski.

But, it turns out, Kennedy’s disdain for business methods doesn’t necessarily apply to the closely-related problem of patents for software.  Had the Supreme Court endorsed the Federal Circuit’s proposed “machine-or-transformation” test, not only would business method patents be out but so too would most if not all patents for software.  Kennedy at least was not willing to go that far.

Let’s back up a bit.  The “machine-or-transformation” test, the basis on which the Federal Circuit rejected Bilski’s application, derives from earlier Supreme Court patent cases (some of them quite old) that attempted to deal with the growing convergence of inventions based on information technology with those of the more traditional variety.  It states that for a process patent to be considered in the first place, it must as a threshold matter describe a process that is either “tied to a particular machine or apparatus,” or one that “transforms a particular article into a different state or thing.”

The “machine” part of the test comes from an early software case, in which the applicant attempted to patent the basic algorithm for converting numbers in binary-coded decimal form into pure binary.  The Supreme Court rejected that claim on the basis that algorithms or “mental steps” were too abstract to be patented, a sensible limit given the potential sweep such patents could have in emerging fields.
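
(For the technically curious, that case was Gottschalk v. Benson, and the claimed conversion looks roughly like this minimal Python sketch; the function name and structure are mine, not the patent claim’s:)

    def bcd_to_binary(bcd_digits):
        # Each entry is one decimal digit stored as a four-bit group;
        # the number 42, for example, arrives as [0b0100, 0b0010].
        value = 0
        for digit in bcd_digits:
            value = value * 10 + digit   # shift one decimal place, add the digit
        return value                     # a Python int is already pure binary

    assert bcd_to_binary([0b0100, 0b0010]) == 0b101010   # 42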

The “transformation” part of the test comes from a later case, in which a famous algorithm was translated into software that opened molds when environmental conditions (temperature, pressure) indicated the material inside had properly cured.  Here the patent was allowed, on the basis that the process described effected a transformation not of numbers on a piece of paper but of some actual, constrained physical article.  It was not the algorithm itself that was patented, in other words, but a very specific implementation.
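
(That later case was Diamond v. Diehr.  Here is a minimal sketch of the control loop it described, assuming hypothetical read_mold_temp and open_mold functions and invented constants; the actual patent recited the Arrhenius equation with empirically derived values:)

    import math, time

    C, E = 0.05, 5000.0   # invented constants; a hotter mold means a shorter cure

    def cure_time_seconds(mold_temp_kelvin):
        # Arrhenius-style estimate of the required cure time
        return C * math.exp(E / mold_temp_kelvin)

    def run_press(read_mold_temp, open_mold):
        # Continuously re-measure the mold temperature, recompute the
        # required cure time, and open the mold once the elapsed time
        # exceeds it; this is the transformation of raw rubber the
        # Court found patentable.
        start = time.monotonic()
        while time.monotonic() - start < cure_time_seconds(read_mold_temp()):
            time.sleep(1.0)
        open_mold()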

If “machine-or-transformation” were applied as a threshold test for process patents, it’s clear that business methods would be out.  For by definition they are not tied to a particular machine, nor does the execution of their steps transform a particular article into a different state or thing.  In most cases, the method can be applied mentally or with paper and pencil.  When software is used, it is generally to automate the steps and to allow the method to be executed repeatedly and quickly.

Well, What About Software?

So how would software patents have fared had the Court adopted “machine-or-transformation”?  As I wrote in Law 8 of The Laws of Disruption, most software patents would likely fail the test.  First, most software patents are written for general purpose computers, and so would fail the particular machine test (probably—the meaning of “particular machine” has never really been explored since the 1972 case involving binary translation).

And what about “transformation”?  All software, when executing, transforms a particular article (memory circuits) into a different state (on/off), but it can’t be that every piece of software is therefore eligible for a process patent.  (As “written expression,” all but the simplest programs receive automatic protection under copyright for something close to 100 years.)  Following the mold case, perhaps the Court would say that only software whose execution transforms something other than the computer’s internal circuitry itself would qualify.  But that would limit the class of software eligible for patent protection to almost nothing.
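
To see why every executing program “transforms” memory, consider the most trivial example imaginable:

    flag = False   # some memory circuits now hold the bit pattern for False
    flag = True    # the same circuits are flipped to the pattern for True

If that physical transformation counted, every program ever written would clear the threshold, which is why it cannot be the right reading of the test.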

So Kennedy is probably right to say that the “machine-or-transformation” test, if adopted as a threshold requirement for process patents, would “create uncertainty as to the patentability of software, advanced diagnostic medicine techniques, and inventions based on linear programming, data compression, and the manipulation of digital signals.”

To which many, including me, would say, “Good!”  Given the automatic application of copyright to most software applications, one might ask why software needs patent protection at all.  The long answer is quite long.  The short answer is that it probably doesn’t.

Put another way, the granting of a 20-year monopoly for software puts a drag on the speed with which information technology can be developed and deployed, one that probably isn’t balanced with the additional innovation the availability of that monopoly encourages.  And keeping that balance is the sole rationalization for creating the patent system in the first place.

But Justice Kennedy did not want to go that far, and, it seems, for much the same reason that Justice Stevens did:  to protect emerging information technology industries.  “[T]imes change,” writes Kennedy.  “Technology and other innovation progress in unexpected ways.”  It may be, according to Kennedy, that a simple rule like “machine-or-transformation” would strike the balance between protection and the public domain too far on the side of the latter.  Or maybe not.  “Nothing in this opinion,” he says, “should be read to take a position on where that balance ought to be struck.”

So, says Kennedy, again in a plurality section of his opinion, the Federal Circuit should search for a better way to control the patent tsunami created by State Street.  How?  Here’s a hint, though just barely, as to how such a test ought to be crafted to satisfy Justice Kennedy, as well as Chief Justice Roberts and Justices Thomas and Alito, who joined this paragraph of the opinion:

“[I]f the Court of Appeals were to succeed in defining a narrower category or class of patent applications that claim to instruct how business should be conducted, and then rule that the category is unpatentable because, for instance, it represents an attempt to patent abstract ideas, this conclusion might well be in accord with controlling precedent.”

Translation:  I (we?) am not opposed to threshold tests that exclude business method patents.  I just didn’t like the particular test the Federal Circuit came up with, because it probably leaves out software as well.

Is He Right?

On balance, I’m surprised to find myself agreeing with Justice Kennedy.  The “machine-or-transformation” test had the salutary effect of eliminating business method patents, which, I suspect, most of the Justices (certainly a majority) do not believe deserve patent protection.  It also had the effect, perhaps, of eliminating most software patents.

But Kennedy is right to say that “machine-or-transformation” would at a minimum cast great doubt on the viability of software patents.  For that test, despite being derived from computer-related cases, doesn’t at all take into account the very nature of software.  General purpose computers have revolutionized every aspect of business and life precisely because they are general purpose machines (or, to use the technical term, “virtual machines”).  Through software, computing devices of all shapes and sizes can be transformed into millions of other, specific, machines, often simultaneously.

It’s probably better to say that some software applications do rise to the level of innovation necessary to sustain a patent.  The “machine-or-transformation” test, however, would have given courts little guidance as to how to separate the truly novel and nonobvious (other necessary conditions of patentability) from the mundane.  All software either passes or fails.

For Justice Kennedy, the possibility of over-exclusion was too high.  For Justice Stevens, the possibility of over-inclusion was more dangerous.

Both are eager to create the right environment for continued innovation in information technology.  In the end, they just couldn’t agree on where the risk was greatest.

What would be better?  As Kennedy suggests, a different test that wouldn’t affect software patents would likely survive a future challenge.  That would get rid of business method patents, certainly a good first step.

Then, the courts—or better, Congress—could take a separate and clear-headed look at the software patent problem.

To me, the best solution would be to undo the extension to software of both copyright (by Congress) and patent (by the courts), and to create instead a form of protection that is more limited, constrained, and constructed around the unique and indeed miraculous properties of the virtual machine.  A specific form of protection for “Information Age” inventions.

No sense in describing that protection in any great detail now.  The chances of that solution being implemented, needless to say, are too slim to be visible.

Postscript:  What About Scalia?

One loose end in Bilski is the curious role played by Justice Scalia in the opinions.  As noted, Scalia joined all of Justice Kennedy’s opinion other than the two sections expressing concern about the impact “machine-or-transformation” would have on software, or what Kennedy refers to repeatedly as inventions of “The Information Age.”

There’s no way to know why Scalia declined to join those sections (and, therefore, robbed them of precedential status), but one clue can be found in a second concurrence, this one by Justice Breyer, which Scalia joined in part.

Breyer begins by acknowledging his view that business methods are flat-out unpatentable.  No surprise there—Breyer signed on to Stevens’ opinion, and has previously expressed grave doubts about business method patents in cases where the issue was raised but not decided.

Scalia joins Part II of Breyer’s opinion, which tries to summarize the points on which all nine Justices are, at the end of the day, in agreement.  (All nine, of course, voted to affirm the Federal Circuit’s rejection of Bilski’s application.  The only question had to do with the reasoning for that rejection.)

Breyer returns to the cases from which the Federal Circuit derived the “machine-or-transformation” test, and notes that “transformation is the clue to the patentability of a process claim that does not include particular machines.”  (emphasis in original)

The error of the Federal Circuit, then, was to treat “machine-or-transformation” not as a test, but as “the exclusive test.”  (emphasis in original)  And “machine-or-transformation” is still a far better test, Breyer (with Scalia) goes on, than the much broader statement from State Street (“useful, concrete and tangible result”) that started this whole mess.

Here’s the kicker.  Breyer and Scalia agree that “[t]o the extent that the Federal Circuit’s decision in this case rejected [the State Street] approach, nothing in today’s decision should be taken as disapproving of that determination.”

So, there you have it.  Scalia doesn’t like State Street and doesn’t hate “machine-or-transformation.”  But for some reason apparently not having to do with the impact of that test on the patentability of software, Scalia objected, like Kennedy, to Stevens’ willingness to adopt it as the threshold requirement for process patents.

Does that leave Scalia wanting more protection for software, or less?

Stay tuned!

Viacom v. YouTube: The Principle of Least Cost Avoidance

I’m late to the party, but I wanted to say a few things about the District Court’s decision this week in the Viacom v. YouTube case.  This will be a four-part post, covering:

1.  The holding

2.  The economic principle behind it

3.  The next steps in the case

4.  A review of the errors in legal analysis and procedure committed by reporters covering the case

I’ve written before (see “Two Smoking Guns and a Cold Case”, “Google v. Everyone” and “The Revolution will be Televised…on YouTube”) about this case, in which Viacom back in 2007 sued YouTube and Google (which owns YouTube) for $1 billion in damages, claiming massive copyright infringement of Viacom content posted by YouTube users.

There’s no question of the infringing activity or its scale.  The only question in the case is whether YouTube, as the provider of a platform for uploading and hosting video content, shares any of the liability of those among its users who uploaded Viacom content (including clips from Comedy Central and other television programming) without permission.

The more interesting questions raised by the ascent of new video sites aren’t addressed in the opinion.  Whether the users understood copyright law, and whether their intent in uploading their favorite clips from Viacom programming was to promote Viacom rather than to harm it, were not considered.  Indeed, whether on balance Viacom was helped more than harmed by the illegal activity, and how either effect should be calculated under current copyright law, is not relevant to this decision, and is saved for another day and perhaps another case.

That’s because Google moved for summary judgment on the basis of the Digital Millennium Copyright Act’s “safe harbor” provisions, which immunize service providers from any kind of attributed or “secondary” liability for user behavior when certain conditions are met.  Most important, a service provider can dock safe from liability only if it can show that it:

– did not have “actual knowledge that the material…is infringing,” or is “not aware of facts or circumstances from which infringing activity is apparent” and

– upon obtaining such knowledge or awareness “acts expeditiously to remove…the material” and

– does not “receive a financial benefit directly attributable to the infringing activity,” in a case in which the service provider has “the right and ability to control such activity,” and

– upon notification of the claimed infringement, “responds expeditiously to remove…the material that is claimed to be infringing….”

Note that all four of these elements must be satisfied to benefit from the safe harbor.
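
(Expressed as logic, the safe harbor is a strict conjunction.  Here is a hypothetical Python paraphrase of the conditions quoted above; the statute and case law control, not this checklist:)

    def qualifies_for_safe_harbor(p):
        # p is a hypothetical record of facts about the service provider
        return (not p.actual_knowledge_of_infringement
                and not p.aware_of_red_flag_facts
                and p.removed_expeditiously_on_knowledge
                and not (p.direct_financial_benefit
                         and p.right_and_ability_to_control)
                and p.removed_expeditiously_on_notice)   # every prong must hold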

The question for Judge Stanton to decide on YouTube’s motion for summary judgment was whether YouTube met all the conditions, and he has ruled that it did.

1.  The Slam-Dunk for Google

The decision largely comes down to an interpretation of what phrases like “the material” and “such activity” mean in the above-quoted sections of the DMCA.

Indeed, the entire opinion can be boiled down to one sentence on page 15.  After reviewing the legislative history of the DMCA at length, Judge Stanton concludes that the “tenor” of the safe harbor provisions leads him to interpret infringing “material” and “activity” to mean “specific and identifiable infringements of particular individual items.”

General knowledge, which YouTube certainly had, that some of its users were (and still are) uploading content protected by copyright law without permission, is not enough to defeat the safe harbor and move the case to a determination of whether or not secondary liability can be shown.  “Mere knowledge of prevalence of such activity in general,” Judge Stanton writes, “is not enough.”

To defeat a safe harbor defense at the summary judgment stage, in other words, a content owner must show that the service provider knew or should have known about specific instances of infringement.  Such knowledge could come from a service provider hosting subsites with names like “Pirated Content” or other “red flags.”  But in most cases, as here, the service provider would not be held to know about specific instances of infringement until informed of them, most often from takedown notices sent by copyright holders themselves.

Whether ad revenue constitutes “direct financial benefit” was not tested, because, again, that provision only applies to “activity” the service provider has the right to control.  “Activity,” as Judge Stanton reads it, also refers to specific instances of illegal content distribution.

As Judge Stanton notes, YouTube users currently post 24 hours of video content every minute, making it difficult if not impossible, as a practical matter, for YouTube to know which clips are not authorized by rights holders.  And when Viacom informed the site of some 100,000 potentially infringing clips, YouTube removed nearly all of them within a day.  That is how the DMCA was intended to work, according to Judge Stanton, and indeed demonstrates that it is working just fine.

Viacom, of course, is free to pursue the individuals who posted its content without permission, but everyone should know by now that for many reasons that’s a losing strategy.

2.  The Least-Cost Avoider Principle

On balance, Judge Stanton is reading what is clearly an ambiguous statute with a great deal of common sense.  To what extent the drafters of the DMCA intended the safe harbor to apply to general vs. specific knowledge is certainly not clear from the plain language, nor, really, from the legislative history.  (Some members of the U.S. Supreme Court believe strongly that legislative history, in any case, is irrelevant in interpreting a statute, even if ambiguous.)

To bolster his interpretation that the safe harbor protects all but specific knowledge of infringement, interestingly, Judge Stanton points out that this case is similar to one decided a few months ago in the Second Circuit.  In that case, the court refused to apply vicarious liability for trademark infringement to eBay for customer listings of fake Tiffany’s products.

Though trademark and copyright law are quite different, the analogy is sensible.  In both cases, the question comes down to one of economic efficiency.  Which party, that is, is in the best position to police the rights being violated?

Here’s how the economic analysis might go.  Given the existence of new online marketplaces and video sharing services, and given the likelihood and ease with which individuals can use those services to violate information rights (intentionally or otherwise, for profit or not), the question for legislators and courts is how to minimize the damage to the information rights of some while still preserving the new value to information in general that such services create.

For there is also no doubt that the vast majority of eBay listings and YouTube clips are posted without infringing the rights of any third party, and that the value of such services, though perhaps not easily quantifiable, is immense.  EBay has created liquidity in markets that were too small and too disjointed to work efficiently offline.  YouTube has enabled a new generation of users with increasingly low-cost video production tools to distribute their creations, get valuable feedback and, increasingly, make money.

That these sites (and others, including Craigslist) are often Trojan Horses for illegal activities could lead legislators to ban them outright, but that clearly gets the cost-benefit equation wrong.  A ban would generate too much protection.

At the same time, throwing up one’s hands and saying that a certain class of rights-holders must accept all the costs of damage without any means of reducing or eliminating those costs, would be overly generous in the other direction.  Neither users, service providers, nor rights holders would have any incentives to police user behavior.  The basic goals of copyright and trademark might be seriously damaged as a result.

The goal of good legislation in situations like this, where overall benefit outweighs individual harm and where technology is changing the equation rapidly, is to produce rules that are most likely to get the balance right and do so with the least amount of expensive litigation.  The DMCA provisions described above are one attempt at creating such rules.

But those rules, given the uncertainties of emerging technologies and the changing behaviors of users, can't possibly give judges the tools to decide every case with precision.  Such rules must be at least a little ambiguous (if not a lot).  Judges, as they have done for centuries, must apply other, objective interpretive tools to help decide individual cases even as the targets keep moving.

Judge Stanton’s interpretation of the safe harbor provisions follows, albeit implicitly, one of those neutral tools, the same one applied by the Second Circuit in the eBay case.  And that is the principle of the least-cost avoider.

This principle encourages judges to interpret the law, where possible, such that the burden of reducing harmful behavior falls to the party in the best position, economically, to avoid it.  That way, as parties in similar situations in the future evaluate the risk of liability, they will be more likely to choose a priori behaviors that not only reduce the risk of damages but also the cost of more litigation.

In the future, if Judge Stanton’s ruling stands, rights holders will be encouraged to police video sites more carefully.  Service providers such as YouTube will be encouraged to respond quickly to legitimate demands to remove infringing content.

Given the fact that activities harmful to rights holders are certain to occur, in other words, the least-cost avoider principle says that a judge should rule in a way that puts the burden of minimizing the damage on the party who can most efficiently avoid it.  In this case, the choice would be between YouTube (preview all content before posting and ensure legal rights have been cleared), Viacom (monitor sites carefully and quickly demand takedown of infringing content) or the users themselves (don't post unauthorized content without expecting to pay damages or possible criminal sanctions).

Here, the right answer economically is Viacom, the rights holder who is directly harmed by the infringing behavior.
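Reduced to a toy calculation, the rule simply assigns the duty to whichever party faces the smallest avoidance cost.  The dollar figures below are invented for illustration; nothing like them appears in the case:

    # Hypothetical annual avoidance costs, in dollars; illustration only.
    avoidance_costs = {
        "YouTube: pre-screen and clear rights for every upload": 500_000_000,
        "Viacom: monitor the site and send takedown notices": 5_000_000,
        "Users: obtain licenses before posting (aggregated)": 50_000_000,
    }

    # The rule picks the party that can prevent the harm most cheaply.
    least_cost_avoider = min(avoidance_costs, key=avoidance_costs.get)
    print(least_cost_avoider)  # -> the Viacom option, on these assumed numbers

On any plausible numbers, pre-screening 24 hours of new video per minute dwarfs the cost of a rights holder watching for its own content, which tracks the intuition behind the ruling.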

That may seem unfair from a moral standpoint.  For, after all, Viacom is the direct victim of the users' clearly unlawful behavior and the failure of YouTube, the enabler of the users, to stop it.  Why should the victim be held responsible for making sure it is not caused further damage in the future?

But there's a certain economic logic to that decision, though one difficult to quantify.  (Judge Stanton made no effort to do so; indeed, he did not invoke the least-cost avoider principle explicitly.)  The grant of a copyright or a trademark is the grant of a monopoly on a certain class of information, a grant that itself comes with inherent economic inefficiencies in the service of an overall social goal: encouraging investment in creative works.

Part of the cost of having such a valuable monopoly is the cost of policing it, even in new media and new services that the rights holder may not have any particular interest in using itself.

By interpreting the DMCA as protecting service providers from mere general knowledge of infringing behavior, Judge Stanton has signaled that Viacom can police YouTube more efficiently than YouTube can.  Why?  For one thing, Viacom has the stronger incentive to ensure unauthorized content stays off the site.  It alone also has the knowledge both of what content it has rights to and when that content appears without authorization.  (Several examples arose in the course of discovery of content that Viacom ordered YouTube to remove but that, it turned out, had been posted by Viacom or its agents masquerading as users in order to build buzz.)

The cost of monitoring and stopping unauthorized posting is not negligible, of course.  But YouTube, eBay and other service providers increasingly provide tools to make the process easier, faster, and cheaper for rights holders.  They may or may not be obligated to do so as a matter of law; for now, their decision to do so represents an organic and efficient form of extra-legal rulemaking that Judge Stanton is eager to encourage.

No matter what, someone has to bear the bulk of the cost of monitoring and reporting violations.  Viacom can do it cheaper, and can more easily build that cost into the price it charges for authorized copies of its content.

And where it cannot easily issue takedown orders to large, highly-visible service providers like YouTube, it retains the option, admittedly very expensive, to sue the individuals who actually infringed.  It can also try to invoke the criminal aspect of copyright law, and get the FBI (that is, the taxpayer) to absorb the cost.

To rule the other way–to deny YouTube its safe harbor–would encourage service providers to overspend on deterrence of infringing behavior.  In response, perhaps YouTube and other sites would require, before posting videos, that users provide legally-binding and notarized documentation that the user either owns the video or has a license to post it.  Obtaining such agreements, not to mention evaluating them for accuracy, would effectively mean the end of video sites.  Denying the safe harbor based on general knowledge, to put it another way, would effectively interpret the DMCA as a ban on video sites.

That would be cheaper for Viacom, of course, but would lead to overall social loss.  Right and wrong, innocence and guilt, are largely excluded from this kind of analysis, though certainly not from the rhetoric of the parties.  And remember that actual knowledge or general awareness of specific acts of infringement would, according to Judge Stanton’s rule, defeat the safe harbor.  In that case, to return to the economic terminology, the cost of damages—or, if you prefer, assigning some of the blame—would shift back on YouTube.

3.  What’s Next?

Did Judge Stanton get it right as a matter of information economics?  It appears that the answer is yes.  But did he get it right as a matter of law—in this case, of the DMCA?

That remains to be seen.

Whether one likes the results or not, as I’ve written before, summary judgment rulings by district courts are never the last word in complex litigation between large, well-funded parties.  That is especially so here, where the lower court’s interpretation of a federal law is largely untested in the circuit and indeed, as here, in any circuit.

Judge Stanton cites as authority for his view of the DMCA a number of other lower court cases, many of them in the Ninth Circuit.  But as a matter of federal appellate law, Ninth Circuit cases are not binding precedent on the Second Circuit, where Judge Stanton sits.  And other district (that is, lower) court opinions cannot be cited by the parties as precedent even within a circuit.  They are merely advisory.  (A Ninth Circuit case involving Veoh is currently on appeal; the service provider won on a “safe harbor” argument similar to Google’s in the lower court.)

So this case will certainly head for appeal to the Second Circuit, and perhaps from there to the U.S. Supreme Court.  But a Supreme Court review of the case is far from certain.  Appeals to the circuit court are the right of the losing party.  A petition to the Supreme Court, on the other hand, is accepted at the Court’s discretion, and the Court turns down the vast majority of cases that it is asked to hear, often without regard to the economic importance or newsworthiness of the case.  (The Court refused to hear an appeal in the Microsoft antitrust case, for example, because the lower courts largely applied existing antitrust precedents.)

A circuit court reviewing summary judgment will make a fresh inquiry into the law, accepting the facts alleged by Viacom (the losing party below) as if they were all proven.  If the Second Circuit follows Judge Stanton’s analogy to the eBay case, Google is likely to prevail.

If the appellate court rejects Judge Stanton’s view of specificity, the case will return to the lower court and move on, perhaps to more summary judgment attempts by both parties and, failing that, a trial.  More likely, at that point, the parties will reach a settlement, or an overall licensing agreement, which may have been the point of bringing this litigation in the first place.  (A win for Viacom, as in most patent cases, would have given the company better negotiating leverage.)

4.  Getting it Right or Wrong in the Press

That brief review of federal appellate practice is entirely standard—it has nothing to do with the facts of this case, the parties, the importance of the decision, or the federal law in question.

Which makes it all the more surprising when journalists who regularly cover the legal news of particular companies continually get it wrong when describing what has happened and/or what happens next.

Last and perhaps least, here are a few examples from some of the best-read sources:

The New York Times – Miguel Helft, who covers Google on a regular basis, commits some legal hyperbole in saying that Judge Stanton "threw out" Viacom's case, and that "the ruling" (that is, this opinion) could have "major implications for … scores of Internet sites."  The appellate court decision will be the important one, but technically it will apply only to cases brought in the Second Circuit.  The lower court's decision, even if upheld, will have no implications for future litigation.  Helft also quotes statements from counsel at both Viacom and Google that are filled with legal errors, though perhaps understandably so.

The Wall Street Journal – Sam Schechner and Jessica E. Vascellaro make no mistakes in their report of the decision.  They correctly explain what summary judgment means, and summarize the ruling without distorting it.  Full marks.

The Washington Post – Cecilia Kang, who covers technology policy for the Post, incorrectly characterizes Judge Stanton’s ruling as a “dismissal” of Viacom’s lawsuit.  A dismissal, as opposed to the granting of a motion for summary judgment, generally happens earlier in litigation, and signals a much weaker case, often one for which the court finds it has no jurisdiction or which, even if all the alleged facts are true, doesn’t amount to behavior for which a legal remedy exists.  Kang repeats the companies’ statements, but also adds a helpful quote from Public Knowledge’s Sherwin Siy about the balance of avoiding harms.

The National Journal – At the website of this legal news publication, Juliana Gruenwald commits no fouls in this short piece, with an even better quote from PK’s Siy.

CNET News.com – Tech news site CNET’s media reporter Greg Sandoval suggests that “While the case could continue to drag on in the appeals process, the summary judgment handed down in the Southern District of New York is a major victory for Google . . . .”  This is odd wording, as the case will certainly “drag on” to an appeal to the Second Circuit.  (A decision by the Second Circuit is perhaps a year or more away.)  Again, a district court decision, no matter how strongly worded, does not constitute a “major victory” for the prevailing party.

Sandoval (who, it must be said, posted his story quite quickly) also exaggerates the sweep of Google's argument and the judge's holding.  He writes, "Google held that the DMCA's safe harbor provision protected it and other Internet service providers from being held responsible for copyright infringements committed by users.  The judge agreed."  But Google argued only that it (not other providers) was protected, and protected only from user infringements it didn't know about specifically.  That is the argument with which Judge Stanton agreed.

Perhaps these are minor infractions.  You be the judge.

Updates to the "Media" Page

I’ve added almost twenty new posts to the Media Page from April and May. These were busy months for those interested in the dangerous intersection of technology and policy, the theme of The Laws of Disruption.

A major court decision upended the Federal Communications Commission's efforts to pass new net neutrality regulations, leading the Commission to begin execution of its "nuclear option": the reclassification of Internet access under ancient rules written for the old telephone monopoly.  While I support the principles of net neutrality, I am increasingly concerned about efforts by the FCC to appoint itself the "smart cop" on the Internet beat, as Chairman Julius Genachowski put it last fall.

As consumer computing outstripped business computing for the first time, privacy has emerged as a leading concern of both users and mainstream media sources.  Not surprisingly, legal developments in information security go hand-in-hand with conversations about privacy policy and regulation, and I have been speaking and commenting to the press extensively on these topics.

The new entries run the full range of topics: copyright, identity theft, e-commerce, new criminal laws for social networking behaviors, privacy, security, and communications policy.

In the last few months, I have continued writing not only for this blog but for the Technology Liberation Front, the Stanford Law School Center for Internet & Society, and for CNET.  I’ve also written op-eds for The Orange County Register, The Des Moines Register, and Info Tech & Telecom News.

I’ve appeared on CNN, Fox News, and National Public Radio, and have been interviewed by print media sources as varied as El Pais, The Christian Science Monitor, TechCrunch and Techdirt.

My work has also been quoted by a variety of business and mainstream publications, including The Atlantic, Reason, Fortune and Fast Company.

As they say, may you live in interesting times!

Google v. Everyone

I had a long interview this morning with the Christian Science Monitor.  Like many of the interviews I've had this year, the subject was Google.  At the increasingly congested intersection of technology and the law, Google seems to be involved in most of the accidents.

Just to name a few of the more recent pileups, consider the Google Books deal, net neutrality and the National Broadband Plan, Viacom's lawsuit against YouTube for copyright infringement, Google's very public battle with the nation of China, today's ruling from the European Court of Justice regarding trademarks, AdWords, and counterfeit goods, the convictions of Google executives in Italy over a user-posted video, and the reaction of privacy advocates to the less-than-immaculate conception of Buzz.

In some ways, it should come as no surprise to Google's legal counsel that the company is involved in increasingly serious matters of regulation and litigation.  After all, Google's corporate goal is the collection, analysis, and distribution of as much of the world's information as possible, or, as the company puts it, "to organize the world's information and make it universally accessible and useful."  That's a goal it has been wildly successful at in its brief history, whether you measure success by use (91 million searches a day) or market capitalization ($174 billion).

As the world’s economy moves from one based on physical goods to one driven by information flow, the mismatch between industrial law and information behavior has become acute, and Google finds itself a frequent proxy in the conflicts.

As I argue in "The Laws of Disruption," the unusual economic properties of information make it a poor fit for a body of law that's based on industrial-era assumptions about physical property.  That's not to say there couldn't be an effective law of information, only that the law of physical property isn't it.  Particularly not when industrial law assumes that the subject of any conflict or effort to control (the res, as they say in legal lingo) is visible, tangible, and unlikely to cross too many local, state, or national borders—and certainly not every border at the same time, all the time.

To see the mismatch in action, consider two of Google’s on-going conflicts, both in the news this week:  Google v. China and Google v. Viacom.

Google v. China

In 2006, Google made a Faustian bargain with the Chinese government.  In exchange for permission to operate inside the country, Google agreed to substantially self-censor search results for topics (politics, pornography, religion) that the Chinese government considered dangerous.  The company had strong financial motivations for gaining a foothold in the astronomically fast-expanding Chinese Internet market, of course, but also had a genuine belief that giving Chinese users access to the vast majority of its indexed information had the potential to encourage fewer restrictions over time.

Apparently the result was the opposite, with the government tightening, rather than loosening the reins.  Google’s discomfort was compounded by the revelation in January that widespread hacking and phishing scams had penetrated the Gmail accounts of several Chinese dissidents, leading the company to announce it would soon end its censorship of Chinese searches.  (It also added encryption technology to Gmail and, it is widely believed, began working closely with the National Security Agency to help identify the sources of the attacks.)  Though Google has not claimed the attacks were the work of the Chinese government or entities under its control, the connection was hard to miss.  Google is hacked, Google decides to end cooperation with the government.

This week, the company made good on its promise by closing its search site in China and rerouting searches from there to its site in Hong Kong.  As a result of the long occupation of Hong Kong by western governments, which ended in 1997 when the U.K.’s “lease” expired, Hong Kong maintains special legal status within China.  Searches originating in Hong Kong are not censored, and Hong Kong appears to be largely outside China’s “great firewall” which blocks undesirable information including YouTube and Twitter.

For residents of the mainland, however, the move is a non-event.  China quickly applied the filters that Google had applied on behalf of the government for searches originating inside the country.  So Google searches in China are still censored—only now Google isn’t doing the censoring.  The damage to the company’s relationship with the Chinese government, meanwhile, has been severe, as has collateral damage to the relationship between China and the U.S. government.  The story is by no means over.

Google v. Viacom

Also in the last week, a number of key documents were released by the court that is hearing Viacom’s long-running copyright infringement case against Google’s YouTube.  The case, which began around the same time that Google made its deal with China, seeks $1 billion in damages from copyright violations against Viacom content perpetrated by YouTube users, who posted everything from short clips to music videos to entire programs, including “South Park” and “The Daily Show.”

Under U.S. law, Internet service providers are not liable for copyright infringement perpetrated by their users, provided the service provider is not aware of the infringement and responds "expeditiously" to takedown requests sent by the copyright holder.  (See Section 512 of the Digital Millennium Copyright Act, http://www.copyright.gov/legislation/dmca.pdf)  Viacom claims YouTube is not entitled to immunity in that it had actual knowledge of the infringing activities of its users.

Discovery in the case has revealed some warm if not smoking guns—guns that the parties resisted being made public.  (See Miguel Helft's as-always excellent coverage in The New York Times, and also coverage in The Wall Street Journal.)  Viacom claims it has found a number of internal YouTube emails that make clear the company knew of widespread copyright infringement by its users, though Google characterizes those messages as having been taken out of context.

Perhaps more interesting has been the embarrassing revelation that many (though still a minority) of the Viacom clips, from MTV and Comedy Central programming for example, were posted by Viacom itself.  Indeed, these noninfringing posts were often put on YouTube under the guise of being posted by non-affiliated users in the hopes of giving the clips more credibility!

These "fake grassroots" accounts, as Viacom marketing executives referred to them, made use of as many as 18 outside marketing agencies.  Most embarrassing is that Viacom's own legal team has now admitted that hundreds of the YouTube postings it initially claimed in its list of infringing posts were actually authorized postings by Viacom or its affiliates, disguised to look like unauthorized postings.

(Since 2007, Google has somewhat quieted the concerns of copyright holders over YouTube by introducing filtering technologies that let copyright holders supply reference files that can be digitally compared to weed out infringing copies.  This is an example, for better and for worse, of what Larry Lessig has in mind when he talks of implementing legal rules through software “code.”  Better because it avoids some litigation, worse because the code may be overprotective—filtering out uses that might in fact be legal under “fair use.”)
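The shape of that comparison can be sketched in a few lines.  Real filters, YouTube's included, rely on perceptual fingerprints that survive re-encoding and cropping; the exact-hash version below is a deliberately crude stand-in, with the chunk size and match threshold invented for illustration:

    import hashlib

    def fingerprints(data: bytes, chunk_size: int = 4096) -> set:
        # Hash fixed-size chunks of a media file. A real filter uses
        # perceptual fingerprints robust to re-encoding; exact hashes
        # are a simplification for illustration.
        return {hashlib.sha256(data[i:i + chunk_size]).hexdigest()
                for i in range(0, len(data), chunk_size)}

    def likely_copy(reference: bytes, upload: bytes, threshold: float = 0.8) -> bool:
        # Flag the upload if most of the reference file's chunks appear in it.
        ref, up = fingerprints(reference), fingerprints(upload)
        return bool(ref) and len(ref & up) / len(ref) >= threshold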

Google v. Everyone

What do the two examples have in common?  Both highlight the difficulty of judging the use of information with traditional legal tools of property and borders.

In the first example, China considers some forms of information to be dangerous.  To some extent, in fact, all governments restrict the flow of information in the name of national security, consumer safety, or other government aims.  China (along with Burma and Iran) sits at one end of the control spectrum, while the U.S. and Europe are at the other end.

Google believes, as do many information economists, that more information is always better than less, even when some of it is of poor quality, is outright wrong, or espouses dangerous viewpoints.  Google's view was perhaps best put by Oliver Wendell Holmes, Jr. in his dissent in Abrams v. United States, 250 U.S. 616 (1919):

[T]he ultimate good desired is better reached by free trade in ideas…[T]he best test of truth is the power of the thought to get itself accepted in the competition of the market, and that truth is the only ground upon which their wishes safely can be carried out. That at any rate is the theory of our Constitution.

But even legal systems that believe in the “the marketplace of ideas” as the preferred forum for determining information value can’t resist the temptation sometimes to put their finger on the scales.  Congress and some states have tried and failed repeatedly to censor “indecent” content on the Internet (fortunately, the First Amendment puts a stop to it, but, as John Perry Barlow says, in cyberspace the First Amendment is a local ordinance).  Just this week, Australia came under fire for proposals to beef up the requirements it places on Internet service providers to censor material deemed harmful to children.  The Google convictions in Italy last month suggest that not even Europe is fully prepared to let the marketplace of ideas operate without the worst kind of ex post facto oversight.

Likewise in the Viacom litigation, it's clear regardless of the final determination of legal arguments that some information uses that are illegal are nonetheless valuable to those whose interests are supposedly being protected by the law.  Viacom can make all the noise it wants about "pirates" "stealing" its "intellectual property," as if this were the 1800s and the Barbary Coast.  Those who posted copyrighted material to YouTube were not doing it with the intent of harming Viacom—their intent was just the opposite.  What's really going on is that users—fans!—who value their programming were using YouTube to share and spread their enthusiasm with others.

Yet intent plays no part in copyright infringement.  The law assumes that, as with physical property, any use that is not authorized by the "owner" of the information is, with few exceptions, likely to be financially detrimental.  That is certainly what Viacom claims in the litigation.  But the company's own behavior tells a different story.  Why else would it post its own material, and pretend to be regular users?  Put another way, why is information posted by an anonymous fan more valuable to Viacom than information posted by the company itself?  What is it about an unsanctioned sharing that communicates valuable information to the recipient?

By posting the clips, YouTube users added their own implicit and explicit endorsement to the content.  The fact that Viacom marketing executives pretended to be fans themselves demonstrates the principle that the more information is used, the more valuable it can become.  That’s not always the case, of course, but here the sharing clearly adds value—in fact, it adds new information to the content (the endorsement) that benefits Viacom.

Whether that added value is outweighed by lost revenue to Viacom from users who, having seen the content on YouTube, didn’t watch it (or the commercials that fund it) on an authorized channel ought to be a key consideration in the court’s determination, but in fact it has almost no place in the law of copyright.  Yet Viacom obviously saw that value itself, or it wouldn’t have posted its own clips pretending to be fans of the programming.

Productive v. Destructive Use

Both these cases highlight why traditional property ideas don’t fit well with information uses.  What would work better?  I present what I think is a more useful framework in the book, a view that is so far absent from the law of information.  That framework would analyze information uses not under archaic laws of property but would rather weigh the use as being “productive” or “destructive” or both and determine if, on the whole, the net social value created by the use is positive.  If so, it should not be treated as illegal, regardless of the law.

What do I mean?  Since information can be used simultaneously by everyone and, after use, is still intact if not enhanced by the use, it’s really unhelpful to think about information being “stolen” or, in the censorship context, of being “dangerous.”   Rather, the law should evaluate whether a use adds more value to information than it takes away.  Information use that adds value (reviewing a movie) is productive and should be legal.  A use that only takes value away (for example, identity theft and other forms of Internet fraud) is destructive and should be illegal.  Uses that do both (copyright infringement in the service of promoting the underlying content) should be allowed if the net effect is positive.
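The test reduces to arithmetic, with the enormous caveat that assigning the numbers is the hard part (more on that below).  A sketch, with invented magnitudes:

    def net_social_value(value_added: float, value_destroyed: float) -> float:
        # The proposed test: weigh what a use adds against what it takes away.
        return value_added - value_destroyed

    def should_be_legal(value_added: float, value_destroyed: float) -> bool:
        return net_social_value(value_added, value_destroyed) > 0

    # Hypothetical magnitudes, for illustration only:
    print(should_be_legal(10.0, 0.0))  # movie review: purely productive -> True
    print(should_be_legal(0.0, 8.0))   # identity theft: purely destructive -> False
    print(should_be_legal(10.0, 3.0))  # fan-posted clip promoting a show -> True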

Under the productive/destructive model, Google’s actions in entering and now exiting from China make more sense as both policy and business decisions.  Censoring information is destructive in that it gives users the appearance of complete access where in fact the access has been limited.  That harm should be weighed against the benefit of providing information that otherwise wouldn’t have been available at all to Chinese users.

That the government became more rather than less concerned about Google over time might imply that Google had gotten the balance right—that is, that the Chinese government was increasingly aware that even what it originally thought of as benign information could have the kind of transformative effects it wanted to avoid.

Is China wrong to censor "dangerous" information?  Economically, the answer is yes.  There is a strong correlation between the countries on the "freer" end of the censorship spectrum and those that have gained most financially from the spread of information technology.  The more information there is, the more value gets added by its use, value that is allocated (roughly, sometimes poorly) among those who added the value.

Likewise, evaluating the posting of Viacom clips on YouTube should weigh the productive value of information sharing (promotion and endorsement) against the destructive aspects: lost revenue from paid viewers on an authorized channel supported by cable fees, advertising sponsorship, and purchased copies in whatever media.

Under that kind of analysis, it might turn out that Viacom lost little and gained a great deal from the unpaid services of its fans, and that in fact any true accounting would have credited the fans for $1 billion in generated value rather than the other way around.  Or maybe it was a wash.  But just to count the lost revenue, particularly using the crazy method of modern copyright law (each viewed clip is counted as a lost sale), is certain to misjudge the true extent of the harm, if any.
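A toy calculation shows how much turns on that method.  Every figure below is invented; the point is only the size of the gap between the two ways of counting:

    views = 1_000_000         # hypothetical clip views on YouTube
    price = 1.99              # hypothetical price of an authorized copy
    substitution_rate = 0.02  # hypothetical share of viewers who would have paid

    # "Every view is a lost sale," the method of modern copyright law:
    print(views * price)                      # 1990000.0
    # Count only the views that actually displaced a purchase:
    print(views * substitution_rate * price)  # 39800.0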

A lingering problem in both these examples is the difficulty of determining both the quality and quantity of the productive and destructive uses of the information in question.   How much harm did Google censoring cause?  How much value did YouTube users generate?

We don't know, not because the answers aren't knowable but because the tools for making such determinations are so far very primitive.  Traditional rules of accounting follow the industrial assumptions of physical property (if I have it, then you don't; once I've used it, it's gone or at least greatly depleted), assumptions that information doesn't obey.  They make little or no allowance for the fact that information use can be non-diminishing, or even productive.

So how would we measure the harm to Chinese Internet users from the censored information, or the value of the information they could get before Google left town?  How would we measure the value of "viral" marketing of Viacom programming posted by real (as opposed to "fake grassroots") fans?  How would we measure the actual losses Viacom suffered—not the statutory damages it claims under copyright law, which are surely far too generous?

Well, one problem at a time.  First let’s change the rhetoric about information use, positive and negative, from the language of property to a language that’s better-suited to a global, network economy.  If we do, the metrics will invent themselves.