Category Archives: Digital Life

Meditations in a Privacy Emergency

Emotions ran high at this week’s Privacy Identity and Innovation conference in Seattle.  They usually do when the topic of privacy and technology is raised, and to me that was the real take-away from the event.

As expected, the organizers did an excellent job providing attendees with provocative panels, presentations and keynote talks—including a standout presentation from my former UC Berkeley colleague Marc Davis, who has just joined Microsoft.

There were smart ideas from several entrepreneurs working on privacy-related startups, and deep thinking from academics, lawyers and policy analysts.

There were deep dives into new products from Intel, European history and the metaphysics of identity.

But what interested me most was just how emotional everyone gets at the mere mention of private information, or what is known in the legal trade as “personally identifiable” information.  People get agitated just thinking about how it is generated, collected, distributed and monetized as part of the evolution of digital life.  And pointing out that someone is having an emotional reaction often generates one that is even more primal.

Privacy, like the related problems of copyright, security, and net neutrality, is often seen as a binary issue.  Either you believe governments and corporations are evil entities determined to strip citizens and consumers of all human dignity or you think, as leading tech CEOs have the unfortunate habit of repeating, that privacy is long gone, get over it.

But many of the individual problems that come up are much more subtle than that.  Think of Google Street View, which has generated investigations and litigation around the world, particularly in Germany where, as Jeff Jarvis pointed out, the same citizens think nothing of naked co-ed saunas.

Or how about targeted or personalized or, depending on your conclusion about it, “behavioral” advertising?  Without it, whether on broadcast TV or the web, we don’t get great free content.  And besides, the more targeted advertising is, the less we have to look at ads for stuff we aren’t the least bit interested in and the more likely that an ad isn’t just an annoyance but is actually helpful.

On the other hand, ads that suggest products and services I might specifically be interested in are “creepy.”  (I find them creepy, but I expect I’ll get used to it, especially when they work.)

And what about governments?  Governments shouldn’t be spying on their citizens, but at the same time we’re furious when bad guys aren’t immediately caught using every ounce of surveillance technology in the arsenal.

Search engines, mobile phone carriers and others are berated for retaining data (most of it not even linked to individuals, or at least not directly) and at the same time are required to retain it for law enforcement purposes.  The only difference is the proposed use of the information (spying vs. public safety), which can only be known after data collection.

As comments from Jeff Jarvis and Andrew Keen in particular got the audience riled up, I found myself having an increasingly familiar but strange response.  The more contentious and emotional the discussion became, the more I found myself agreeing with everything everyone was saying, including those who appeared to be violently disagreeing.

We should divulge absolutely everything about ourselves!  No one should have any information about us without our permission, which governments should oversee because we’re too stupid to know when not to give it!  We need regulators to protect us from corporations; we need civil rights to protect us from regulators.

Logical Systems and Non-Rational Responses

I can think of at least two important explanations for this paradox.  The first is a mismatch of thought systems.  Conferences, panel discussions, essays and regulation are all premised on rational thinking, logic, and reason.  But the more the subject of these conversations turns to information that describes our behavior, our thoughts, and our preferences, the more the natural response is not rational but emotional.

Try having a logical conversation with an infant—or a dog, or a significant other who is upset—about its immediate needs.  Try convincing someone that their religion is wrong.  Try reasoning your way into or out of a sexual preference.  It just doesn’t work.

Which raises at least one interesting problem.  Privacy is not only an emotional subject, it’s also increasingly a profitable one.  According to a recent Wall Street Journal article, venture capitalists are now pouring millions into privacy-related startups.  Intel just agreed to pay nearly $8 billion for security software maker McAfee.  Every time Facebook blinks, the blogosphere lights up.

So the mismatch of thought systems will lead to more, not fewer, collisions all the time.

Given that, how does a company develop a strategic plan in the face of unpredictable and emotional response from potential users, the media, and regulators?  Strategic planning, to the extent anyone really does it seriously, is based on cold, hard facts—as far from emotion as its practitioners can possibly get.  The patron saint of management science, after all, is Frederick Winslow Taylor who, among other things, invented time-and-motion studies to achieve maximum efficiency of human “machines.”

But the rational vehicle of planning simply crumples against the brick wall of emotion.

As I wrote in an early chapter of “The Laws of Disruption,” for example, companies experimenting with early prototypes of radio frequency identification (RFID) tags (still not ready for mass deployment ten years later) could never have predicted the violent protests that accompanied tests of the tags in warehouses and factories.

Much of that protest was led by a woman who believes that RFID tags are literally the technology prophesied by the Book of Revelation as the sign of the Antichrist.  Assuming one is not an agent of the devil, or in any case isn’t aware that one is, how does one plan for that response?

The more that intimacy becomes a feature of products and services, including products and services aimed at managing intimate information, the more the logical religion of management science will need to incorporate non-rational approaches to management, scenario planning and economics.

It won’t be easy—the science of management science isn’t very scientific in the first place and, as I just said, changing someone’s religion doesn’t happen through rational arguments—the kind I’m making right now.

The Bankruptcy of the Property Metaphor for Information

The second problem that kept hitting me over the head during PII 2010 was one of linguistics: the language everyone uses to talk about (or around) privacy.  We speak of ownership, stealing, tracking, hijacking, and controlling.  This is the language of personal property, and it’s an even worse fit for the privacy conversation than the mental discipline of logic.

In discussions about information of any kind, including creative works as well as privacy and security, the prevailing metaphor is to talk about information as a kind of possession.  What kind?  That’s part of the problem.  Given the youth of digital life and the early evolution of our information economy, most of us really only understand one kind of property, and that is where our minds inevitably and often unintentionally go.

We think of property as the moveable, tangible variety—cattle, collectibles, commodities—that in legal terminology goes by the name “chattels.”

Only now has that metaphor become a serious obstacle.  While there has been a market for information for centuries, the revolutionary feature of digital life is that it has, for the first time in human history, separated information from the physical containers in which it has traditionally been encapsulated, packaged, transported, retailed, and consumed.

A book is not the ideas in the book, but a book can be bought, sold, controlled, and destroyed.  A computer tape containing credit card transactions is not the decision-making process of the buyers and sellers of those transactions, but a tape can be lost, stolen, or sold.

When information could only be used by first reducing it to physical artifacts, the property metaphor more-or-less worked.  Control the means of production, and you controlled the flow of information.  When Gutenberg perfected movable type, the first thing he printed was the Bible—still in Latin, though the press soon put vernacular translations within everyone’s reach.  Hand-made manuscripts and a dead language had given the medieval Catholic Church a monopoly on the mystical.  Turn the means of production over to the people and you have the Protestant Reformation and the beginning of censorship—a legal control on information.

The digital revolution makes the liberation of information all the more potent.  Yet in all conversations about information value, most of us move seamlessly and dangerously between the medium—the artifact—and the message—the information.

But now that information can be used in a variety of productive and destructive ways without ever taking a tangible form, the property metaphor has become bankrupt.  Information is not property the way a barrel of oil is property.  The barrel of oil can only be possessed by one person at a time.  It can be converted, but only once, into lubricants or gasoline, or left in its crude form.  Once the oil is burned, the property is gone.  In the meantime, the barrel of oil can be stolen, tracked, and moved from one jurisdiction to another.

Digital information isn’t like that.  Everyone can use it at the same time.  It exists everywhere and nowhere.  Once it’s used, it’s still there, and often more valuable for having been used.  It can be remixed, modified, and adapted in ways that create new uses, even as the original information remains intact and usable in the original form.

Tangible property obeys the law of supply and demand, as does information forced into tangible containers.  But information set free from the mortal coil obeys only the law of networks, where value is a function of use and not of scarcity.
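The contrast between the two laws can be made concrete with a rough sketch.  Purely for illustration, assume a scarce good’s total value is capped by its supply, while a network good’s value grows with the number of connected users—Metcalfe’s n(n−1)/2 formulation is one common, if contested, approximation.  All the numbers below are made up:

```python
def scarce_good_value(unit_value, supply):
    """A rivalrous good: total value is capped by supply, however high demand runs."""
    return unit_value * supply

def network_good_value(users, value_per_link=1.0):
    """Metcalfe-style approximation: value grows with the number of possible connections."""
    return value_per_link * users * (users - 1) / 2

# A barrel of oil is consumed once; information gains value with each additional user.
print(scarce_good_value(80, 100))   # 100 barrels at $80 each: value fixed at 8000
print(network_good_value(10))       # 10 users -> 45 possible links
print(network_good_value(100))      # 100 users -> 4950 links: value compounds with use
```

The point of the sketch is only the shape of the curves: doubling the supply of a chattel at most doubles its value, while adding users to an information network grows value faster than linearly.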

But once the privacy conversation (as well as the copyright conversation) enters the realm of the property metaphor, the cognitive dissonance of thinking everyone is right (or wrong) begins.  Are users of copyrighted content “pirates”?  Or are copyright holders “hoarders”?  Yes.

(“Intellectual property,” as I’ve come to accept, is an oxymoron.  That’s hard for an IP lawyer to admit!)

It’s true that there are other kinds of property that might better fit our emerging information markets.  Real estate (land) is tangible but immovable.   Use rights (e.g., a ticket to a movie theater, the right to drill under someone’s land or to block their view) are also long established.

But both the legal framework and the economic theory describing these kinds of property are underdeveloped at the very least.  Convincing everyone to shift their property paradigm would be hard when the new location is so barren.

Here are a few examples of the problem from the conference.  What term, one speaker asked the audience, would make consumers most comfortable with a product that helps them protect their privacy?  Do we prefer “bank,” “vault,” “dossier,” or “account”?

“Shouldn’t consumers own their own information?” an attendee asked, a double misuse of the word “own.”   Do you mean the media on which information may be stored or transferred, or do you mean the inherent value of the bits (which is nothing)?  In what sense is information that describes characteristics or behaviors of an individual that person’s “own” information?

And what does it mean to “own” that information?  Does ownership bring with it the related concepts of being bought, sold, transferred, shared, waived?  What about information that is created by combining information—whether we are talking about Wikipedia or targeted advertising?  Does everyone or no one own it?

And by ownership, do we mean the rights to derive all value from it, even when what makes information valuable is the combining, processing, analyzing and repurposing done by others?  Doesn’t that part of the value generation count for something in divvying up the monetization of the resulting information products and services?  Or perhaps everything?

Human beings need metaphors to discuss intangible concepts like immortality, depression, and information.  But increasingly I believe that the property metaphor applied to information is doing more harm than good.  It makes every conversation about privacy a conversation of generalizations, and generalizations encourage the visceral responses that make it impossible to make any progress.

Perhaps that’s why survey after survey reveals both that consumers care very much about the erosion of a zone of privacy in their increasingly digital lives and that, at the same time, they give up intimate information the moment a website asks for it.  (I agree with everything and its opposite.)

There’s also a more insidious use of language and metaphor to steer the conversation toward one view of property or another—privacy as personal property or privacy as community property.  Consider, for example, how the question is asked, e.g.:

“My cell phone tracks where I go”

or

“My cell phone can tell me where I am.”

A recent series of articles in The Wall Street Journal dealing with privacy made many factual errors in describing current practices in online advertising.  (I won’t bother linking to the series, because the Journal treats the information in those articles as private property and won’t share it unless you pay for a subscription; there is, however, a “free” transcript of a conversation with the author on NPR’s “Fresh Air.”)  But those errors aside, what made the articles sensational was not so much what they reported but the adjectives and pronouns that went with the facts.

Companies know a lot “about you,” for example, from your web surfing habits (in fact they know nothing about “you,” but rather about your computer, whoever may be using it), cookies are a kind of “surveillance technology” that “track” where “you” go and what “you do,” and often “spawn” themselves without “your” knowledge.
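The mechanics bear this out: a tracking cookie is typically nothing more than an opaque identifier tied to a browser, not to a name.  A minimal sketch of what an ad network actually records—all identifiers and site names here are hypothetical:

```python
import uuid

def issue_cookie():
    """Assign an opaque, randomly generated ID to whatever browser made the request."""
    return f"visitor_id={uuid.uuid4().hex}"

def log_visit(cookie, url, log):
    """The network's record links pages viewed to a browser ID, not to a person."""
    log.setdefault(cookie, []).append(url)

log = {}
cookie = issue_cookie()  # set the first time this browser visits any site in the network
log_visit(cookie, "news-site.example/politics", log)
log_visit(cookie, "shoe-store.example/sneakers", log)

# The resulting profile describes a browser's history; anyone who uses that
# browser contributes to it, and nothing in it identifies an individual by name.
print(log[cookie])
```

Whether such a profile can later be linked to a person is a separate question—but the raw data the cookie generates is about a machine, which is exactly the distinction the Journal’s pronouns obscured.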

Assumptions about the meaning of loaded terms such as ownership, identity and what it means for information to be private poison the conversation.  But anyone raising that point is immediately accused of shilling for corporations or law enforcement agencies who don’t want the conversation to happen at all.

A User and Use-based Model – Productive and Destructive Uses

So if the property metaphor is failing to advance an important conversation—both of a business and policy nature—what metaphor works better?

As I wrote in “Laws of Disruption,” I think a better way to talk about information as an economic good is to focus on information users and information uses.  “Private” information, for starters, is private only depending on the potential user.  Whether that user is my spouse, my employer, an advertiser or a law-enforcement agent, in other words, can make all the difference in the world as to whether I consider some information private.  Context is nearly everything.

Example:  Is location-tracking software on cell phones or embedded chips an invasion of privacy?  It is if a government agency is intercepting the signals and using them to (fill in the blank).  But ask a parent trying to find a missing child, or an adult child trying to find a missing parent suffering from dementia.  It’s not the technology; it’s the user and the use.

Use, likewise, often empties much of the emotional baggage that goes with conversations about privacy in the abstract.  A website asks for my credit card number—is that an invasion of my privacy?  Well, not if I’m trying to pay for my new television set from Amazon.  On the other hand, if I’m signing up for a free email newsletter, there’s certainly something suspicious about the question.

To simplify a long discussion, I prefer to talk about information of all varieties through a lens of “productive” (uses that add value to information, e.g., collaboration) and “destructive” (uses that reduce the value of information, e.g., “identity” “theft”).  Though it may not be a perfect metaphor (many uses can be both productive and destructive, and the metrics for weighing both are undeveloped at best), I find it works much better in conversations about the business and policy of information.

That is, assuming one isn’t simply in the mood to vent and rant, which can also be fun, if not productive.

New white paper from PFF on Title II "sins"

The Progress and Freedom Foundation has just published a white paper I wrote for them titled “The Seven Deadly Sins of Title II Reclassification (NOI Remix).”  This is an expanded and revised version of an earlier blog post that looks deeply into the FCC’s pending Notice of Inquiry regarding broadband Internet access. You can download a PDF here.

I point out that beyond the danger of subjecting broadband Internet to extensive new regulations under the so-called “Third Way” approach outlined by FCC Chairman Julius Genachowski, a number of other troubling features in the Notice indicate an even broader agenda for the agency with regard to the Internet.

These include:

  • Pride: As the FCC attempts to define what services would be subjected to reclassification, the agency runs the risk of both under- and over-inclusion, which could harm consumers, network operators, and content and applications providers.
  • Lust: The agency is reaching out for additional powers beyond its reclassification proposals — including an effort to wrest privacy enforcement powers from the Federal Trade Commission and putting itself in charge of cybersecurity for homeland security.
  • Anger: The “Third Way” may dramatically expand the scope of federal wiretapping laws, requiring law enforcement “back doors” for a wide range of products and services.
  • Gluttony: Reclassifying broadband opens the door to state and local government regulation, which would overwhelm Internet access with a deluge of conflicting, and innovation-killing, laws, rules and new consumer taxes.
  • Sloth: As the FCC looks for a legal basis to defend reclassification, basic activities — such as caching, searching, and browsing — may for the first time be included in the category of services subject to “common carrier” regulation.
  • Vanity: Though wireless networks face greater challenges from the broadband Internet than wireline networks, the FCC seems poised to impose more, not less, regulation on wireless broadband.
  • Greed: Reclassification of broadband services could vastly expand the contribution base for the Universal Service Fund, adding new consumer fees while supersizing this important, but exceedingly wasteful, program.

I’m grateful to PFF, especially Berin Szoka, Adam Marcus, Mike Wendy and Adam Thierer, for their interest and help in publishing the article.

Deconstructing the Google-Verizon Framework

I’ve just published a long analysis for CNET of the proposed legislative framework presented yesterday by Google and Verizon.

The proposal has generated howls of anguish from the usual suspects (for a sampling of the vitriol, see Cecilia Kang’s “Silicon Valley criticizes Google-Verizon accord” in The Washington Post; Matthew Lasar’s “Google-Verizon NN Pact Riddled with Loopholes” on Ars Technica; and Marguerite Reardon’s “Net neutrality crusaders slam Verizon, Google” at CNET).

But after going through the framework and comparing it more-or-less line for line with what the FCC proposed back in October, I found there were very few significant differences.  Surprisingly, much of the outrage being unleashed against the framework relates to provisions and features that are identical to the FCC’s Notice of Proposed Rulemaking (NPRM), which of course many of those yelling the loudest ardently support.

At the outset, one obvious difference that many reporters and commentators keep missing (in some cases, intentionally), is that the Google-Verizon framework has absolutely no legal significance.  It’s not a treaty, accord, agreement, deal, pact, contract or business arrangement—all terms still being used to describe it.  It doesn’t bind anyone to do anything, including Google and Verizon.

All that was released yesterday was a legislative proposal they hope will be taken up by lawmakers who actually have the authority to write legislation.  But you’d think from some of the commentary that this was the Internet equivalent of the secret treaty between Germany and Russia at the start of World War II.  Some commentators sound genuinely disappointed that something more nefarious, as had been widely and wildly reported last week, didn’t emerge.

Summary – Compare and Contrast

Let’s start with the similarities, described in more detail in the CNET piece:

  • Both propose nearly identical neutrality rules: no blocking of lawful content, no blocking of lawful devices, transparency about network management practices, and nondiscrimination.  Of these, only the wording of the nondiscrimination rule differs (more on that below).
  • Both limit the application of the rules to principles of reasonable network management.
  • Both exclude from application of the rules certain IP-based services that may run on the same infrastructure but which are offered to business or consumer customers as paid services, such as digital cable or digital voice today and others perhaps tomorrow.  The NPRM calls these “managed or specialized services,” the framework refers to them as “differentiated services.”
  • Both propose that the FCC enforce the rules by adjudicating complaints on a “case-by-case” basis.
  • Both recognize that some classes of Internet content (e.g., voice and video) must receive priority treatment to maintain their integrity, and don’t consider such prioritization by class to be a violation of the rules.
  • Both encourage the resolution of network management and other neutrality related disputes through technical organizations, engineering task forces, and other kinds of self-regulation, much as the Internet protocols have always been developed and maintained.

Again, much of the ire raised at the framework relates to aspects for which there is no material difference with the NPRM.

Now let’s get to the differences:

  • The Google-Verizon framework would exclude wireless broadband Internet from application of the rules, at least for now.  Though the NPRM recognized there were significant limits to the existing wireless infrastructure (spectrum, speed, coverage, towers) that made it more difficult to allow customers to use whatever bandwidth-hogging applications they wanted, the NPRM came down on the side of applying the rules to wireless.  This was perhaps the most contentious feature of the NPRM, judging from the comments filed.

Google has notably changed its tune on wireless broadband.  In the joint filing with the FCC on the NPRM, the companies acknowledged this was an area where they held opposite views—Google believed the rules should apply to wireless broadband, Verizon did not.  Now both agree that applying the rules here would do more harm than good, if only until the market and technology evolve further.

  • The framework would deny the FCC the power to expand or enhance the rules through further rulemakings.  Though the framework is admittedly not at its clearest here, what Google and Verizon seem to have in mind is that Congress, not the FCC, would enact the neutrality rules into law and give the FCC the power to enforce them.

But the FCC would remain unable to make its own rules or otherwise regulate broadband Internet access, the current state of the law as was most recently affirmed by the D.C. Circuit in the Comcast case.  The framework, in other words, joins the chorus arguing against the FCC’s effort to reclassify broadband under Title II and also imagines the NPRM would not be completed.

Reasonable Network Management

Let me just highlight one area of common wording that has received a great deal of negative feedback as applied to the framework and one area of difference.

Consider the definitions of “reasonable network management” that appear in both documents.

First, the NPRM:

Subject to reasonable network management, a provider of broadband Internet access service must treat lawful content, applications, and services in a nondiscriminatory manner.

We understand the term “nondiscriminatory” to mean that a broadband Internet access service provider may not charge a content, application, or service provider for enhanced or prioritized access to the subscribers of the broadband Internet access service provider, as illustrated in the diagram below. We propose that this rule would not prevent a broadband Internet access service provider from charging subscribers different prices for different services.

Reasonable network management consists of: (a) reasonable practices employed by a provider of broadband Internet access service to (i) reduce or mitigate the effects of congestion on its network or to address quality-of-service concerns; (ii) address traffic that is unwanted by users or harmful; (iii) prevent the transfer of unlawful content; or (iv) prevent the unlawful transfer of content; and (b) other reasonable network management practices.

Now, the Google-Verizon framework:

Broadband Internet access service providers are permitted to engage in reasonable network management. Reasonable network management includes any technically sound practice: to reduce or mitigate the effects of congestion on its network; to ensure network security or integrity; to address traffic that is unwanted by or harmful to users, the provider’s network, or the Internet; to ensure service quality to a subscriber; to provide services or capabilities consistent with a consumer’s choices; that is consistent with the technical requirements, standards, or best practices adopted by an independent, widely-recognized Internet community governance initiative or standard-setting organization; to prioritize general classes or types of Internet traffic, based on latency; or otherwise to manage the daily operation of its network.

Note here that the “unwanted by or harmful to users” language, for which the framework was skewered yesterday, appears in nearly identical form in the NPRM.

Nondiscrimination

Here’s how the FCC’s “nondiscrimination” rule was proposed:

Subject to reasonable network management, a provider of broadband Internet access service must treat lawful content, applications, and services in a nondiscriminatory manner.

And here it is from the framework:

In providing broadband Internet access service, a provider would be prohibited from engaging in undue discrimination against any lawful Internet content, application, or service in a manner that causes meaningful harm to competition or to users.  Prioritization of Internet traffic would be presumed inconsistent with the non-discrimination standard, but the presumption could be rebutted.

That certainly sounds different (with the addition of “undue” as a qualifier and the requirement of a showing of “meaningful harm”), but here’s the FCC’s explanation of what it means by nondiscrimination and the limits that would apply under the NPRM:

We understand the term “nondiscriminatory” to mean that a broadband Internet access service provider may not charge a content, application, or service provider for enhanced or prioritized access to the subscribers of the broadband Internet access service provider….We propose that this rule would not prevent a broadband Internet access service provider from charging subscribers different prices for different services.

We believe that the proposed nondiscrimination rule, subject to reasonable network management and understood in the context of our proposal for a separate category of “managed” or “specialized” services (described below), may offer an appropriately light and flexible policy to preserve the open Internet. Our intent is to provide industry and consumers with clearer expectations, while accommodating the changing needs of Internet-related technologies and business practices. Greater predictability in this area will enable broadband providers to better plan for the future, relying on clear guidelines for what practices are consistent with federal Internet policy. First, as explained in detail below in section IV.H, reasonable network management would provide broadband Internet access service providers substantial flexibility to take reasonable measures to manage their networks, including but not limited to measures to address and mitigate the effects of congestion on their networks or to address quality-of-service needs, and to provide a safe and secure Internet experience for their users. We also recognize that what is reasonable may be different for different providers depending on what technologies they use to provide broadband Internet access service (e.g., fiber optic networks differ in many important respects from 3G and 4G wireless broadband networks). We intend reasonable network management to be meaningful and flexible.

Second, as explained below in section IV.G, we recognize that some services, such as some services provided to enterprise customers, IP-enabled “cable television” delivery, facilities-based VoIP services, or a specialized telemedicine application, may be provided to end users over the same facilities as broadband Internet access service, but may not themselves be an Internet access service and instead may be classified as distinct managed or specialized services. These services may require enhanced quality of service to work well. As these may not be “broadband Internet access services,” none of the principles we propose would necessarily or automatically apply to these services.

In this context, with a flexible approach to reasonable network management, and understanding that managed or specialized services, to which the principles do not apply in part or full, may be offered over the same facilities as those used to provide broadband Internet access service, we believe that the proposed approach to nondiscrimination will promote the goals of an open Internet.

Though the FCC doesn’t use the words “undue” and “meaningful harm,” the qualifying comments seem to suggest something quite similar.  So are the differences actually meaningful in the end?  Meaningful enough to generate so much Sturm und Drang?  You make the call.

Copyright Office Weighs in on Awkward Questions of Software Law

I dashed off a piece for CNET today on the Copyright Office’s cell phone “jailbreaking” rulemaking earlier this week.  Though there has already been extensive coverage (including solid pieces in The Washington Post, a New York Times editorial, CNET, and Techdirt), there were a few interesting aspects to the decision I thought were worth highlighting.

Most notably, I was interested that no one had discussed the possibility of, and the process by which, Apple or other service providers could appeal the rulemaking.  Ordinarily, parties who object to rules promulgated by administrative agencies can file suit in federal district court under the Administrative Procedure Act.  Such suits are difficult to win, as courts give deference to administrative determinations and review them only for errors of law.  But a win for the agency is by no means guaranteed.

The Appeals Process

What I found in interviewing several leading high-tech law scholars and practitioners is that no one was really clear how, or even whether, that process applied to the Copyright Office.  In the twelve years that the Register of Copyrights has been reviewing requests for exemptions, there have been no reported cases of efforts to challenge those rules and have them overturned.

With the help of Fred von Lohmann, I was able to obtain copies of briefs in a 2006 lawsuit filed by TracFone Wireless that challenged an exemption (modified and extended in Monday’s rulemaking) allowing cell phone users to unlock their phones from an authorized network in hopes of moving to a different network.  TracFone sued the Register in a Florida federal district court, claiming that both the process and substance of the exemption violated the APA and TracFone’s due process rights under the Fifth Amendment.

But the Justice Department, in defending the Copyright Office, made some interesting arguments.  They claimed, for example, that until TracFone suffered a particular injury as a result of the rulemaking, the company had no standing to sue.  Moreover, the government argued that the Copyright Office is not subject to the APA at all, since it is an organ of Congress and not a regulatory agency.  The briefs hinted at the prospect that rulemakings from the Copyright Office are not subject to judicial review of any kind, even one subject to the highly limited standard of “arbitrary and capricious.”

There was, however, no published opinion in the TracFone case, and EFF’s Jennifer Granick told me yesterday she believes the company simply abandoned the suit.  No opinion means the judge never ruled on any of these arguments, and so there is still no precedent for how a challenge to a DMCA rulemaking would proceed and under what legal standards and jurisdictional requirements.

Should Apple decide to pursue an appeal (an Apple spokesperson “declined to comment” on whether the company was considering such an action, and read me the brief statement the company has given to all journalists this week), it would be plowing virgin fields in federal jurisdiction.  That, as we know, can often lead to surprising results—including, just as an example, a challenge to the Copyright Office’s institutional ability to perform rulemakings of any kind.

The Copyright Office Moves the Fair Use Needle…a Little

A few thoughts on the substance of the rulemaking, especially as it shines light on growing problems in applying copyright law in the digital age.

Since the passage of the 1998 revisions to the Copyright Act known as the Digital Millennium Copyright Act, the Register of Copyrights is required every three years to review requests to create specific classes of exemptions to some of the key provisions of the law, notably the parts that prohibit circumvention of security technologies such as DRM or other forms of copy protection.

The authors of the DMCA with some foresight recognized that the anti-circumvention provisions rode on the delicate and sharp edge where static law meets rapidly-evolving technology and new business innovation.  Congress wanted to make sure there was a process that ensured the anti-circumvention provisions did not lead to unintended consequences that hindered rather than encouraged technological innovation.  So the Copyright Office reviews requests for exemptions with that goal in mind.

In the rulemaking completed on Monday, of course, one important exemption approved by the Register was one proposed by the Electronic Frontier Foundation, which asked for an exemption for “jailbreaking” cell phones, especially iPhones.

Jailbreaking allows the customer to override security features of the iPhone’s firmware that limit which third party applications can be added to the phone.  Apple strictly controls which third party apps can be downloaded to the phone through the App Store, and has used that control to ban apps with, for example, political or sexual content.  Of course, the review process also ensures that apps are technically compatible with the phone’s other software, don’t unduly harm performance, and aren’t duplicative of apps already approved.

Jailbreaking the phone allows the customer to add whatever apps they want, including those rejected by or simply never submitted to Apple in the first place, for whatever reason.

In approving the exemption, the Copyright Office noted that jailbreaking probably does involve copyright infringement.  The firmware must be altered as part of the process, and that alteration violates Apple’s legal monopoly on derivative or adapted works.  But the Register found that such alteration was de minimis and approved the exemption based on the concept of “fair use.”

Fair use, codified in Section 107 of the Copyright Act, holds that certain uses of a copyrighted work that would otherwise be reserved to the rights holder are not considered infringement.  These include uses that have positive social benefits but which the rights holder as a monopolist might be averse to permitting under any terms, such as quotations in a potentially-negative review.

EFF had argued initially that jailbreaking was not infringement at all, but the Register rejected that argument.  Fair use is a much weaker rationale, as it begins by acknowledging a violation, though one excused by law.  The law of fair use, as I note in the piece, has also been in considerable disarray since the 1980’s, when courts began to focus almost exclusively on whether the use (technically, fair use is an affirmative defense to a claim of infringement) harmed the potential commercial prospects for the work.

Courts are notoriously bad at evaluating product markets, let alone future markets.  So copyright holders now simply argue that future markets, thanks to changing technology, could include anything, and that therefore any use has the potential to harm the commercial prospects of their work.  So even noncommercial uses by people who have no intention of “competing” with the market for the work are found to have infringed, fair use notwithstanding.

But in granting the jailbreaking exemption, the Copyright Office made the interesting and important distinction between the market for the work and the market for the product or service in which the work is embedded.

Jailbreaking, of course, has the potential to seriously undermine the business strategy Apple has carefully designed for the iPhone and, indeed, for all of its products, which is to tightly control the ecosystem of uses for that product.

This ensures product quality, on the one hand, but it also means Apple is there to extract fees and tolls from pretty much any third party it wants to, on technical and economic terms it can dictate.  Despite its hip reputation, Apple’s technical environment is more “closed” than Microsoft’s.  (The open source world of Linux sits at the other end of the spectrum.)

In granting the exemption, the Copyright Office rejected Apple’s claim that jailbreaking harmed the market for the iPhone.  The fair use analysis, the Register said, focuses on the market for the protected work, which in this case is the iPhone’s firmware.  Since the modifications needed to jailbreak the firmware don’t harm the market for the firmware itself, the infringing use is fair and legally excused.   It doesn’t matter, in other words, that jailbreaking has a potentially big commercial impact on the iPhone service.

That distinction is the notable feature of this decision in terms of copyright law.  Courts, and now the Copyright Office, are well aware that technology companies try to leverage the monopoly rights granted by copyright to create legal monopolies on uses of their products or services.  In essence, they build technical controls into the copyrighted work that limit who can use the product or service and how, then claim their intentional incompatibilities are protected by law.

A line of cases involving video game consoles, printer cartridges and software applications generally has been understandably skeptical of efforts to use copyright in this manner, which quickly begins to smell of antitrust.  Copyright is a monopoly—that is, a trust.  So it’s not surprising that its application can leak into concerns over antitrust.  The law strives to balance the need for the undesirable monopoly (incentives for authors) with the risks to related markets (restraint of trade).

As Anthony Falzone put it in a blog post at the Stanford Center for Internet and Society, “The Library went on to conclude there is no basis for Apple to use copyright law to ‘protect[] its restrictive business model’ and the concerns Apple articulated about the integrity of the iPhone’s ‘ecosystem’ are simply not harms that would tilt the fair use analysis Apple’s way.”

The exemption granted this week follows the theory that protecting the work itself is what matters, not the controlled market that ownership of the work allows the rights holder to create.

The bottom line here:  messing with the firmware is a fair use because it doesn’t damage the market for the firmware, regardless of (or perhaps especially because of) its impact on the market for the iPhone service as Apple has designed it.  That decision is largely consistent with case law evaluating other forms of technical lockout devices.

The net result is that it becomes harder for companies to use copyright as a legal mechanism to fend off third parties who offer replacement parts, add-ons, or other features that require jailbreaking to ensure compatibility.

Which is not to say that Apple or anyone else trying to control the environment around copyright-protected software is out of luck.  As I note in the CNET piece, the DMCA is just one, and perhaps the weakest arrow in Apple’s quiver here.  Just because jailbreaking has now been deemed a fair use does not mean Apple is forced to accommodate any third party app.  Not by a long shot.

Jailbreaking the iPhone remains a breach of the user agreement for both the device and the service.  It still voids the warranty and still exposes the customer to actions Apple can legally take to enforce the agreement, including cancelling service or imposing early termination penalties.  Apple can also still take technical measures, such as refusing to update or upgrade jailbroken phones, to keep out unapproved apps.

Contrary to what many comments have said in some of the articles noted above, the DMCA exemption does not constitute a “get out of jail free” card for users.

It’s true that Apple can no longer rely on the DMCA (and the possibility of criminal enforcement by the government) to protect the closed environment of the iPhone.  But consumers can still waive legal rights—including the right to fair use—in agreeing to a contract, license agreement, or service agreement.  (In some sense that’s what a contract is, after all—agreement by two parties to waive various rights in the interest of a mutual bargain.)

Ownership Rights to Software Remain a Mystery

A third interesting aspect to the Copyright Office’s rulemaking has to do with the highly confused question of software ownership.  For largely technical reasons, software has moved from intangible programs that of necessity had to be copied onto physical media (tapes, disks, cartridges) in order to be distributed, to intangible programs distributed electronically (software as a service, cloud computing, etc.).  That technical evolution has made the already tricky problem of ownership even trickier.

Under copyright law, the owner of a “copy” of a work has certain rights, including the right to resell their copy.  The so-called “first sale doctrine” makes legal the secondary market for copies, including used book and record stores, and much of what gets interesting on Antiques Roadshow.

But the right to resell a copy of the work does not affect the rights holders’ ability to limit the creation of new copies, or of derivative or adapted works based on the original.  For example, I own several pages of original artwork used in 1960’s comic books drawn by Jack Kirby, Steve Ditko, and Gene Colan.

While Marvel still owns the copyright to the pages, I own the artifacts—the pages themselves.  I can resell the pages or otherwise display the artifact, but I have no right until copyright expires to use the art to produce and sell copies or adaptations, any more than the owner of a licensed Mickey Mouse t-shirt can make Mickey Mouse cartoons.

(Mike Masnick the other day had an interesting post about a man who claims to have found unpublished lost negatives made by famed photographer Ansel Adams.  Assuming the negatives are authentic and there’s no evidence they were stolen at some point, the owner has the right to sell the negatives.  But copyright may still prohibit him from using the negatives to make or sell prints of any kind.)

Software manufacturers and distributors are increasingly trying to make the case that their customers no longer receive copies of software but rather licenses to use software owned by the companies.  A license is a limited right to make use of someone else’s property, such as a seat in a movie theater or permission to drive a car.

As software is increasingly disconnected from embodiment in physical media, the legal argument for license versus sale gets stronger, and it may be that over time this debate will be settled in favor of the license model, which comes with different and more limited rights for the licensee than the sale of a copy.  (There is no “first sale” doctrine for licenses.  They can be canceled under terms agreed to in advance by the parties.)

For now, however, debate rages as to whether and under what conditions the use of software constitutes the sale of a copy versus a license to use.  That issue was raised in this week’s rulemaking several times, notably in a second exemption dealing with unlocking phones from a particular network.

Under Section 117 of the Copyright Act, the “owner of a copy” of a computer program has certain special rights, including the right to make a copy of the software (e.g. for backup purposes, or to move it from inert media to RAM) or modify it when doing so is “essential” to make use of the copy.

Unlocking a phone to move it to another network, particularly a used phone being recycled, necessarily requires at least minor modification, and the question becomes whether the recycler or anyone lawfully in possession of a cell phone “owns a copy” of the firmware.

Though this issue gave the Copyright Office great pause and lots of pages of analysis, ultimately they sensibly hedged on the question of copy versus license.  The Register did note, however, that Apple’s license agreement was “not a model of clarity.”

In the interests of time, let me just say here that this is an issue that will continue to plague the software industry for some time to come.  It is a great example of how innovation continues to outpace law, with unhappy and unintended consequences.  For more on that subject, see Law Seven (copyright) and Law Nine (software) of “The Laws of Disruption.”

After the deluge, more deluge

If I ever had any hope of “keeping up” with developments in the regulation of information technology—or even the nine specific areas I explored in The Laws of Disruption—that hope was lost long ago.  The last few months I haven’t even been able to keep up just sorting the piles of printouts of stories I’ve “clipped” from just a few key sources, including The New York Times, The Wall Street Journal, CNET News.com and The Washington Post.

I’ve just gone through a big pile of clippings that cover April-July.  A few highlights:  In May, YouTube surpassed 2 billion daily hits.  Today, Facebook announced it has more than 500,000,000 members.   Researchers last week demonstrated technology that draws device power from radio waves.

If the size of my stacks is any indication of activity level, the most contentious areas of legal debate are, not surprisingly, privacy (Facebook, Google, Twitter et al.), infrastructure (Net neutrality, Title II and the wireless spectrum crisis), copyright (the secret ACTA treaty, Limewire, Google v. Viacom), free speech (China, Facebook “hate speech”), and cyberterrorism (Sen. Lieberman’s proposed legislation expanding executive powers).

There was relatively little development in other key topics, notably antitrust (Intel and the Federal Trade Commission appear close to resolution of the pending investigation; Comcast/NBC merger plodding along).  Cyberbullying, identity theft, spam, e-personation and other Internet crimes have also gone eerily, or at least relatively, quiet.

Where are We?

There’s one thing that all of the high-volume topics have in common—they are all moving increasingly toward a single topic, and that is the appropriate balance between private and public control over the Internet ecosystem.  When I first started researching cyberlaw in the mid-1990’s, that was truly an academic question, one discussed by very few academics.

But in the interim, TCP/IP, with no central authority or corporate owner, has pursued a remarkable and relentless takeover of every other networking standard.  The Internet’s packet-switched architecture has grown from simple data file exchanges to email, the Web, voice, video, social network and the increasingly hybrid forms of information exchanges performed by consumers and businesses.

As its importance to both economic and personal growth has expanded, anxiety over how and by whom that architecture is managed has understandably developed in parallel.

(By the way, as Morgan Stanley analyst Mary Meeker pointed out this spring, consumer computing has overtaken business computing as the dominant use of information technology, with a trajectory certain to open a wider gap in the future.)

The locus of the infrastructure battle today, of course, is in the fundamental questions being asked about the very nature of digital life.  Is the network a piece of private property operated subject to the rules of the free market, the invisible hand, and a wondrous absence of transaction costs?  Or is it a fundamental element of modern citizenship, overseen by national governments following their most basic principles of governance and control?

At one level, that fight is visible in the machinations between governments (U.S. vs. E.U. vs. China, e.g.) over what rules apply to the digital lives of their citizens.  Is the First Amendment, as John Perry Barlow famously said, only a local ordinance in Cyberspace?  Do E.U. privacy rules, being the most expansive, become the default for global corporations?

At another level, the lines have been drawn even sharper between public and private parties, and in side-battles within those camps.  Who gets to set U.S. telecom policy—the FCC or Congress, federal or state governments, public sector or private sector, access providers or content providers?  What does it really mean to say the network should be “nondiscriminatory,” or to treat all packets anonymously and equally, following a “neutrality” principle?

As individuals, are we consumers or citizens, and in either case how do we voice our view of how these problems should be resolved?  Through our elected representatives?  Voting with our wallets?  Through the media and consumer advocates?

Not to sound too dramatic, but there’s really no other way to see these fights as anything less than a struggle for the soul of the Internet.  As its importance has grown, so have the stakes—and the immediacy—in establishing the first principles, the Constitution, and the scriptures that will define its governance structure, even as it continues its rapid evolution.

The Next Wave

Network architecture and regulation aside, the other big problems of the day are not as different as they seem.  Privacy, cybersecurity and copyright are all proxies in that larger struggle, and in some sense they are all looking at the same problem through a slightly different (but equally mis-focused) lens.  There’s a common thread and a common problem:  each of them represents a fight over information usage, access, storage, modification and removal.  And each of them is saddled with terminology and a legal framework developed during the Industrial Revolution.

As more activities of all possible varieties migrate online, for example, very different problems of information economics have converged under the unfortunate heading of “privacy,” a term loaded with 19th and 20th century baggage.

Security is just another view of the same problems.  And here too the debates (or worse) are rendered unintelligible by the application of frameworks developed for a physical world.  Cyberterror, digital warfare, online Pearl Harbor, viruses, Trojan Horses, attacks—the terminology of both sides assumes that information is a tangible asset, to be secured, protected, attacked, destroyed by adverse and identifiable combatants.

In some sense, those same problems are at the heart of struggles to apply or not the architecture of copyright created during the 17th Century Enlightenment, when information of necessity had to take physical form to be used widely.  Increasingly, governments and private parties with vested interests are looking to the ISPs and content hosts to act as the police force for so-called “intellectual property” such as copyrights, patents, and trademarks.  (Perhaps because it’s increasingly clear that national governments and their physical police forces are ineffectual or worse.)

Again, the issues are of information usage, access, storage, modification and removal, though the rhetoric adopts the unhelpful language of pirates and property.

So, in some weird and at the same time obvious way, net neutrality = privacy = security = copyright.  They’re all different and equally unhelpful names for the same (growing) set of governance issues.

At the heart of these problems—both of form and substance—is the inescapable fact that information is profoundly different from traditional property.  It is not like a bushel of corn or a barrel of oil.  For one thing, it has never been tangible, though when it had to be copied onto media to be distributed it was easy enough to mistake the medium for the message.

The information revolution’s revolutionary principle is that information in digital form is at last what it was always meant to be—an intangible good, which follows a very different (for starters, a non-linear) life-cycle.  The ways in which it is created, distributed, experienced, modified and valued don’t follow the same rules that apply to tangible goods, try as we do to force-fit those rules.

Which is not to say there are no rules, or that there can be no governance of information behavior.  And certainly not to say information, because it is intangible, has no value.  Only that for the most part, we have no real understanding of what its unique physics are.  We barely have vocabulary to begin the analysis.

Now What?

Terminology aside, I predict with the confidence of Moore’s Law that business and consumers alike will increasingly find themselves more involved than anyone wants to be in the creation of a new body of law better-suited to the realities of digital life.  That law may take the traditional forms of statutes, regulations, and treaties, or follow even older models of standards, creeds, ethics and morals.  Much of it will continue to be engineered, coded directly into the architecture.

Private enterprises in particular can expect to be drawn deeper (kicking and screaming perhaps) into fundamental questions of Internet governance and information rights.

Infrastructure and application providers, as they take on more of the duties historically thought to be the domain of sovereigns, are already being pressured to maintain the environmental conditions for a healthy Internet.  Increasingly, they will be called upon to define and enforce principles of privacy and human rights, to secure the information environment from threats both internal (crime) and external (war), and to protect “property” rights in information on behalf of “owners.”

These problems will continue to be different and the same, and will be joined by new problems as new frontiers of digital life are opened and settled.  Ultimately, we’ll grope our way toward the real question:  what is the true nature of information and how can we best harness its power?

Cynically, it’s lifetime employment for lawyers.  Optimistically, it’s a chance to be a virtual founding father.  Which way you look at it will largely determine the quality of the work you do in the next decade or so.

The Seven Deadly Sins of Title II Reclassification (NOI Remix)

Better late than never, I’ve finally given a close read to the Notice of Inquiry issued by the FCC on June 17th.  (See my earlier comments, “FCC Votes for Reclassification, Dog Bites Man”.)  In some sense there was no surprise to the contents; the Commission’s legal counsel and Chairman Julius Genachowski had both published comments over a month before the NOI that laid out the regulatory scheme the Commission now has in mind for broadband Internet access.

Chairman Genachowski’s “Third Way” comments proposed an option that he hoped would satisfy both extremes.  The FCC would abandon efforts to find new ways to meet its regulatory goals using “ancillary jurisdiction” under Title I (an avenue the D.C. Circuit had wounded, but hadn’t actually exterminated, in the Comcast decision), but at the same time would not go as far as some advocates urged and put broadband Internet completely under the telephone rules of Title II.

Instead, the Commission would propose a “lite” version of Title II, based on a few guiding principles:

  • Recognize the transmission component of broadband access service—and only this component—as a telecommunications service;
  • Apply only a handful of provisions of Title II (Sections 201, 202, 208, 222, 254, and 255) that, prior to the Comcast decision, were widely believed to be within the Commission’s purview for broadband;
  • Simultaneously renounce—that is, forbear from—application of the many sections of the Communications Act that are unnecessary and inappropriate for broadband access service; and
  • Put in place up-front forbearance and meaningful boundaries to guard against regulatory overreach.

The NOI pretends not to take a position on any of three possible options – (1) stick with Title I and find a way to make it work, (2) reclassify broadband and apply the full suite of Title II regulations to Internet access providers, or (3) compromise on the Chairman’s Third Way, applying Title II but forbearing from all but the six sections noted above—at least, for now (see ¶ 98).  It asks for comments on all three options, however, and for a range of extensions and exceptions within each.

I’ve written elsewhere (see “Reality Check on ‘Reclassifying’ Broadband” and  “Net Neutrality and the Inconvenient Constitution”) about the dubious legal foundation on which the FCC rests its authority to change the definition of “information services” to suddenly include broadband Internet, after successfully (and correctly) convincing the U.S. Supreme Court that it did not.  That discussion will, it seems, have to wait until its next airing in federal court following inevitable litigation over whatever course the FCC takes.

This post deals with something altogether different—a number of startling tidbits that found their way into the June 17th NOI.  As if Title II weren’t dangerous enough, there are hints and echoes throughout the NOI of regulatory dreams to come.  Beyond the hubris of reclassification, here are seven surprises buried in the 116 paragraphs of the NOI—its seven deadly sins.  In many cases the Commission is merely asking questions.  But the questions hint at a much broader—indeed overwhelming—regulatory agenda that goes beyond Net Neutrality and the undoing of the Comcast decision.

Pride:  The folly of defining “facilities-based” provisioning – The FCC is struggling to find a way to apply reclassification only to the largest ISPs – Comcast, AT&T, Verizon, Time Warner, etc.  But the statutory definition of “telecommunications” doesn’t give it much help.  So the NOI invents a new distinction, referred to variously as “facilities-based” providers (¶ 1) or providers of an actual “physical connection,” (¶ 106) or limiting the application of Title II just to the “transmission component” of a provider’s consumer offering (¶ 12).

All the FCC has in mind here is “a commonsense definition of broadband Internet service,” (¶ 107) (which they never provide), but in any case the devil is surely in the details.  First, it’s not clear that making that distinction would actually achieve the goal of applying the open Internet rules—network management, good or evil, largely occurs well above the transmission layers in the IP stack.

The sin here, however, is that of unintentional over-inclusion.  If Title II is applied to “facilities-based” providers, it could sweep in application providers who increasingly offer connectivity as a way to promote usage of their products.

Limiting the scope of reclassification just to “facilities-based” providers who sell directly to consumers doesn’t eliminate the risk of over-inclusion.  Some application providers, for example, offer a physical connection in partnership with an ISP (think Yahoo and Covad DSL service) and many large application providers own a good deal of fiber optic cable that could be used to connect directly with consumers.  (Think of Google’s promise to build gigabit test beds for select communities.)  Municipalities are still working to provide WiFi and WiMax connections, again in cooperation with existing ISPs.  (EarthLink planned several of these before running into financial and, in some cities, political trouble.)

There are other services, including Internet backbone provisioning, that could also fall into the Title II trap (see ¶ 64).  Would companies, such as Akamai, which offer caching services, suddenly find themselves subject to some or all of Title II?  (See ¶ 58)  How about Internet peering agreements (unmentioned in the NOI)?  Would these private contracts be subject to Title II as well?  (See ¶ 107)

Lust:  The lure of privacy, terrorism, crime, copyright – Though the express purpose of the NOI is to find a way to apply Title II to broadband, the Commission just can’t help lusting after some additional powers it appears interested in claiming for itself.  Though the Commissioners who voted for the NOI are adamant that the goal of reclassification is not to regulate “the Internet” but merely broadband access, the siren call of other issues on the minds of consumers and lawmakers may prove impossible to resist.

Recognizing, for example, that the Federal Trade Commission has been holding hearings all year on the problems of information privacy, the FCC now asks for comments about how it can use Title II authority to get into the game (¶ 39, 52, 82, 83, 96), promising of course to “complement” whatever actions the FTC is planning to take.

Cyberattacks and other forms of terrorism are also on the Commission’s mind.  In his separate statement, for example, Chairman Genachowski argues that the Comcast decision “raises questions about the right framework for the Commission to help protect against cyber-attacks.”

The NOI includes several references to homeland security and national defense—this in the wake of publicity surrounding Sen. Lieberman’s proposed law to give the President extensive emergency powers over the Internet.  (See Declan McCullagh, “Lieberman Defends Emergency Net Authority Plan.”)  Lieberman’s bill puts the power squarely in the Department of Homeland Security—is the FCC hoping to use Title II to capture some of that power for itself?

And beyond shocking acts of terrorism, does the FCC see Title II as a license to require ISPs to help enforce other, lesser crimes, including copyright infringement, libel, bullying and cyberstalking, e-personation—and the rest?  Would Title II give the agency the ability to impose its content “decency” rules, limited today merely to broadcast television and radio, to Internet content, as Congress has unsuccessfully tried to help the Commission do on three separate occasions?

(Just as I wrote that sentence, the U.S. Court of Appeals for the Second Circuit ruled that the FCC’s recent effort to craft more aggressive indecency rules, applied to fleeting expletives on live broadcasts, violates the First Amendment.  The Commission is having quite a bad year in the courts!)

Anger:  Sharing the pain of CALEA – That last paragraph is admittedly speculation.  The NOI contains no references to copyright, crime, or indecency.  But here’s a law enforcement sin that isn’t speculative.  The NOI reminds us that separate from Title II, the FCC is required by law to enforce the Communications Assistance for Law Enforcement Act (CALEA). (¶ 89) CALEA is part of the rich tapestry of federal wiretap law, and requires “telecommunications carriers” to implement technical “back doors” that make it easier for federal law enforcement agencies to execute wiretapping orders.  Since 2005, the FCC has held that all facilities-based providers are subject to CALEA.

Here, the Commission assumes that reclassification would do nothing to change the broader application of CALEA already in place, and seeks comment on “this analysis.”  (¶ 89)  The Commission wonders how that analysis impacts its forbearance decisions, but I have a different question.  Assuming the definition of “facilities-based” Internet access providers is as muddled as it appears (see above), is the Commission intentionally or unintentionally extending CALEA’s coverage to anyone selling Internet “connectivity” to consumers, even those who offer that service simply to promote their applications?

Again, would residents of communities participating in Google’s fiber optic test bed awake to discover that all of that wonderful data they are now pumping through the fiber is subject to capture and analysis by any law enforcement officer holding a wiretapping order?  Oops?

Gluttony:  The Insatiable Appetite of State and Local Regulators – Just when you think the worst is over, there’s a nasty surprise waiting at the end of the NOI.  Under Title II, the Commission reminds us, many aspects of telephone regulation are not exclusive to the FCC but are shared with state and even local regulatory agencies. 

Fortunately, to avoid the catastrophic effects of imposing perhaps hundreds of different and conflicting regulatory schemes on broadband Internet access, the FCC has the authority to preempt state and local regulations that conflict with FCC “decisions,” and to preempt states from applying those parts of Title II from which the FCC forbears. 

But here’s the billion dollar question, which the NOI saves for the very last (¶ 109):  “Under each of the three approaches, what would be the limits on the states’ or localities’ authority to impose requirements on broadband Internet service and broadband Internet connectivity service?”

What indeed?  One of the provisions the FCC would not apply under the Third Way, for example, is § 253, which gives the Commission the authority to “preempt state regulations that prohibit the provision of telecommunications services.” (¶ 87)  So does the Third Way taketh away federal authority only to giveth it to state and local regulators?  Is the only way to avoid state and local regulations (oh, well, if you insist) to go to full Title II?  And might the FCC decide in any case to exercise its discretion, now or in the future, to allow local regulation of Internet connectivity?

What might those regulations look like?  One need only review the history of local telephone service to recall the rate-setting labyrinths, taxes, micromanagement of facilities investment and deployment decisions—not to mention the scourge of corruption, graft and other government crimes that have long accompanied the franchise process.  Want to upgrade your cable service?  Change your broadband provider?  Please file the appropriate forms with your state or local utility commission, and please be patient.

Fear-mongering?  Well, consider a proposal that will be voted on this summer at the annual meeting of the National Association of Regulatory Utility Commissioners.  (TC-1 at page 30)  The Commissioners will decide whether to urge the FCC to adopt what the resolution calls a “fourth way” to fix the Net Neutrality problem.  Their description of the fourth way speaks for itself.  It would consist of:

“bi-jurisdictional regulatory oversight for broadband Internet connectivity service and broadband Internet service which recognizes the particular expertise of States in: managing front-line consumer education, protection and services programs; ensuring public safety; ensuring network service quality and reliability; collecting and mapping broadband service infrastructure and adoption data; designing and promoting broadband service availability and adoption programs; and implementing competitively neutral pole attachment, rights-of-way and tower siting rules and programs.”

The proposal also asks the FCC, should it stick to the Third Way approach, to add in several other provisions left out of Chairman Genachowski’s list, including one (again, § 253) that would preserve the states’ ability to help out.

Or consider a proposal currently being debated by the California Public Utilities Commission.  California, likewise, would like to use reclassification as the key that unlocks the door to “cooperative federalism,” and has its own list of provisions the FCC ought not to forbear under the Third Way proposal.

Among other things, the CPUC’s general counsel is unhappy with the definition the FCC proposes for just who and what would be covered by Title II reclassification.  The CPUC proposal argues for a revised definition that “should be flexible enough to cover unforeseen technological [sic] in both the short- and long-term.”

The CPUC also proposes that the FCC add Voice over Internet Protocol telephony to the list of services regulated under Title II, even though VoIP is often a software application riding well above the “transmission” component of broadband access.

California is just the first (tax-starved) state I looked for.  I’m sure there are and will be others who will respond hungrily to the Commission’s invitation to “comment” on the appropriate role of state and local regulators under either a full or partial Title II regime.  (¶ 109, 110)

Sloth:  The sleeping giant of basic web functions – browsers, DNS lookup, and more – The NOI admits that the FCC is a bit behind the times when it comes to technical expertise, and it would like commenters to help it build a fuller record.  Specifically, ¶ 58 asks for help “to develop a current record on the technical and functional characteristics of broadband Internet service, and whether those characteristics have changed materially in the last decade.”

In particular, the NOI wants to know more about the current state of web browsers, DNS lookup services, web caching, and “other basic consumer Internet activities.”

Sounds innocent enough, but those are very loaded questions.  In the Brand X case, in which the U.S. Supreme Court agreed with the FCC that broadband Internet access over cable fit the definition of a Title I “information service” and not a Title II “telecommunications service,” browsers, DNS lookup and other “basic consumer Internet activities” were crucial to the analysis of the majority.  Because cable (and, later, it was decided, DSL) providers offered not simply a physical connection but also supporting or “enhanced” services to go with it—including DNS lookup, home pages, email support and the like—their offering to consumers was not simple common carriage.

Justice Scalia disagreed, and in dissent made the argument that cable Internet was in fact two separable offerings – the physical connection (the packet-switched network) and a set of information services that ran on top of that connection.  Consumers used some information services from the carrier, and some from other content providers (other web sites, e.g.).  Those information services were rightly left unregulated under Title I, but Congress intended the transmission component, according to Justice Scalia, to be treated as a common carrier “telecommunications service” under Title II.

The Third Way proposal in large part adopts the Scalia view of the Communications Act (see ¶ 20, 106), even though it was the FCC that argued vigorously against that view all along, and even though a majority of the Court agreed with it.

By asking these innocent questions about technical architecture, the FCC appears to be hedging its bets for a certain court challenge.   Any effort to reclassify broadband Internet access will generate long, complicated, and expensive litigation.  What, the courts will ask, has driven the FCC to make such an abrupt change in its interpretation of terms like “information service” whose statutory definitions haven’t changed since 1996?

We know the real reason is little more than the Chairman’s desire to undo the Comcast decision and thereafter complete adoption of the open Internet rules proposed in October.  But in the event that argument proves unavailing, it would be nice to be able to argue that the nature of the Internet and Internet access have fundamentally changed since 2005, when Brand X was decided.  If it’s clear that basic Internet services have become more distinct from the underlying physical connection, at least in the eyes of consumers, so much the better.

Or perhaps something bigger is lumbering lazily through the NOI.  Perhaps the FCC is considering whether “basic Internet activities” (browsing, searching, caching, etc.) have now become part of the definition of basic connectivity.  Perhaps Title II, in whole or in part, will apply not only to facilities-based providers, but to those who offer basic Internet services essential for web access.  (Why extend Title II to providers of “basic” information service?  See below, “Greed.”)  If so, the exception will swallow the rule, and just about everything else that makes the Internet ecosystem work.

Vanity:  The fading beauty of the cellular ingénue – Perhaps the most worrisome feature of the proposed open Internet rules is that they would apply with equal force to wired and wireless Internet access.  As any consumer knows, however, those two types of access couldn’t be more different. 

Infrastructure providers have made enormous progress in innovating improvements to existing infrastructure—especially the cable and copper networks.  New forms of access have also emerged, including fiber optic cable, satellite, WiFi/WiMax, and the nascent provisioning of broadband over power lines, which has particular promise in remote areas which may have no other option for access.

Broadband speeds are increasing, and there’s every expectation that given current technology and current investment plans, the National Broadband Plan’s goal of 100 million Americans with access to 100 Mbps Internet speeds by 2020 will be reached without any public spending.

The wireless world, however, is a different place.  After years of underutilization of 3G networks by consumers who saw no compelling or “killer” apps worth using, the latest generation of portable computing devices (iPhone, Android, Blackberry, Windows) has reached the tipping point and well beyond.  Existing networks in many locations are overcommitted, and political resistance to additional cell tower and other facilities deployment is exacerbating the problem.

Just last week, a front page story in the San Francisco Chronicle reported on growing tensions between cell phone providers and residents who want new towers located anywhere but near where they live, go to school, shop, or work.  CTIA-The Wireless Association announced that it would no longer hold events in San Francisco, after the city’s Board of Supervisors, with the support of Mayor Gavin Newsom, passed a “Cell Phone Right to Know” ordinance that requires retailers to disclose each phone’s specific absorption rate for emitted radiation.

Given the likely continued lag in cellular deployment, it seems prudent to impose less stringent network management restrictions on wireless than on wireline, allowing providers, for example, to limit or ban outright certain high-bandwidth data services, notably video services and peer-to-peer file sharing, that the network may simply be unable to support.  But the proposed open Internet rules will have none of that.

The NOI does note some of the significant differences between wired and wireless (¶ 102), but also reminds us that the limited spectrum available for wireless signals affords the Commission special powers to regulate the business practices of providers. (¶ 103)  Under Title III of the Communications Act, which applies to wireless, the FCC has and makes use of the power to ensure spectrum uses are serving a broad “public interest.”

In some ways, then, Title III gives the Commission powers to regulate wireless broadband access beyond what it would get from a reclassification to Title II.  So even if the FCC were to choose the first option and leave the current classification scheme alone, wireless broadband providers might still be subject to open Internet rules under Title III.  It would be ironic if the only broadband providers whose network management practices were to be scrutinized were those who needed the most flexibility.  But irony is nothing new in communications law.

One power, however, might elude the FCC, and therefore might give further weight to a scheme that would regulate wireless broadband under both Title III and Title II.  Title III does not include the extension of Universal Service to wireless broadband (¶ 103).  This is a particular concern given the increased reliance of under-served and at-risk communities on cellular technologies for all their communications needs.  (See the recent Pew Internet & American Life Project study for details.)

While the NOI asks for comment on whether and to what extent the FCC ought to treat wireless broadband differently from wired services, or on a later timetable, the thrust of this section makes clear the Commission is thinking of more, not less regulation for the struggling cellular industry.

Greed:  Universal Service taxes – So what about Universal Service?  In an effort to justify the Title II reclassification as something more than just a fix to the Comcast case, the FCC has (with some hedging) suggested that the D.C. Circuit’s ruling also calls into question the Commission’s ability to implement the National Broadband Plan, published only a few weeks prior to the decision in Comcast.

At a conference sponsored by the Stanford Institute for Economic Policy Research that I attended, Chairman Genachowski was emphatic that nothing in Comcast constrained the FCC’s ability to execute the plan.

But in the run-up to the NOI, the rhetoric has changed.  Here the Chairman in his separate statement says only that “the recent court decision did not opine on the initiatives and policies that we have laid out transparently in the National Broadband Plan and elsewhere.”

Still, it’s clear that whether out of genuine concern or just for more political and legal cover, the Commission is trying to make the case that Comcast casts serious doubt on the Plan, and in particular the FCC’s recommendations for reform of the Universal Service Fund (USF).  (¶¶ 32-38).

Though the NOI politely recites the legal theories posed by several analysts for how USF reform could be done without any reclassification, the FCC is skeptical.  For the first and only time in the NOI, the FCC asks not for general comments on its existing authority to reform Universal Service but for the kind of evidence that would be “needed to successfully defend against a legal challenge to implementation of the theory.”

There is, of course, a great deal at stake.  The USF is fed by taxes paid by consumers as part of their telephone bills, and is used to subsidize telephone service to those who cannot otherwise afford it.  Some part of the fund is also used for the “E-Rate” program, which subsidizes Internet access for schools and libraries.

Like other parts of the fund, E-Rate has been the subject of considerable corruption.  As I noted in Law Four of “The Laws of Disruption,” a 2005 Congressional oversight committee labeled the then $2 billion E-Rate program, which had already spawned numerous criminal convictions for fraud, a disgrace, “completely [lacking] tangible measures of either effectiveness or impact.”

Today the USF collects $8 billion annually in consumer taxes, and there’s little doubt that the money is not being spent in a particularly efficient or useful way.  (See, for example, Cecilia Kang’s Washington Post article this week, “AT&T, Verizon get most federal aid for phone service.”)  The FCC is right to call for USF reform in the National Broadband Plan, and to propose repurposing the USF to subsidize basic Internet access as well as dial tone.  The needs for universal Internet access—employment, education, health care, government services, etc.—are obvious.

But what has this to do with Title II reclassification?  There’s no mention in the NOI of plans to extend the class of services or service providers obliged to collect the USF tax, which is to say there’s nothing to suggest a new tax on Internet access.  But Recommendation 8.10 of the NBP encourages just that.  The Plan recommends that Congress “broaden the USF contributions base” by finding some method of taxing broadband Internet customers.  (Congress has so far steadfastly resisted and preempted efforts to introduce any taxes on Internet access at the federal and state level.)

If Congress agreed with the FCC, broadband Internet access would someday be subject to taxes to help fund a reformed USF.  The bigger the category of providers included under Title II (the most likely collectors of such a tax), the bigger the USF.  The temptation to broaden the definition of affected companies from “facilities based” to something more “flexible,” as the California Public Utilities Commission put it, would be hard to resist.

***

But other than these minor quibbles, the NOI offers nothing to worry about!