Category Archives: Privacy

New white paper from PFF on Title II "sins"

The Progress and Freedom Foundation has just published a white paper I wrote for them titled “The Seven Deadly Sins of Title II Reclassification (NOI Remix).”  This is an expanded and revised version of an earlier blog post that looks deeply into the FCC’s pending Notice of Inquiry regarding broadband Internet access. You can download a PDF here.

I point out that beyond the danger of subjecting broadband Internet to extensive new regulations under the so-called “Third Way” approach outlined by FCC Chairman Julius Genachowski, a number of other troubling features in the Notice indicate an even broader agenda for the agency with regard to the Internet.

These include:

  • Pride: As the FCC attempts to define what services would be subjected to reclassification, the agency runs the risk of both under- and over-inclusion, which could harm consumers, network operators, and content and applications providers.
  • Lust: The agency is reaching out for additional powers beyond its reclassification proposals — including an effort to wrest privacy enforcement powers from the Federal Trade Commission and putting itself in charge of cybersecurity for homeland security.
  • Anger: The “Third Way” may dramatically expand the scope of federal wiretapping laws, requiring law enforcement “back doors” for a wide range of products and services.
  • Gluttony: Reclassifying broadband opens the door to state and local government regulation, which would overwhelm Internet access with a deluge of conflicting, and innovation-killing, laws, rules and new consumer taxes.
  • Sloth: As the FCC looks for a legal basis to defend reclassification, basic activities — such as caching, searching, and browsing — may for the first time be included in the category of services subject to “common carrier” regulation.
  • Vanity: Though wireless networks face greater challenges from the broadband Internet than wireline networks, the FCC seems poised to impose more, not less, regulation on wireless broadband.
  • Greed: Reclassification of broadband services could vastly expand the contribution base for the Universal Service Fund, adding new consumer fees while supersizing this important, but exceedingly wasteful, program.

I’m grateful to PFF, especially Berin Szoka, Adam Marcus, Mike Wendy and Adam Thierer, for their interest and help in publishing the article.

After the deluge, more deluge

If I ever had any hope of “keeping up” with developments in the regulation of information technology—or even the nine specific areas I explored in The Laws of Disruption—that hope was lost long ago.  The last few months I haven’t even been able to keep up just sorting the piles of printouts of stories I’ve “clipped” from just a few key sources, including The New York Times, The Wall Street Journal, CNET News.com and The Washington Post.

I’ve just gone through a big pile of clippings that cover April-July.  A few highlights:  In May, YouTube surpassed 2 billion daily hits.  Today, Facebook announced it has more than 500,000,000 members.   Researchers last week demonstrated technology that draws device power from radio waves.

If the size of my stacks is any indication of activity level, the most contentious areas of legal debate are, not surprisingly, privacy (Facebook, Google, Twitter et al.), infrastructure (Net neutrality, Title II and the wireless spectrum crisis), copyright (the secret ACTA treaty, Limewire, Viacom v. YouTube), free speech (China, Facebook “hate speech”), and cyberterrorism (Sen. Lieberman’s proposed legislation expanding executive powers).

There was relatively little development in other key topics, notably antitrust (Intel and the Federal Trade Commission appear close to resolution of the pending investigation; Comcast/NBC merger plodding along).  Cyberbullying, identity theft, spam, e-personation and other Internet crimes have also gone eerily, or at least relatively, quiet.

Where Are We?

There’s one thing that all of the high-volume topics have in common—they are all moving increasingly toward a single question: the appropriate balance between private and public control over the Internet ecosystem.  When I first started researching cyberlaw in the mid-1990s, that was truly an academic question, one discussed by very few academics.

But in the interim, TCP/IP, with no central authority or corporate owner, has pursued a remarkable and relentless takeover of every other networking standard.  The Internet’s packet-switched architecture has grown from simple data file exchanges to email, the Web, voice, video, social network and the increasingly hybrid forms of information exchanges performed by consumers and businesses.

As its importance to both economic and personal growth has expanded, anxiety over how and by whom that architecture is managed has understandably developed in parallel.

(By the way, as Morgan Stanley analyst Mary Meeker pointed out this spring, consumer computing has overtaken business computing as the dominant use of information technology, with a trajectory certain to open a wider gap in the future.)

The locus of the infrastructure battle today, of course, is in the fundamental questions being asked about the very nature of digital life.  Is the network a piece of private property operated subject to the rules of the free market, the invisible hand, and a wondrous absence of transaction costs?  Or is it a fundamental element of modern citizenship, overseen by national governments following their most basic principles of governance and control?

At one level, that fight is visible in the machinations between governments (U.S. vs. E.U. vs. China, e.g.) over what rules apply to the digital lives of their citizens.  Is the First Amendment, as John Perry Barlow famously said, only a local ordinance in Cyberspace?  Do E.U. privacy rules, being the most expansive, become the default for global corporations?

At another level, the lines have been drawn even more sharply between public and private parties, and in side-battles within those camps.  Who gets to set U.S. telecom policy—the FCC or Congress, federal or state governments, public sector or private sector, access providers or content providers?  What does it really mean to say the network should be “nondiscriminatory,” or to treat all packets anonymously and equally, following a “neutrality” principle?

As individuals, are we consumers or citizens, and in either case how do we voice our view of how these problems should be resolved?  Through our elected representatives?  Voting with our wallets?  Through the media and consumer advocates?

Not to sound too dramatic, but there’s really no way to see these fights as anything less than a struggle for the soul of the Internet.  As its importance has grown, so have the stakes—and the immediacy—in establishing the first principles, the Constitution, and the scriptures that will define its governance structure, even as it continues its rapid evolution.

The Next Wave

Network architecture and regulation aside, the other big problems of the day are not as different as they seem.  Privacy, cybersecurity and copyright are all proxies in that larger struggle, and in some sense they are all looking at the same problem through a slightly different (but equally mis-focused) lens.  There’s a common thread and a common problem:  each of them represents a fight over information usage, access, storage, modification and removal.  And each of them is saddled with terminology and a legal framework developed during the Industrial Revolution.

As more activities of all possible varieties migrate online, for example, very different problems of information economics have converged under the unfortunate heading of “privacy,” a term loaded with 19th and 20th century baggage.

Security is just another view of the same problems.  And here too the debates (or worse) are rendered unintelligible by the application of frameworks developed for a physical world.  Cyberterror, digital warfare, online Pearl Harbor, viruses, Trojan Horses, attacks—the terminology of both sides assumes that information is a tangible asset, to be secured, protected, attacked, destroyed by adverse and identifiable combatants.

In some sense, those same problems are at the heart of struggles over whether and how to apply the architecture of copyright created during the Enlightenment, when information of necessity had to take physical form to be used widely.  Increasingly, governments and private parties with vested interests are looking to the ISPs and content hosts to act as the police force for so-called “intellectual property” such as copyrights, patents, and trademarks.  (Perhaps because it’s increasingly clear that national governments and their physical police forces are ineffectual or worse.)

Again, the issues are of information usage, access, storage, modification and removal, though the rhetoric adopts the unhelpful language of pirates and property.

So, in some weird and at the same time obvious way, net neutrality = privacy = security = copyright.  They’re all different and equally unhelpful names for the same (growing) set of governance issues.

At the heart of these problems—both of form and substance—is the inescapable fact that information is profoundly different than traditional property.  It is not like a bushel of corn or a barrel of oil.  For one thing, it never has been tangible, though when it needed to be copied into media to be distributed it was easy enough to conflate the medium with the message.

The information revolution’s revolutionary principle is that information in digital form is at last what it was always meant to be—an intangible good, which follows a very different (for starters, a non-linear) life-cycle.  The ways in which it is created, distributed, experienced, modified and valued don’t follow the same rules that apply to tangible goods, try as we do to force-fit those rules.

Which is not to say there are no rules, or that there can be no governance of information behavior.  And certainly not to say information, because it is intangible, has no value.  Only that for the most part, we have no real understanding of what its unique physics are.  We barely have vocabulary to begin the analysis.

Now What?

Terminology aside, I predict with the confidence of Moore’s Law that businesses and consumers alike will increasingly find themselves more involved than anyone wants to be in the creation of a new body of law better suited to the realities of digital life.  That law may take the traditional forms of statutes, regulations, and treaties, or follow even older models of standards, creeds, ethics and morals.  Much of it will continue to be engineered, coded directly into the architecture.

Private enterprises in particular can expect to be drawn deeper (kicking and screaming perhaps) into fundamental questions of Internet governance and information rights.

Infrastructure and application providers, as they take on more of the duties historically thought to be the domain of sovereigns, are already being pressured to maintain the environmental conditions for a healthy Internet.  Increasingly, they will be called upon to define and enforce principles of privacy and human rights, to secure the information environment from threats both internal (crime) and external (war), and to protect “property” rights in information on behalf of “owners.”

These problems will continue to be different and the same, and will be joined by new problems as new frontiers of digital life are opened and settled.  Ultimately, we’ll grope our way toward the real question:  what is the true nature of information and how can we best harness its power?

Cynically, it’s lifetime employment for lawyers.  Optimistically, it’s a chance to be a virtual founding father.  Which way you look at it will largely determine the quality of the work you do in the next decade or so.

The Seven Deadly Sins of Title II Reclassification (NOI Remix)

Better late than never, I’ve finally given a close read to the Notice of Inquiry issued by the FCC on June 17th.  (See my earlier comments, “FCC Votes for Reclassification, Dog Bites Man”.)  In some sense there was no surprise to the contents; the Commission’s legal counsel and Chairman Julius Genachowski had both published comments over a month before the NOI that laid out the regulatory scheme the Commission now has in mind for broadband Internet access.

Chairman Genachowski’s “Third Way” comments proposed an option that he hoped would satisfy both extremes.  The FCC would abandon efforts to find new ways to meet its regulatory goals using “ancillary jurisdiction” under Title I (an avenue the D.C. Circuit had wounded, but hadn’t actually exterminated, in the Comcast decision), but at the same time would not go as far as some advocates urged and put broadband Internet completely under the telephone rules of Title II.

Instead, the Commission would propose a “lite” version of Title II, based on a few guiding principles:

  • Recognize the transmission component of broadband access service—and only this component—as a telecommunications service;
  • Apply only a handful of provisions of Title II (Sections 201, 202, 208, 222, 254, and 255) that, prior to the Comcast decision, were widely believed to be within the Commission’s purview for broadband;
  • Simultaneously renounce—that is, forbear from—application of the many sections of the Communications Act that are unnecessary and inappropriate for broadband access service; and
  • Put in place up-front forbearance and meaningful boundaries to guard against regulatory overreach.

The NOI pretends not to take a position on any of the three possible options – (1) stick with Title I and find a way to make it work, (2) reclassify broadband and apply the full suite of Title II regulations to Internet access providers, or (3) compromise on the Chairman’s Third Way, applying Title II but forbearing from all but the six sections noted above—at least, for now (see ¶ 98).  It asks for comments on all three options, however, and for a range of extensions and exceptions within each.

I’ve written elsewhere (see “Reality Check on ‘Reclassifying’ Broadband” and  “Net Neutrality and the Inconvenient Constitution”) about the dubious legal foundation on which the FCC rests its authority to change the definition of “information services” to suddenly include broadband Internet, after successfully (and correctly) convincing the U.S. Supreme Court that it did not.  That discussion will, it seems, have to wait until its next airing in federal court following inevitable litigation over whatever course the FCC takes.

This post deals with something altogether different—a number of startling tidbits that found their way into the June 17th NOI.  As if Title II weren’t dangerous enough, there are hints and echoes throughout the NOI of regulatory dreams to come.  Beyond the hubris of reclassification, here are seven surprises buried in the 116 paragraphs of the NOI—its seven deadly sins.  In many cases the Commission is merely asking questions.  But the questions hint at a much broader—indeed overwhelming—regulatory agenda that goes beyond Net Neutrality and the undoing of the Comcast decision.

Pride:  The folly of defining “facilities-based” provisioning – The FCC is struggling to find a way to apply reclassification only to the largest ISPs – Comcast, AT&T, Verizon, Time Warner, etc.  But the statutory definition of “telecommunications” doesn’t give them much help.  So the NOI invents a new distinction, referred to variously as “facilities-based” providers (¶ 1) or providers of an actual “physical connection,” (¶ 106) or limiting the application of Title II just to the “transmission component” of a provider’s consumer offering (¶ 12).

All the FCC has in mind here is “a commonsense definition of broadband Internet service” (¶ 107), which it never provides, but in any case the devil is surely in the details.  First, it’s not clear that making that distinction would actually achieve the goal of applying the open Internet rules—network management, good or evil, largely occurs well above the transmission layers in the IP stack.

The sin here, however, is that of unintentional over-inclusion.  If Title II is applied to “facilities-based” providers, it could sweep in application providers who increasingly offer connectivity as a way to promote usage of their products.

Limiting the scope of reclassification just to “facilities-based” providers who sell directly to consumers doesn’t eliminate the risk of over-inclusion.  Some application providers, for example, offer a physical connection in partnership with an ISP (think Yahoo and Covad DSL service) and many large application providers own a good deal of fiber optic cable that could be used to connect directly with consumers.  (Think of Google’s promise to build gigabit test beds for select communities.)  Municipalities are still working to provide WiFi and WiMax connections, again in cooperation with existing ISPs.  (EarthLink planned several of these before running into financial and, in some cities, political trouble.)

There are other services, including Internet backbone provisioning, that could also fall into the Title II trap (see ¶ 64).  Would companies, such as Akamai, which offer caching services, suddenly find themselves subject to some or all of Title II?  (See ¶ 58)  How about Internet peering agreements (unmentioned in the NOI)?  Would these private contracts be subject to Title II as well?  (See ¶ 107)

Lust:  The lure of privacy, terrorism, crime, copyright – Though the express purpose of the NOI is to find a way to apply Title II to broadband, the Commission just can’t help lusting after some additional powers it appears interested in claiming for itself.  Though the Commissioners who voted for the NOI are adamant that the goal of reclassification is not to regulate “the Internet” but merely broadband access, the siren call of other issues on the minds of consumers and lawmakers may prove impossible to resist.

Recognizing, for example, that the Federal Trade Commission has been holding hearings all year on the problems of information privacy, the FCC now asks for comments about how it can use Title II authority to get into the game (¶ 39, 52, 82, 83, 96), promising of course to “complement” whatever actions the FTC is planning to take.

Cyberattacks and other forms of terrorism are also on the Commission’s mind.  In his separate statement, for example, Chairman Genachowski argues that the Comcast decision “raises questions about the right framework for the Commission to help protect against cyber-attacks.”

The NOI includes several references to homeland security and national defense—this in the wake of publicity surrounding Sen. Lieberman’s proposed law to give the President extensive emergency powers over the Internet.  (See Declan McCullagh, “Lieberman Defends Emergency Net Authority Plan.”)  Lieberman’s bill puts the power squarely in the Department of Homeland Security—is the FCC hoping to use Title II to capture some of that power for itself?

And beyond shocking acts of terrorism, does the FCC see Title II as a license to require ISPs to help enforce other, lesser crimes, including copyright infringement, libel, bullying and cyberstalking, e-personation—and the rest?  Would Title II give the agency the ability to impose its content “decency” rules, limited today merely to broadcast television and radio, to Internet content, as Congress has unsuccessfully tried to help the Commission do on three separate occasions?

(Just as I wrote that sentence, the U.S. Court of Appeals for the Second Circuit ruled that the FCC’s recent effort to craft more aggressive indecency rules, applied to fleeting expletives, violates the First Amendment.  The Commission is having quite a bad year in the courts!)

Anger:  Sharing the pain of CALEA – That last paragraph is admittedly speculation.  The NOI contains no references to copyright, crime, or indecency.  But here’s a law enforcement sin that isn’t speculative.  The NOI reminds us that separate from Title II, the FCC is required by law to enforce the Communications Assistance for Law Enforcement Act (CALEA). (¶ 89) CALEA is part of the rich tapestry of federal wiretap law, and requires “telecommunications carriers” to implement technical “back doors” that make it easier for federal law enforcement agencies to execute wiretapping orders.  Since 2005, the FCC has held that all facilities-based providers are subject to CALEA.

Here, the Commission assumes that reclassification would do nothing to change the broader application of CALEA already in place, and seeks comment on “this analysis.”  (¶ 89)  The Commission wonders how that analysis impacts its forbearance decisions, but I have a different question.  Assuming the definition of “facilities-based” Internet access providers is as muddled as it appears (see above), is the Commission intentionally or unintentionally extending the coverage of CALEA to anyone selling Internet “connectivity” to consumers, even those for whom that service is simply in the interest of promoting applications?

Again, would residents of communities participating in Google’s fiber optic test bed awake to discover that all of that wonderful data they are now pumping through the fiber is subject to capture and analysis by any law enforcement officer holding a wiretapping order?  Oops?

Gluttony:  The Insatiable Appetite of State and Local Regulators – Just when you think the worst is over, there’s a nasty surprise waiting at the end of the NOI.  Under Title II, the Commission reminds us, many aspects of telephone regulation are not exclusive to the FCC but are shared with state and even local regulatory agencies. 

Fortunately, to avoid the catastrophic effects of imposing perhaps hundreds of different and conflicting regulatory schemes on broadband Internet access, the FCC has the authority to preempt state and local regulations that conflict with FCC decisions, and to preempt application of those provisions of Title II from which the FCC forbears.

But here’s the billion dollar question, which the NOI saves for the very last (¶ 109):  “Under each of the three approaches, what would be the limits on the states’ or localities’ authority to impose requirements on broadband Internet service and broadband Internet connectivity service?”

What indeed?  One of the provisions the FCC would not apply under the Third Way, for example, is § 253, which gives the Commission the authority to “preempt state regulations that prohibit the provision of telecommunications services.” (¶ 87)  So does the Third Way taketh federal authority only to giveth to state and local regulators?  Is the only way to avoid state and local regulation (oh, well, if you insist) to go to full Title II?  And might the FCC decide in any case to exercise its discretion, now or in the future, to allow local regulation of Internet connectivity?

What might those regulations look like?  One need only review the history of local telephone service to recall the rate-setting labyrinths, taxes, micromanagement of facilities investment and deployment decisions—not to mention the scourge of corruption, graft and other government crimes that have long accompanied the franchise process.  Want to upgrade your cable service?  Change your broadband provider?  Please file the appropriate forms with your state or local utility commission, and please be patient.

Fear-mongering?  Well, consider a proposal that will be voted on this summer at the annual meeting of the National Association of Regulatory Utility Commissioners.  (TC-1 at page 30)  The Commissioners will decide whether to urge the FCC to adopt what it calls a “fourth way” to fix the Net Neutrality problem.  The description of the fourth way speaks for itself.  It would consist of:

“bi-jurisdictional regulatory oversight for broadband Internet connectivity service and broadband Internet service which recognizes the particular expertise of States in: managing front-line consumer education, protection and services programs; ensuring public safety; ensuring network service quality and reliability; collecting and mapping broadband service infrastructure and adoption data; designing and promoting broadband service availability and adoption programs; and implementing  competitively neutral pole attachment, rights-of-way and tower siting rules and programs.”

The proposal also asks the FCC, should it stick to the Third Way approach, to add in several other provisions left out of Chairman Genachowski’s list, including one (again, § 253) that would preserve the state’s ability to help out.

Or consider a proposal currently being debated by the California Public Utilities Commission.  California, likewise, would like to use reclassification as the key that unlocks the door to “cooperative federalism,” and has its own list of provisions the FCC ought not to forbear under the Third Way proposal.

Among other things, the CPUC’s general counsel is unhappy with the definition the FCC proposes for just who and what would be covered by Title II reclassification.  The CPUC proposal argues for a revised definition that “should be flexible enough to cover unforeseen technological [sic] in both the short- and long-term.”

The CPUC also proposes that the FCC add to the list of providers regulated under Title II those offering Voice over Internet Protocol telephony, which is often a software application riding well above the “transmission” component of broadband access.

California is just the first (tax-starved) state I looked for.  I’m sure there are and will be others who will respond hungrily to the Commission’s invitation to “comment” on the appropriate role of state and local regulators under either a full or partial Title II regime.  (¶ 109, 110)

Sloth:  The sleeping giant of basic web functions – browsers, DNS lookup, and more – The NOI admits that the FCC is a bit behind the times when it comes to technical expertise, and it would like commenters to help it build a fuller record.  Specifically, ¶ 58 asks for help “to develop a current record on the technical and functional characteristics of broadband Internet service, and whether those characteristics have changed materially in the last decade.”

In particular, the NOI wants to know more about the current state of web browsers, DNS lookup services, web caching, and “other basic consumer Internet activities.”

Sounds innocent enough, but those are very loaded questions.  In the Brand X case, in which the U.S. Supreme Court agreed with the FCC that broadband Internet access over cable fit the definition of a Title I “information service” and not a Title II “telecommunications service,” browsers, DNS lookup and other “basic consumer Internet activities” were crucial to the analysis of the majority.  Because cable (and, later, it was decided, DSL) providers offered not simply a physical connection but also supporting or “enhanced” services to go with it—including DNS lookup, home pages, email support and the like—their offering to consumers was not simple common carriage.

Justice Scalia disagreed, and in dissent made the argument that cable Internet was in fact two separable offerings – the physical connection (the packet-switched network) and a set of information services that ran on top of that connection.  Consumers used some information services from the carrier, and some from other content providers (other web sites, e.g.).  Those information services were rightly left unregulated under Title I, but Congress intended the transmission component, according to Justice Scalia, to be treated as a common carrier “telecommunications service” under Title II.

The Third Way proposal in large part adopts the Scalia view of the Communications Act (see ¶ 20, 106), despite the fact that it was the FCC that argued vigorously against that view all along, and despite the fact that a majority of the Court agreed with it.

By asking these innocent questions about technical architecture, the FCC appears to be hedging its bets for a certain court challenge.   Any effort to reclassify broadband Internet access will generate long, complicated, and expensive litigation.  What, the courts will ask, has driven the FCC to make such an abrupt change in its interpretation of terms like “information service” whose statutory definitions haven’t changed since 1996?

We know it is little more than that the Chairman would like to undo the Comcast decision, of course, and thereafter complete the process of adopting the open Internet rules proposed in October.  But in the event that proves an unavailing argument, it would be nice to be able to argue that the nature of the Internet and Internet access have fundamentally changed since 2005, when Brand X was decided.  If it’s clear that basic Internet services have become more distinct from the underlying physical connection, at least in the eyes of consumers, so much the better.

Or perhaps something bigger is lumbering lazily through the NOI.  Perhaps the FCC is considering whether “basic Internet activities” (browsing, searching, caching, etc.) have now become part of the definition of basic connectivity.  Perhaps Title II, in whole or in part, will apply not only to facilities-based providers, but to those who offer basic Internet services essential for web access.  (Why extend Title II to providers of “basic” information service?  See below, “Greed.”)  If so, the exception will swallow the rule, and just about everything else that makes the Internet ecosystem work.

Vanity:  The fading beauty of the cellular ingénue – Perhaps the most worrisome feature of the proposed open Internet rules is that they would apply with equal force to wired and wireless Internet access.  As any consumer knows, however, those two types of access couldn’t be more different. 

Infrastructure providers have made enormous progress in innovating improvements to existing infrastructure—especially the cable and copper networks.  New forms of access have also emerged, including fiber optic cable, satellite, WiFi/WiMax, and the nascent provisioning of broadband over power lines, which has particular promise in remote areas that may have no other option for access.

Broadband speeds are increasing, and there’s every expectation that given current technology and current investment plans, the National Broadband Plan’s goal of 100 million Americans with access to 100 Mbps Internet speeds by 2020 will be reached without any public spending.

The wireless world, however, is a different place.  After years of underutilization of 3G networks by consumers who saw no compelling or “killer” apps worth using, the latest generation of portable computing devices (iPhone, Android, Blackberry, Windows) has reached the tipping point and well beyond.  Existing networks in many locations are overcommitted, and political resistance to additional cell tower and other facilities deployment is exacerbating the problem.

Just last week, a front page story in the San Francisco Chronicle reported on growing tensions between cell phone providers and residents who want new towers located anywhere but near where they live, go to school, shop, or work.  CTIA-The Wireless Association announced that it would no longer hold events in San Francisco after the city’s Board of Supervisors, with the support of Mayor Gavin Newsom, passed a “Cell Phone Right to Know” ordinance that requires retail disclosure of a phone’s specific absorption rate of emitted radiation.

Given the likely continued lagging of cellular deployment, it seems prudent to impose less stringent restrictions on network management for wireless than for wireline, allowing providers, for example, to limit or even ban outright certain high-bandwidth data services, notably video services and peer-to-peer file sharing, that the network may simply be unable to support.  But the proposed open Internet rules will have none of that.

The NOI does note some of the significant differences between wired and wireless (¶ 102), but also reminds us that the limited spectrum available for wireless signals affords the Commission special powers to regulate the business practices of providers. (¶ 103)  Under Title III of the Communications Act, which applies to wireless, the FCC has, and makes use of, the power to ensure spectrum uses serve a broad “public interest.”

In some ways, then, Title III gives the Commission powers to regulate wireless broadband access beyond what it would get from a reclassification to Title II.  So even if the FCC were to choose the first option and leave the current classification scheme alone, wireless broadband providers might still be subject to open Internet rules under Title III.  It would be ironic if the only broadband providers whose network management practices were scrutinized were those who needed the most flexibility.  But irony is nothing new in communications law.

One power, however, might elude the FCC, and its absence might give further weight to a scheme that would regulate wireless broadband under both Title III and Title II.  Title III does not authorize the extension of Universal Service to wireless broadband (¶ 103).  This is a particular concern given the increased reliance of under-served and at-risk communities on cellular technologies for all their communications needs.  (See the recent Pew Internet & American Life Project study for details.)

While the NOI asks for comment on whether and to what extent the FCC ought to treat wireless broadband differently from wired services, or on a later timetable, the thrust of this section makes clear the Commission is thinking of more, not less, regulation for the struggling cellular industry.

Greed:  Universal Service taxes – So what about Universal Service?  In an effort to justify the Title II reclassification as something more than just a fix to the Comcast case, the FCC has (with some hedging) suggested that the D.C. Circuit’s ruling also calls into question the Commission’s ability to implement the National Broadband Plan, published only a few weeks prior to the decision in Comcast.

At a conference sponsored by the Stanford Institute for Economic Policy Research that I attended, Chairman Genachowski was emphatic that nothing in Comcast constrained the FCC’s ability to execute the plan.

But in the run-up to the NOI, the rhetoric has changed.  Here the Chairman in his separate statement says only that “the recent court decision did not opine on the initiatives and policies that we have laid out transparently in the National Broadband Plan and elsewhere.”

Still, it’s clear that whether out of genuine concern or just for more political and legal cover, the Commission is trying to make the case that Comcast casts serious doubt on the Plan, and in particular the FCC’s recommendations for reform of the Universal Service Fund (USF).  (¶¶ 32-38).

Though the NOI politely recites the legal theories posed by several analysts for how USF reform could be done without any reclassification, the FCC is skeptical.  For the first and only time in the NOI, the FCC asks not for general comments on its existing authority to reform Universal Service but for the kind of evidence that would be “needed to successfully defend against a legal challenge to implementation of the theory.”

There is, of course, a great deal at stake.  The USF is fed by taxes paid by consumers as part of their telephone bills, and is used to subsidize telephone service to those who cannot otherwise afford it.  Some part of the fund is also used for the “E-Rate” program, which subsidizes Internet access for schools and libraries.

Like other parts of the fund, E-Rate has been the subject of considerable corruption.  As I noted in Law Four of “The Laws of Disruption,” a 2005 Congressional oversight committee labeled the then $2 billion E-Rate program, which had already spawned numerous criminal convictions for fraud, a disgrace, “completely [lacking] tangible measures of either effectiveness or impact.”

Today the USF collects $8 billion annually in consumer taxes, and there’s little doubt that the money is not being spent in a particularly efficient or useful way.  (See, for example, Cecilia Kang’s Washington Post article this week, “AT&T, Verizon get most federal aid for phone service.”)  The FCC is right to call for USF reform in the National Broadband Plan, and to propose repurposing the USF to subsidize basic Internet access as well as dial tone.  The needs for universal Internet access—employment, education, health care, government services, etc.—are obvious.

But what has this to do with Title II reclassification?  There’s no mention in the NOI of plans to extend the class of services or service providers obliged to collect the USF tax, which is to say there’s nothing to suggest a new tax on Internet access.  But Recommendation 8.10 of the NBP encourages just that.  The Plan recommends that Congress “broaden the USF contributions base” by finding some method of taxing broadband Internet customers.  (Congress has so far steadfastly resisted and preempted efforts to introduce any taxes on Internet access at the federal and state level.)

If Congress agreed with the FCC, broadband Internet access would someday be subject to taxes to help fund a reformed USF.  The bigger the category of providers included under Title II (the most likely collectors of such a tax), the bigger the USF.  The temptation to broaden the definition of affected companies from “facilities based” to something, as the California Public Utilities Commission put it, more “flexible,” would be tantalizing.

***

But other than these minor quibbles, the NOI offers nothing to worry about!

The Fallacy of “e-personation” Laws

I was interviewed yesterday for the local Fox affiliate on Cal. SB1411, which criminalizes online impersonations (or “e-personation”) under certain circumstances.

On paper, of course, this sounds like a fine idea.  As Palo Alto State Senator Joe Simitian, the bill’s sponsor, put it, “The Internet makes many things easier.  One of those, unfortunately, is pretending to be someone else.  When that happens with the intent of causing harm, folks need a law they can turn to.”

Or do they?

The Problem with New Laws for New Technology

SB1411 would make a great exam question or short paper assignment for an information law course.  It’s short, loaded with good intentions, and on first blush looks perfectly reasonable—just extending existing harassment, intimidation and fraud laws to the modern context of online activity.  Unfortunately, a careful read reveals all sorts of potential problems and unintended consequences.

A number of states have passed new laws in the wake of highly-publicized cyberstalking and bullying incidents, including the tragic case involving a young girl’s suicide after being dumped by her online MySpace boyfriend, who turned out to be a made-up character created for the purpose of hurting her feelings.  (I’ve written about the case before, see “Lori Drew Verdict Finally Overturned.” )

Missouri passed a cyberbullying law when it turned out there was no federal law that covered the behavior in the MySpace case.  Texas and New York recently enacted laws similar to SB 1411, though the Texas law applies only to impersonation on social media sites.

The general problem with these laws is that their authors aren’t clear about exactly which behaviors they are trying to criminalize.  And, mindful that digital life evolves far faster than any legislative body can hope to keep up with, these laws are often written to be both too specific (the technology changes) and too broad (the behavior is undefined).  As a result, they often fail to cover the behavior they were intended to deter and, left on the books, can come back to life when prosecutors need something to hang a case on that otherwise doesn’t look illegal.

Given the proximity to free speech issues, the vagueness of many of these laws makes them good candidates for First Amendment challenges, and many have fallen on that sword.

California’s SB 1411 as a Case in Point

SB1411, which last week passed in the State Senate, suffers from all of these defects.  It punishes the impersonation of an “actual person through or on an Internet Web site or by other electronic means for purposes of harming, intimidating, threatening or defrauding another person.”  It requires the impersonator to knowingly commit the crime and do so without the consent of the person they are imitating.  It also requires that the impersonation be credible.  Punishment for violation can include a year in jail and a suit brought by the victim for punitive damages.

First let’s consider a few hypotheticals, starting with the one that inspired the law, the MySpace case noted above.  Since the boy whose profile lured the victim into an online romance that was then cruelly terminated was a made-up person (the perpetrators found some photo of a suitably shirtless teen and built a personality around it), SB 1411 would not apply had it been the law in Missouri.  The boy was not an “actual person,” and, except perhaps to a thirteen year old with existing mental health problems, may not have been credible either.  (The determination of “credibility” under SB 1411 would presumably be based on the “reasonable person” standard.)  Likewise, law enforcement agents creating fake Craigslist ads to smoke out drug buyers, child molesters, or customers of sex workers would also not be violating the law.

Also excluded from SB 1411 would probably be those who use Craigslist to get back at exes or others they are angry at by placing ads promising sex to anyone who stops by, and then giving the address of the person they are trying to get even with.  In most cases, these ads are not credible impersonations of the victim; they are meant to offend, but not to convince a reasonable third person that they really speak for the victim.  A fake Facebook page for a teacher who proceeds to make cruel or otherwise harmful statements about her students, likewise, would not be a credible impersonation.

The Twitter profiles being created to issue fake press releases purportedly on behalf of BP would also not be illegal under SB 1411.  First, BP is not an “actual person.”  Second,  Twitter profiles such as BPGlobalPR are clearly parodies—they are issuing statements they believe to be what BP would say if it were telling the truth about its actions in relation to the gulf spill.  (“We’re on a seafood diet- When we see food, we eat it! Unless that food is seafood from the Gulf. Yuck to that.”)   Again, not a credible impersonation.

You also do not commit the crime by confusing people inadvertently.  There are several people I am aware of online named Larry Downes, including a New Jersey state natural resources regulator, a radio station executive and conservative commentator, a cage fighter and a veterinarian who lives in a nearby community.  (The latter is a distant cousin.)  Facebook alone has 11 profiles with my name.  Only one of them is actually me, but the others are not knowingly impersonating me just because they use the same name, even if some third person might be confused to my detriment.

Likewise, the statute doesn’t reach out to those who help the perpetrator, intentionally or otherwise.  The “Internet Web sites” or providers of other electronic means aren’t themselves subject to prosecution or civil cases brought by the victims of the impersonation.  So Craigslist, MySpace, Facebook, and Twitter aren’t liable here, nor are the ISPs of the perpetrators, even if made aware of the activity of their users and/or customers.

For one thing, a federal law, Section 230 of the Communications Decency Act, immunizes providers against that kind of liability under most circumstances.  Last week, Craigslist lost its bid to use Section 230 to preclude a California lawsuit brought by the victim of fake posts soliciting sex and offering to give away his possessions.  The victim informed Craigslist of the problem, and the company promised to take action to stop future posts but did not succeed.  But it lost its immunity only by promising to help, which, of course, the site won’t do in the future!  (See Eric Goldman’s analysis of the case.)

So there are important limitations (some added through recent amendments) to SB 1411 that reduce the possibility of its being applied to speech that is otherwise protected or immunized by federal law.  (In the BP example, the company might have a trademark case to bring.)  Most of these limits, however, seem to take any teeth out of the statute, and seem to exclude most of the behavior Sen. Simitian says he is concerned about.

Unintended Consequences

What’s left?  Imagine a case where, angry at you, I create a fake Facebook profile that purports to represent you.  I post material there that is not so outrageous that the impersonation is no longer credible, but which still has the intent of harming, intimidating, threatening or defrauding you.  Perhaps I report, pretending to be you, about all of my extravagant purchases (but not so extravagant that I am not credible), leading your friends to believe you are spending beyond your means.  You find out, and find my actions intimidating or threatening.

Perhaps I announce that you have defaulted on your mortgage and are being foreclosed, leading your creditors to seek security on your other debts.  Perhaps I threaten to continue posting stories of your sexual exploits, forcing you to pay me blackmail to save you embarrassment.

Would these cases be covered under SB 1411?  Perhaps, unless of course the claims that I am making as you turn out to be true.  In the U.S., truth is a defense to defamation, so even if my intent is to “harm” you by revealing these facts, if they are facts then there is no action for defamation.  That I state those facts while pretending to be you would, under SB 1411, appear to turn a protected activity into a crime, perhaps not what the drafters intended and perhaps not something that would stand up in court.  (The truth-as-defense rule in defamation cases rests on First Amendment principles—you can’t be prosecuted for telling the truth.)

Of course, much of the other behavior I described above is already a crime in California—in particular, various forms of intimidation, harassment and, by definition, fraud.  The authors of SB 1411 believe the new law is needed to extend those crimes to cover the use of “Internet Web sites” and “other electronic means,” but there’s no reason to believe that the technology used is any bar to prosecutions under existing law.  (Indeed, the use of electronic communications to commit the acts would extend the possible criminal laws that apply, since electronic communications are generally considered interstate commerce and thus subject to federal as well as state laws.)

For the most part, then, SB 1411 covers very little new behavior, and little of the behavior its drafters thought needed to be criminalized.  For an impersonation to be damaging would, in most cases, mean that it was also not credible.  Pretending to be me and telling the truth could be harmful, but is probably a form of protected speech.  Pretending to be me in order to defraud a third party is already a crime: the crime of identity theft.

Which is not to say, pun intended, that the proposed law is harmless.  For in addition to categories of behavior already covered by existing law, SB 1411 makes it a crime to impersonate someone with the purpose of “harming” “another person.”  There is, not surprisingly, no definition given for what it means to have the purpose of “harming,” nor is it clear if “another person” refers only to the person whose identity has been usurped, or includes some third party (perhaps a family member or friend of that person, perhaps their employer.)

Having a purpose of “harming” “another person” is incredibly vague, and can cover a wide range of behaviors that wouldn’t, in offline contexts, be subject to criminal prosecution.  The only difference would be that the intended harm here would be operationalized through online channels, and would take the form of a credible impersonation of some actual person.

It is hard to see why those differences ought to result in a year in jail.  Consequently, an attempt to use the law to prosecute merely “harmful” behavior would be met with a strong constitutional objection.

That’s my read of the bill, in any case.  Since I posed this as an exam question, I’m offering extra credit for anyone who can come up with examples—there are none given by the California State Senate—of situations where the law would actually apply and that would not already be illegal and which would not be subject to plausible Constitutional challenges.

The Privacy and Security Totentanz

I participated last week in a Techdirt webinar titled, “What IT needs to know about Law.”  (You can read Dennis Yang’s summary here, or follow his link to watch the full one-hour discussion.  Free registration required.)

The key message of  The Laws of Disruption is that IT and other executives need to know a great deal about law—and more all the time.  And Techdirt does an admirable job of reporting the latest breakdowns between innovation and regulation on a daily basis.  So I was happy to participate.

Legally-Defensible Security

Not surprisingly, there were far too many topics to cover in a single seminar, so we decided to focus narrowly on just one:  potential legal liability when data security is breached, whether through negligence (lost laptop) or the criminal act of a third party (hacking attacks).  We were fortunate to have as the main presenter David Navetta, founding partner with The Information Law Group, who had recently written an excellent article on what he calls “legally-defensible security” practices.

I started the seminar off with some context, pointing out that one of the biggest surprises for companies in the Internet age is the discovery that having posted a website on the World Wide Web, they are suddenly and often inappropriately subject to the laws and jurisdiction of governments around the world.   (How wide is the web? World.)

In the case of security breaches, for example, a company may be required to disclose the incident to affected third parties (customers, employees, etc.) under state law.  At the other extreme, executives of the company handling the data may be criminally-liable if the breach involved personally-identifiable information of citizens of the European Union (e.g., the infamous Google Video case in Italy earlier this year, which is pending appeal).  Individuals and companies affected by a breach may sue the company under a variety of common law claims, including breach of contract (perhaps the violation of a stated privacy policy) or simple negligence.

The move to cloud computing amplifies and accelerates the potential nightmares.  In the cloud model, data and processing are subcontracted over the network to a potentially-wide array of providers who offer economies of scale, application or functional expertise, scalable hardware or proprietary software.  Data is everywhere, and its disclosure can occur in an exploding number of inadvertent ways.  If a security breach occurs in the course of any given transaction, just untangling which parties handled the data—let alone who let it slip out—could be a logistical (and litigation) nightmare.

The Limits of Negligence

Not all security breaches involve private or personal information, but it’s not surprising that the most notable (or at least the most vividly-reported) breakdowns in security are those that expose consumer or citizen data, sometimes for millions of affected parties.  (Some of the most egregious losses have involved government computers left unsecured, with sensitive citizen data unencrypted on the hard drive.)  Consumer computing activity has surpassed corporate computing and is growing much faster.  Privacy and security are topics that are increasingly hard to disentangle.

Which is not to say that the bungling of data affecting millions of users necessarily translates to legal consequences for the company that held the information.  Under current law, even the most irresponsible behavior by a data handler does not necessarily result in liability.

For one thing, U.S. law does not require companies to spare no expense in protecting data.  As David Navetta points out, courts may find that despite a breach the precautions taken may have nonetheless been economically sensible, meaning that the precautions taken were justified given the likelihood of a breach and the potential consequences that followed.  Adherence to ISO or other industry standards on data security may be sufficient to insulate a company from liability—though not always.  (Courts sometimes find that industry standards are too lax.)

For the most part, tort law still follows the classic negligence formula of the celebrated American jurist Learned Hand, who explained that the duty of courts was to encourage behavior by defendants that made economic sense.  If courts found liability any time a breach occurred, then data handlers would be incentivized to spend inefficient amounts of money on protection, leading to net social loss.  (The classic cases involved sparks from locomotives causing fire damage to crops—perfect avoidance of damage, the courts ruled, would cost too much relative to the harm caused and the probability of its occurring.)
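The Hand formula that the locomotive cases apply can be reduced to a line of arithmetic: a defendant is negligent only when the burden of adequate precautions (B) is less than the probability of loss (P) times its magnitude (L).  A minimal sketch (the function name and numbers are hypothetical, for illustration only):

```python
def negligent(burden: float, probability: float, loss: float) -> bool:
    """Hand formula: a defendant is negligent when the burden of
    adequate precautions (B) is less than the expected harm (P * L)."""
    return burden < probability * loss

# A data handler facing a 1% annual breach risk threatening $10M in harm
# is expected to spend up to $100K/year on security, but no more.
print(negligent(burden=50_000, probability=0.01, loss=10_000_000))   # True: too little was spent
print(negligent(burden=250_000, probability=0.01, loss=10_000_000))  # False: precautions were efficient
```

The point of the rule, as the locomotive cases illustrate, is that spending more than the expected harm to prevent it is itself a social loss the courts will not compel.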

That, at least, is the common law regime that applies in the U.S.  The E.U., under laws enacted in support of its 1995 Privacy Directive, follows a different rule, one that comes closer to product liability law, where any failure leads to per se liability for the manufacturer, or indeed for any company in the chain of sales to a consumer.

A case last week from the Ninth Circuit Court of Appeals, however, reminds us that a finding of liability doesn’t necessarily lead to an award of damages.  In Ruiz v. Gap, a job applicant whose personal information was lost when two laptop computers were stolen from a Gap vendor who was processing applications sued Gap, claiming to represent a class of applicants who were victims of the loss.

All of Ruiz’s claims, however, were rejected.  Affirming the lower court and agreeing with most other courts to consider the issue, the Ninth Circuit held that Ruiz could not sue Gap without a showing of “appreciable and actual damage.”  The cost of forward-looking credit monitoring didn’t count (Gap offered to pay Ruiz for that in any case), nor did speculative claims of future losses.  Actual losses, expressible and provable in monetary terms, were required.

The court also rejected claims under California state law and the state constitution, noting that an “invasion of privacy” does not occur until there is actual misuse of the data contained on the stolen laptops.  (Most laptop thefts are presumably motivated by the value of the hardware, not any data that might reside on the hard drive.)

As Eric Goldman succinctly points out, the Ninth Circuit case highlights some odd behavior by plaintiff class action lawyers in the recent hubbub involving Facebook, Google, and other companies who either change their privacy policies or who use customer data in ways that arguably violate that policy.  “[T]he most disturbing thing,” Eric writes, “is that so many plaintiffs’ lawyers seem completely uninterested in pleading how their clients suffered any consequence (negative or otherwise) from the gaffe at all. Their approach appears to be that the service provider broke a privacy promise, res ipsa loquitur, now write us a check containing a lot of zeros.”

A Surprising Lack of Law – And an Alternative Model for Redress

It’s not just the lawyers who are confused here.  U.S. consumers, riled up by stories in mainstream media, seem to live under the misapprehension that they have some legal right to privacy, or to the protection of personal information, that can be enforced in courts against corporations.

That is true in the E.U., but not in the U.S.  The Constitutional “right to privacy” detailed in U.S. Supreme Court decisions of the last fifty years only applies to protections against government behavior.  There is no Constitutional right to privacy that can be enforced against employers, business partners, corporations, parents, or anyone else.

What about statutes?  With a few specific exceptions for medical information, credit history, and a few other categories, there is no federal law and, for the most part, no state law that protects consumer privacy against corporations.  There’s no law that requires a website to publish its privacy policy, let alone follow it.  Even if a policy constitutes an enforceable contract (not entirely a settled matter of law), the Ruiz case reminds us that breach of contract is irrelevant without evidence of actual monetary damages.

Before storming the barricades demanding justice, however, keep in mind that the law is not the only source of a remedy.  (Indeed, law is rarely the most efficient or effective in any case.)

The lack of a legal remedy for misuse of private information doesn’t mean that companies can do whatever they like with data they collect, or need take no precautions to ensure that information isn’t lost or stolen.

As more and more personal and even intimate data migrates to the cloud, it has become crystal-clear that consumers are increasingly sensitive (perhaps, economically-speaking, over-sensitive) about what happens to it.  Consumers express their unhappiness in a variety of media, including social networking sites, blogs, emails, and tweets.  They can and do put economic pressure on companies whose behavior they find unacceptable:  boycotts, switching to other providers, and through activism that damages the brand of the miscreant.

Even if the law offers no remedy, in other words, the court of public opinion has proven quite effective.  Even without a court ordering them to do so, some of the largest data handlers have made drastic changes to their policies, software, and how they communicate with users.

Looming in the background of these stories is always the possibility that if companies fail to appease their customers, the customers will lobby their elected representatives to provide the kind of legal protections that so far haven’t proven necessary.  But given the mismatch between the pace of innovation and the pace of legal change, legislation should always be the last, not the first, resort.

So expect lots more stories about security breaches, and expect most of them to involve the potential disclosure of personal information.   (That’s one reason that laws requiring disclosure of breaches are a good idea.  Consumers can’t flex their power if they are kept in the dark about behavior they are likely to object to.)

And that means, as we conclude in the seminar, that IT executives making security decisions had better start talking to their counterparts in the general counsel’s office.

Because as hard as it is for those two groups to talk to each other, it’s much harder to have a conversation after a breach than before.  IT makes decisions that affect the legal position of the company; lawyers make decisions that affect the technical architecture of products and services.  The question isn’t whether to formulate a legally-defensible security policy, in other words, but when.

There’s Something About ECPA

I write in “The Laws of Disruption” of the risk of unintended consequences that regulators run in legislating emerging technologies.  Because the pace of change for these technologies is so much faster than it is for law, the likelihood of defining a legal problem and crafting a solution that will address it is very slim.  I give several examples in the book of regulatory actions that quickly become not just obsolete but, worse, wind up having the opposite result to what regulators intended.

An unfortunate example of that problem in the news quite a bit lately is the Electronic Communications Privacy Act or ECPA.   (My first published legal scholarship, in 1994, was an article about a provision of ECPA that allowed law enforcement officers to use evidence they came across by accident in the course of an otherwise lawful wiretap, see “Electronic Communications and the Plain View Exception:  More ‘Bad Physics.’”)

Passed in 1986, ECPA at the time was a model of smart lawmaking in response to changing technologies.  It updated the federal wiretap statute, known as Title III, to take into account the rise of cellular technologies and electronic messages–which didn’t exist when the original law was passed in 1968.

In essence, ECPA brought these new forms of communications under the legal controls of the wiretap law, meaning for example that police could not intercept cell phone transmissions without a warrant, just as under Title III they needed a warrant to intercept wireline calls.  Private interception was also made illegal.

Lost in the Clouds

A lot has happened since 1986, and unfortunately for the most part ECPA hasn’t kept up.  Most significant has been the explosion of new data sources of all varieties, and in particular the now billions (trillions?) of messages sent and received each day by individuals communicating through the Internet.  The potential evidence those messages contain for a variety of investigations—criminal, civil, terror-related—has made them an irresistible target for law enforcement as well as civil litigants.

In addition to the sheer volume of new data sources, the other significant change undermining ECPA’s assumptions has been the movement to cloud-based services, particularly for email.  In the early days of email (say, 1995), ISPs kept messages on their servers only until the user, through a client email program such as Eudora, downloaded the message to his or her personal computer.  Once downloaded, the message was immediately or soon after deleted from the server, if for no other reason than to save storage space.

Storage, however, has gotten cheap, and the potential uses of stored data for a variety of purposes has made it attractive for ISPs and other services (e.g., Google’s Gmail) to retain copies of messages and other user data on a permanent basis.

The drafters of ECPA had great foresight, but they couldn’t have imagined these changes.

Here come the unintended consequences.  Under the law, law enforcement agents hoping to get access to your emails as part of an investigation are required to obtain a warrant, just as they would need a warrant to search your home and seize your computer.

But for data stored on a third-party computer—an ISP or other cloud provider—the warrant requirement applies only to “unopened” messages, and only for 180 days after receipt.  Once the message is opened or 180 days have passed, the stored data can be obtained without a warrant, based on the much lower standard of a subpoena.

In some sense, this means that as users move to cloud computing they are inadvertently and unknowingly waiving protections against law enforcement uses of their data. Keep your data only locally on equipment in your home or office, and the police need a warrant to look at or take it.  Leave it in the cloud somewhere, and they can get at it without much fuss at all.
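The statute’s two-part test, as described above, can be sketched in a few lines (a simplification, of course; the function name and dates are hypothetical, and the real statute has many more wrinkles):

```python
from datetime import date, timedelta

def warrant_required(opened: bool, received: date, today: date) -> bool:
    """ECPA rule as described above: a warrant is needed only for
    messages that are still unopened AND no more than 180 days old;
    anything else can be reached with a mere subpoena."""
    return (not opened) and (today - received <= timedelta(days=180))

# An unopened message from last month still requires a warrant...
print(warrant_required(opened=False, received=date(2010, 6, 1), today=date(2010, 7, 1)))  # True
# ...but an opened one, or one more than 180 days old, does not.
print(warrant_required(opened=True, received=date(2010, 6, 1), today=date(2010, 7, 1)))   # False
print(warrant_required(opened=False, received=date(2009, 1, 1), today=date(2010, 7, 1)))  # False
```

Note how little of the test has anything to do with the sensitivity of the message itself; protection turns entirely on where the data sits and for how long.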

This turn of events, the result not of any secret conspiracy so much as the random confluence of technological inventions since 1986, is almost certainly not what the drafters of ECPA had in mind.  It is more likely to be just the opposite.  For ECPA, like the wiretap law it amended, was intended to give greater protection to communications than what the Fourth Amendment to the U.S. Constitution would otherwise have provided.

A Very Brief History of the Fourth Amendment in Cyberspace

The Fourth Amendment, recall, protects citizens from “unreasonable searches and seizures” by the government.  (We are, it bears emphasizing, talking ONLY about government access here—employers, parents, friends and companies are not subject to the Fourth Amendment.)

Which is to say, the Fourth Amendment is the absolute floor of citizen protections from government.  Title III and ECPA were intended to raise that floor for telephone and later data communications to something that gave citizens more, not less, privacy.

At some point, indeed, technology may push the law below the standards of the Fourth Amendment, making it unconstitutional.  That’s been a concern all along, from the beginning of the wiretap statute itself in 1968.  The passage of Title III followed landmark Supreme Court decisions in the Katz and Berger cases, in which the Court reversed the 1928 Olmstead case, which allowed the police to intercept phone calls of a suspect without a warrant.

The Olmstead decision, Justice Harlan wrote in his concurrence to Katz, was “bad physics as well as bad law, for reasonable expectations of privacy may be defeated by electronic as well as physical invasion,” 389 U.S. at 362 (1967).

Harlan’s phrasing has proven prophetic.  In order to avoid the metaphysical problem of explaining how electronic interception could constitute a “search” or a “seizure” when no physical property of the subject is involved, the Court focused instead on the “reasonable” part of the Fourth Amendment.

Search and seizure, the Court has held over the last fifty years, is really about privacy, and a “reasonable” expectation of privacy for any information law enforcement agents want to gather requires a warrant.  What part of a wiretap is a search and what part a seizure are questions neatly elided (though perhaps too neatly as we’ll see) by the “reasonable expectation of privacy” standard.

The privacy standard has proven at least somewhat resilient to changing technologies.  But with mainstream adoption of revolutionary information technologies comes changing expectations of what is reasonably expected to be “private” information.  Indeed, Olmstead can be seen as a perfectly understandable decision in light of the fact that in 1928 nearly all telephones were connected through party lines, where no caller had any expectation of privacy.

But that also means there is no absolute baseline for Fourth Amendment challenges (usually by a criminal defendant) to evidence collected by the government.  Again, Title III and ECPA can and did set a higher bar than was required as a constitutional minimum, but even as those intentions have been reversed by technology it does not automatically follow that ECPA is now below what the Fourth Amendment requires.

Absent the special protections ECPA may have given them, the question for citizens under Fourth Amendment jurisprudence becomes:  Do users who keep email and other data archived with ISPs and other cloud providers have an expectation of privacy?  Is that expectation reasonable?

The Ugly Details

Not surprisingly, courts are increasingly asked to weigh in on those questions, and the results, also not surprisingly, are inconclusive.  (David Couillard at Ars Technica reviewed some of the case law in a recent article, “The Cloud and the Future of the Fourth Amendment.”)

Earlier this month, the Department of Justice abandoned an attempt to avoid a search warrant even for mail messages less than 180 days old in a case that involved Yahoo mail.  (See Declan McCullagh, “DOJ Abandons Warrantless Attempts to Read Yahoo E-mail.”)

Google, which came to Yahoo’s defense, has begun disclosing just how many requests for information about its users it receives from various government agencies.  (See Jessica Vascellaro, “Google Discloses Requests on Users.”)

It’s also worth noting that sometimes technology goes the other way—making it harder for law enforcement officials to collect evidence and conduct investigations.  Encryption is a good example here—stronger encryption protocols make it easier for criminals to hide activity from the police.

Indeed, law enforcement and privacy advocates are in some sense always engaged in a complicated dance.  As technology constantly changes the delicate balance between the sanctity of private activity and the need for effective law enforcement, lawmakers are regularly asked by one side or the other (or both) to change the law to bring it back into something that satisfies both groups.

The Digital Due Process Coalition

The cloud computing problem has inspired the creation of an interesting coalition aimed at returning ECPA to where its drafters intended to set the scales.  The group, called Digital Due Process, was launched in March and is calling for specific reforms of ECPA to take into account the reality of digital life in 2010.  (For those who want the legal details, the site includes an excellent analysis by my one-time boss Becky Burr, see “The Electronic Communications Privacy Act of 1986: Principles for Reform.”)

The Digital Due Process group is a remarkable coalition of organizations and corporations who might not otherwise be thought to agree on too many issues of technology policy.  It includes advocacy groups normally thought to be on the right or the left, including the ACLU, the Center for Democracy and Technology, the Progress and Freedom Foundation, the Electronic Frontier Foundation and the American Library Association.  Corporate members include Google, AT&T, Microsoft, eBay, and Intel.

One might think that with such specific recommendations and such a wide coalition of support from across the ideological spectrum, ECPA reform would be a slam dunk.  But of course that would ignore one very powerful lobby not represented by Digital Due Process: the lobby of law enforcement agencies.

These agencies almost certainly recognize that the move to cloud computing has given them unintended and unprecedented access to information otherwise protected by the law, but naturally they are loath to let go of any advantage in the fight against crime.

Though there have been some calls in Congress for enacting the reforms called for by the coalition, the success of Digital Due Process is far from certain.  And even if the group does succeed, there’s no telling how long it will be before the scales become unbalanced yet again, or in whose favor, by the next set of disruptive information technologies to become mainstream.

As the saying often attributed to Thomas Jefferson goes, “The price of freedom is eternal vigilance.”