Category Archives: Digital Life

Abuse of the CFAA: The Problem of Prosecutorial Indiscretion

In light of the tragic outcome of the Aaron Swartz case and the renewed interest it has generated in the failings of the Computer Fraud and Abuse Act and the role of prosecutorial discretion in its application, I went back to what I wrote about the law in 2009.

Back then, both the poorly-drafted amendments to the CFAA that expanded its scope from government to private computer networks and the politically-motivated zeal of federal prosecutors reaching for something—anything—with which to punish otherwise legal but disfavored behavior were trained on Lori Drew, a far less sympathetic defendant.

But the dangers lurking in the CFAA were just as visible in 2009 as they are today.  Those who have recently picked up the banner calling for reform of the law might ask themselves where they were back then, and why the ultimately unsuccessful Drew prosecution didn’t raise their hackles at the time.

The law was just as bad in 2009, and just as dangerously twisted by the government.  Indeed, the Drew case, as I wrote at the time, gave all the notice anyone needed of what was to come later.

Here’s the section of The Laws of Disruption from 2009 discussing CFAA:

What did Lori Drew do?

The late-forties suburban St. Louis mother was apparently unhappy about the “mean” behavior of Megan Meier, a thirteen-year-old former friend of Drew’s daughter Sarah. The Drews, along with Ashley Grills, the eighteen-year-old employee of Lori Drew’s home business, hatched a plan. They created a fake MySpace profile for a bare-chested sixteen-year-old boy named “Josh,” who would befriend Megan and encourage her to gossip about other girls. Then they would take printouts to Megan’s mother to show her what the girl was up to.

Not only was the idea stupid, it wasn’t even original—Sarah and Megan, back when they were friends, had done the same thing, creating a profile for a boy who didn’t exist as a way to talk to other boys. This time, however, the plan went awry. Megan became deeply infatuated with Josh. She pressed for his phone number. She wanted to meet him in person. The women behind his account looked for a way out.

According to Grills, “We decided to be mean to her so she would leave him alone . . . and we could get rid of the page.” After deliberating on the easiest way to end an ill-conceived hoax that was going very wrong, Grills sent an instant message to Meier: “The world would be a better place without you.”

The consequences were tragic. Meier, who was being treated for depression, took the suggestion all too literally. After an argument with her parents, who had closely monitored the relationship with Josh from the beginning, Meier went to her room and hanged herself.

Media accounts of the teen’s suicide and the subsequent revelation of who was behind “Josh” created a froth of outrage and hand-wringing. Commentators invented and then proclaimed an epidemic of “cyberbullying.”

When it became clear that the mother of one of Meier’s former friends was involved, Drew herself was subjected to death threats and vandalism. A fake MySpace page for her husband was created. On cable news and the blogosphere, Drew was instantly convicted and sentenced to hell. (“Call me vindictive,” a typical blog entry read, “but i hope that someone kills the woman who is responsible.”)

In the midst of the media storm, state attorneys in Missouri announced there would be no prosecution of Drew for the simple reason that no criminal law had been broken. Federal prosecutors weren’t so sure. They found a 1986 law, the Computer Fraud and Abuse Act, that set stiff penalties for breaking into and damaging computers.

Drew was charged under the novel theory that since the MySpace terms of service agreement prohibits posting false information in one’s profile, the creation of Josh violated Drew’s contract. Hence, she “accessed” MySpace computers without “authorization.” The creation of Josh, in other words, was a kind of hacking. The victim was not Meier (who with her parents’ permission had also violated the TOS, which requires users to be at least fourteen years old). The victim was MySpace.

Although the jury ultimately refused to convict Drew on the felony charge, they did convict her of the lesser crime of unauthorized access. Valentina Kunasz, the jury’s foreperson, made no apologies for the conviction. “It was so very childish; so very pathetic,” she told reporters after the trial. “She could have done quite a few things to stop it, and she chose not to. And I think she got kind of a rise out of doing this to another person and that bothers me, it really irks me.” Drew faces up to three years in prison and $300,000 in fines.

Legal scholars were generally in agreement that the prosecution was deeply flawed and would very likely be set aside or reversed on appeal. (N.B.  Later, it was.) First, there were gaping holes in the government’s case. For one thing, it was Grills, and not Drew, who set up the Josh account and therefore agreed to the TOS (Grills, testifying for the prosecution in exchange for immunity, admitted she never read the TOS). Drew herself was only occasionally involved in the hoax.

By a weird twist of irony, one of the few times she communicated with Meier it turned out she was talking to Meier’s mother, who told Josh he ought to be looking for friends his own age. The fateful message was sent by Grills without Drew’s knowledge, and wasn’t even sent through MySpace.

As a matter of public policy, the prosecution is even more disturbing. Even assuming Drew was bound by the TOS, these contracts are notoriously long and intentionally unreadable. Most of us, even lawyers, don’t read them.

Yet following the logic of the Drew prosecution, anyone who misrepresents some of their personal details on an online dating service has committed a federal crime. Anyone who gives a nonworking telephone number when signing up for a Web site has committed a federal crime.

Indeed, after the verdict, one social network researcher was pained to admit, “We’ve been telling our kids to lie about ID information for a long time now.”

The computer fraud law began as a protection against hackers targeting government computers. The law has never before been used in connection with the violation, willful or otherwise, of private terms of service. There’s no reason to believe Congress intended to criminalize cyberbullying in 1986 or any other time.

Supporters of the conviction argue that the real problem here was a hole in the law—the lack of a statute outlawing whatever it was Lori Drew had done.  But the decision of lawmakers not to criminalize a behavior is no reason to correct the problem in a way that undermines the very idea of law.

People are often cruel to each other. Other children, adults, and even parents can and do humiliate children in the real world. In all but the most extreme cases, no laws are being broken.

It’s difficult to see how this case differs in any respect other than the use of a computer and the tragic outcome.

If the conviction stands, it effectively gives every federal prosecutor a blank check to charge anyone they want with criminal behavior, subject only to their own discretion as to whether and when to use that power.

Some commentators, pleased with the result if not the process, argued that there was no cause for alarm. Prosecutors, they said, will only use this power in extreme cases.

The Drew prosecution suggests precisely the opposite. For elected prosecutors in particular, the real temptation is to exercise discretion not when the law would otherwise let a heinous crime slip through the cracks but when passions are high and the facts (at least the version presented by the media) are the most lurid—when, in other words, an angry mob demands it.

Where to next for the FCC?


Tuesday was a big day for the FCC.  The Senate Commerce, Science and Transportation Committee held an oversight hearing with all five Commissioners, the same day that reply comments were due on the design of eventual “incentive auctions” for over-the-air broadcast spectrum.  And the proposed merger of T-Mobile USA and MetroPCS was approved.

All this activity reflects the stark reality that the Commission stands at a crossroads.  As once-separate wired and wireless communications networks for voice, video, and data converge on the single IP standard, and as mobile users continue to demonstrate insatiable demand for bandwidth for new apps, the FCC can serve as midwife in the transition to next-generation networks.  Or, the agency can put on the blinkers and mechanically apply rules and regulations designed for a bygone era.

FCC Chairman Julius Genachowski, for one, believes the agency is clearly on the side of the future.  In an op-ed last week in the Wall Street Journal, the Chairman took justifiable pride in the focus his agency has demonstrated in advancing America’s broadband advantage, particularly for mobile users.

Mobile broadband has clearly been a bright spot in an otherwise bleak economy.  Network providers and their investors, according to the FCC’s most recent analysis, have spent over a trillion dollars since 1996 building next-generation mobile networks, today based on 4G LTE technology.

These investments are essential for high-bandwidth smartphones and tablet devices and the remarkable ecosystem of voice, video, and data apps they have enabled.  This platform for disruptive innovation has powered a level of “creative destruction” that would do Joseph Schumpeter proud.

Mobile disruptors, however, are entirely dependent on the continued availability of new radio spectrum.  In the first five years following the 2007 introduction of the iPhone, mobile data traffic increased 20,000%.  No surprise, then, that the FCC’s 2010 National Broadband Plan conservatively estimated that mobile consumers desperately needed an additional 300 MHz of spectrum by 2015 and 500 MHz by 2020.
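To put that growth figure in perspective, here is a quick back-of-the-envelope calculation, a sketch that uses only the numbers cited above and assumes nothing else: a 20,000% increase means traffic ended the period at roughly 200 times its starting level, which works out to a compound growth rate of almost 190% per year.

    # Back-of-the-envelope: what a 20,000% traffic increase over five years implies.
    # Uses only the figures cited above; purely illustrative.

    start = 1.0                              # index 2007 mobile data traffic at 1.0
    increase_pct = 20_000                    # reported growth over the period
    end = start * (1 + increase_pct / 100)   # roughly 201x the starting level

    years = 5
    annual_growth = (end / start) ** (1 / years) - 1

    print(f"Traffic multiple over the period: {end:.0f}x")          # -> 201x
    print(f"Implied compound annual growth:   {annual_growth:.0%}")  # -> ~189%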

With nearly all usable spectrum long since allocated, the Plan acknowledged the need for creative new strategies for repurposing existing allocations to maximize the public interest.  But some current licensees, including over-the-air television broadcasters and the federal government itself, are resisting Chairman Genachowski’s efforts to keep the spectrum pipeline open and flowing.

So far, despite bold plans from the FCC for new unlicensed uses of TV “white spaces” and the passage by Congress early in 2012 of “incentive auction” legislation, almost no new spectrum has been made available for mobile consumers.  The last significant auction the agency conducted was in 2008, based on capacity freed up in the digital television transition.

The “shared” spectrum the agency has recently been touting would have to be shared with the Department of Defense and other federal agencies, which have so far stonewalled a 2010 Executive Order from President Obama to vacate their unused or underutilized allocations.  (The federal government is, by far, the largest holder of usable spectrum today, with as much as 60% of the total.)

And after more than a year of ongoing design, there is still no timetable for the incentive auctions.  Last week, FCC Commissioner Jessica Rosenworcel, speaking to the National Association of Broadcasters, urged her colleagues at least to pencil in some dates.  But even in the best-case scenario, it will be years before significant new spectrum comes online for mobile devices.  The statute gives the agency until 2022.

In the interim, the mobile revolution has been kept alive by creative use of secondary markets, where mobile providers have bought and sold existing licenses to optimize current allocations, and by mergers and acquisitions, which allow network operators to combine spectrum and towers to improve coverage and efficiency.  Many transactions have been approved, but others have not.  Efforts to reallocate or reassign underutilized satellite spectrum are languishing in regulatory limbo.  Local zoning bodies continue to slow or refuse permission for the installation of new equipment.  Delays are endemic.

So even as the FCC pursues its visionary long-term plan for spectrum reform, the agency must redouble efforts to encourage optimal use of existing resources.  The agency and the Department of Justice must accelerate review of secondary market transactions, and place the immediate needs of mobile users ahead of hypothetical competitive harms that have yet to emerge.

In conducting the incentive auctions, the agency should keep unrelated conditions and pet projects out of the mix, and should not artificially limit qualified bidders in pursuit of vague policy objectives that have previously spoiled some auctions and unnecessarily depressed prices in others.

Let’s hope the oversight hearing holds Chairman Genachowski to his promise to “[keep] discussions focused on solving problems, and on facts and data . . . so that innovation, private investment and jobs follow.”  We badly need all three.

(A condensed version of this essay appears today in Roll Call.)

Disruptive Technologies and the Watchful Waiting Principle

When the smoke cleared and I found myself half caught-up on sleep, the information and sensory overload that was CES 2013 had ended.

There was a kind of split-personality to how I approached the event this year.  Monday through Wednesday was spent in conference tracks, most of all the excellent Innovation Policy Summit put together by the Consumer Electronics Association.  (Kudos again to Gary Shapiro, Michael Petricone and their team of logistics judo masters.)

The Summit has become an important annual event bringing together legislators, regulators, industry and advocates to help solidify the technology policy agenda for the coming year and, in this case, a new Congress.

I spent Thursday and Friday on the show floor, looking in particular for technologies that embody what I have called the Law of Disruption: social, political, and economic systems change incrementally, but technology changes exponentially.

What I found, as I wrote in a long post-mortem for Forbes, is that such technologies are well-represented at CES, but are mostly found at the edges of the show–literally.

In small booths away from the mega-displays of the TV, automotive, smartphone, and computer vendors, in hospitality suites in nearby hotels, or even in sponsored and spontaneous hackathons going on around town, I found ample evidence of a new breed of innovation and innovators, whose efforts may yield nothing today or even in a year, but which could become sudden, overnight market disrupters.

Increasingly, it’s one or the other, which is saying something all by itself.  For one thing, how do incumbents compete with such all or nothing innovations?

That, however, is a subject for another day.

For now, consider again the policy implications of such dramatic transformations.  As those of us sitting in room N254 debated the finer points of software patents, IP transition, copyright reform, and the misapplication of antitrust law to fast-changing technology industries (increasingly, that means ALL industries), just a few feet away the real world was changing under our feet.

The policy conference was notably tranquil this year, without such previous hot-button topics as net neutrality, SOPA, or the lack of progress on spectrum reform to generate antagonism among the participants.  But as I wrote at the conclusion of last year’s Summit, at CES, the only law that really matters is Moore’s Law.  Technology gets faster, smaller, and cheaper, not just predictably but exponentially.

As a result, the contrast between what the regulators talk about and what the innovators do gets more dramatic every year, accentuating the figurative if not the literal distance between the policy Summit and the show floor.  I felt as if I had moved between two worlds, one that follows a dainty 19th century wind-up clock and the other that marks time using the Pebble watch, a fully-connected new timepiece funded entirely by Kickstarter.

The lesson for policymakers is sobering, and largely ignored.  Humility, caution, and a Hippocratic-like oath of first-do-no-harm are, ironically, the most useful contributions regulators can make if, as they repeat at ever-shorter intervals, their true goal is to spur innovation, create jobs, and rescue American entrepreneurialism.

The new wisdom is simple, deceptively so.  Don’t intervene unless and until it’s clear that there is demonstrable harm to consumers (not competitors), that there’s a remedy for the harm that won’t make things worse, if only unintentionally, and that the next batch of innovations won’t solve the problem more quickly and cheaply.

Or, as they say to new interns in the Emergency Room, “Don’t just do something.  Stand there.”

That’s a hard lesson to learn for those of us who think we’re actually surgical policy geniuses, only to find increasingly we’re working with blood-letting and leeches.  And no anesthesia.

In some ways, it’s the opposite of the approach Adam Thierer calls the Technology Precautionary Principle.  Instead of panicking when new technologies raise new (but likely transient) issues, first try to let Moore’s Law sort it out, unless and until it becomes crystal clear that it can’t.  Instead of a hasty response, opt for a delayed response.  Call it the Watchful Waiting Principle.

Not as much fun as fuming, ranting, and regulating at the first sign of chaos, of course, but far more helpful.

That, in any case, is the thread of my dispatches from Vegas:

  1. “Telcos Race Toward an all-IP Future,” CNET
  2. “At CES, Companies Large and Small Bash Broken Patent System,” Forbes
  3. “FCC, Stakeholders Align on Communications Policy—For Now,” CNET
  4. “The Five Most Disruptive Technologies at CES 2013,” Forbes

The FCC's Reign of Terror on Transaction Reviews

by Larry Downes and Geoffrey A. Manne

Now that the election is over, the Federal Communications Commission is returning to the important but painfully slow business of updating its spectrum management policies for the 21st century. That includes a process the agency started in September to formalize its dangerously unstructured role in reviewing mergers and other large transactions in the communications industry.

This followed growing concern about “mission creep” at the FCC, which, in deals such as those between Comcast and NBCUniversal, AT&T and T-Mobile USA, and Verizon Wireless and SpectrumCo, has repeatedly been caught with its thumb on the scales of what is supposed to be a balance between private markets and what the Communications Act refers to as the “public interest.”

Commission reviews of private transactions are only growing more common—and more problematic. The mobile revolution is severely testing the FCC’s increasingly anachronistic approach to assigning licenses for radio frequencies in the first place, putting pressure on carriers to use mergers and other secondary market deals to obtain the bandwidth needed to satisfy exploding customer demand.

While the Department of Justice reviews these transactions under antitrust law, the FCC has the final say on the transfer of any and all spectrum licenses. Increasingly, the agency is using that limited authority to restructure communications markets, beltway-style, elevating the appearance of increased competition over the substance of an increasingly dynamic, consumer-driven mobile market.

Given the very different speeds at which Silicon Valley and Washington operate, the expanding scope of FCC intervention is increasingly doing more harm than good.

 

Deteriorating Track Record

We’re trapped in a vicious cycle: the commission’s mismanagement of the public airwaves is creating more opportunities for the agency to insert itself into the internet ecosystem, largely to fix problems caused by the FCC in the first place. That is happening despite the fact that Congress clearly and precisely circumscribed the agency’s authority here, a key reason the internet has blossomed while heavily regulated over-the-air broadcasting and wireline telephone fade into history.

Desperate for continued relevance, the FCC can’t resist the temptation to tinker with one of the only segments of the economy that is still growing and investing. The agency, for example, fretted over Comcast’s merger with NBCUniversal for 10 months, approving it only after imposing a 30-page list of conditions, including details about which channels had to be offered in which cable packages.

Regulating-by-merger-condition has become a popular sport at the FCC, one with dangerous consequences. While it conveniently allows the agency to get around the problem of intervening where it has no authority, the result is a regulatory crazy quilt with different rules applying to different companies in different markets. Consumers, the supposed beneficiaries of this micromanagement, cannot be expected to understand the resulting chaos.

For example, Comcast also agreed to abide by an enhanced set of “net neutrality” rules even if, as appears likely, a federal appeals court throws out the FCC’s 2010 industry-wide rulemaking for exceeding the agency’s jurisdiction. As with all voluntary concessions, Comcast’s acquiescence isn’t reviewable in court.

The FCC made an even bigger hash in its review of AT&T’s proposed merger with T-Mobile. Once it became clear that the FCC was bowing to political pressure to reject the deal, the companies pulled their applications for license transfers to focus on winning over the Department of Justice first. But FCC Chairman Julius Genachowski, determined to have his say, simply released an uncirculated draft of the agency’s analysis of the deal anyway.

The report found that the combination, as initially proposed, would control too much spectrum in too many local markets. But that was only after the formula, known as the “spectrum screen,” was manipulated to reduce substantially the amount of frequency included in the denominator. Hidden in a footnote, the report noted cryptically that the reduction was being made (and explained) in an unrelated order yet to be published.

When the other order was released months later, however, it made no mention of the change. It never actually happened. With the T-Mobile deal off the table, apparently, the chairman found it more expedient to leave the screen as it was, at least until further gerrymandering proved useful. Unwittingly, Genachowski had exposed his hand in rigging a supposedly objective test applied by a supposedly independent agency.
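To see why shrinking the denominator matters, here is a minimal sketch of how a holdings screen of this kind operates. The one-third threshold and the MHz figures are assumptions for illustration only, not the FCC’s actual inputs, but the mechanic is the same: remove spectrum from what gets counted, and the same holdings suddenly trip the test.

    # Illustrative sketch of a spectrum "screen": a transaction gets closer review
    # when a carrier's post-deal holdings exceed some share of the spectrum counted
    # as suitable for mobile broadband in a market.  The one-third threshold and
    # the MHz figures below are hypothetical, not the FCC's actual numbers.

    def screen_flagged(holdings_mhz: float, counted_mhz: float,
                       threshold: float = 1 / 3) -> bool:
        """Return True if holdings exceed the threshold share of counted spectrum."""
        return holdings_mhz / counted_mhz > threshold

    holdings = 140.0  # hypothetical post-merger holdings in one market (MHz)

    # With a larger denominator, the same holdings pass the screen...
    print(screen_flagged(holdings, counted_mhz=450.0))   # False (about 31%)

    # ...but quietly shrinking what gets counted flips the result.
    print(screen_flagged(holdings, counted_mhz=400.0))   # True (35%)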

 

Leave it to the Experts

This amateurish behavior, unfortunately, is increasingly the norm at the FCC. Politics aside, part of the problem is that while federal antitrust regulators enforce statutes under a long line of interpretive case law, the FCC’s review of license transfers is governed by an undefined and largely untested public interest standard.

Now the commission is asking interested parties how, if at all, it needs to formalize its transaction review process, particularly the spectrum screen calculation it blatantly manipulated in the AT&T/T-Mobile review. It’s even asking whether it should re-impose a rigid cap on the amount of spectrum any one carrier can license, a bludgeon of a regulatory tool the agency wisely abandoned in 2003.

We have a better idea. Do away with easily forged formulae and proxies with no scientific relevance. Instead, review transactions in the broader context of a dynamic broadband ecosystem that is disciplined not only by inter-carrier competition, but increasingly by device makers, operating system providers, app makers and ultimately by consumers.

Every user with an iPhone 5 knows perfectly well how complex and competitive the mobile marketplace has become. It’s now time for the government to abandon its 19th century toolkit and look at actual data—data that the FCC already collects and dutifully reports, then ignores when political expediency beckons.

Thanks to the FCC’s endemic misadventures in spectrum management, we can expect more, not fewer, mergers—necessitating more, not fewer, commission reviews. Rather than expanding the agency’s unstructured approach to transaction reviews, we should be reining it in. As the FCC embarks on its analysis of T-Mobile’s takeover of MetroPCS and Sprint’s acquisition by SoftBank, it’s time to put an end to dangerous mission creep at the FCC.

That, at least, would better serve the public interest.

(Reprinted, with permission, from Bloomberg BNA Daily Report for Executives, Dec. 6, 2012.  Our recent paper on FCC transaction review can be found at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2163169.)

The Latest Leak Makes Even Clearer UN Plans to Take Over Internet Governance

On Friday evening, I posted on CNET a detailed analysis of the most recent proposal to surface from the secretive upcoming World Conference on International Telecommunications, WCIT 12.  The conference will discuss updates to a 1988 UN treaty administered by the International Telecommunications Union, and throughout the year there have been reports that both governmental and non-governmental members of the ITU have been trying to use the rewrite to put the ITU squarely in the Internet business.

The Russian Federation’s proposal, which was submitted to the ITU on Nov. 13, would explicitly bring “IP-based Networks” under the auspices of the ITU and, in particular, would substantially if not completely change the role of ICANN in overseeing domain names and IP addresses.

According to the proposal, “Member States shall have the sovereign right to manage the Internet within their national territory, as well as to manage national Internet domain names.”  And a second revision, also aimed straight at the heart of today’s multi-stakeholder process, reads:  “Member States shall have equal rights in the international allocation of Internet addressing and identification resources.”

Of course the Russian Federation, along with other repressive governments, uses every opportunity to gain control over the free flow of information, and sees the Internet as its most formidable enemy.  Earlier this year, Prime Minister Vladimir Putin told ITU Secretary-General Hamadoun Touré that Russia was keen on the idea of “establishing international control over the Internet using the monitoring and supervisory capability of the International Telecommunications Union.”

As I point out in the CNET piece, the ITU’s claims that WCIT has nothing to do with Internet governance and that the agency itself has no stake in expanding its jurisdiction ring more hollow all the time.  Days after receiving the Russian proposal, the ITU wrote in a post on its blog that, “There have not been any proposals calling for a change from the bottom-up multistakeholder model of Internet governance to an ITU-controlled model.”

This would appear to be an outright lie, and also a contradiction of an earlier acknowledgment by Dr. Touré.  In a September interview, Touré told Bloomberg BNA that “Internet Governance as we know it today” concerns only “Domain Names and addresses.  These are issues that we’re not talking about at all,” Touré said. “We’re not pushing that, we don’t need to.”

The BNA article continues:

Touré, expanding on his emailed remarks, told BNA that the proposals that appear to involve the ITU in internet numbering and addressing were preliminary and subject to change.

‘These are preliminary proposals,’ he said, ‘and I suspect that someone else will bring another counterproposal to this, we will analyze it and say yes, this is going beyond, and we’ll stop it.’

Another tidbit from the BNA Interview that now seems ironic:

Touré disagreed with the suggestion that numerous proposals to add a new section 3.5 to the ITRs might have the effect of expanding the treaty to internet governance.

‘That is telecommunication numbering,’ he said, something that preceded the internet. Some people, Touré added, will hijack a country code and open a phone line for pornography. ‘These are the types of things we are talking about, and they came before the internet.’

I haven’t seen all of the proposals, of course, which are technically secret.  But the Russian submission’s most outrageous provisions are contained in a proposed new section 3A, titled “IP-based Networks.”

There’s more on the ITU’s subterfuge in Friday’s CNET piece, as well as these earlier posts:

1.  “Why is the UN Trying to Take Over the Internet?” Forbes.com, Aug 9, 2012.

2.  “UN Agency Reassures:  We Just Want to Break the Internet, Not Take it Over,” Forbes.com, Oct. 1, 2012.

What Google Fiber, Gig.U and US Ignite Teach us About the Painful Cost of Legacy Regulation

On Forbes today, I have a long article on the progress being made to build gigabit Internet testbeds in the U.S., particularly by Gig.U.

Gig.U is a consortium of research universities and their surrounding communities created a year ago by Blair Levin, an Aspen Institute Fellow and, recently, the principal architect of the FCC’s National Broadband Plan.  Its goal is to work with private companies to build ultra high-speed broadband networks with sustainable business models.

Gig.U, Google Fiber’s Kansas City project, and the White House’s recently-announced US Ignite project spring from similar origins and have similar goals.  The shared belief is that by building ultra high-speed broadband in selected communities, consumers, developers, network operators, and investors will get a clear sense of the true value of Internet speeds that are 100 times as fast as those available today through high-speed cable-based networks.  And then go build a lot more of them.

Google Fiber, for example, announced last week that it would be offering fully-symmetrical 1 Gbps connections in Kansas City, perhaps as soon as next year.  (By comparison, my home broadband service from Xfinity is 10 Mbps download and considerably slower going up.)
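For a rough sense of what that gap means in practice, here is a small sketch using only the two speeds mentioned above; the 5 GB file size is hypothetical, and the times ignore protocol overhead and congestion.

    # Rough download-time comparison for the speeds mentioned above.
    # The 5 GB file is hypothetical; times ignore overhead and congestion.

    def transfer_minutes(size_gb: float, speed_mbps: float) -> float:
        """Minutes to move a file of the given size at the given line rate."""
        size_megabits = size_gb * 8 * 1000      # GB -> megabits (decimal units)
        return size_megabits / speed_mbps / 60

    movie_gb = 5.0

    for label, mbps in [("10 Mbps cable", 10), ("1 Gbps fiber", 1_000)]:
        print(f"{label:>14}: {transfer_minutes(movie_gb, mbps):6.1f} minutes")

    # -> 10 Mbps cable:   66.7 minutes
    # ->  1 Gbps fiber:    0.7 minutes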

US Ignite is encouraging public-private partnerships to build demonstration applications that could take advantage of next generation networks and near-universal adoption.  It is also looking at the most obvious regulatory impediments at the federal level that make fiber deployments unnecessarily complicated, painfully slow, and unduly expensive.

I think these projects are encouraging signs of native entrepreneurship focused on solving a worrisome problem:  the U.S. is nearing a dangerous stalemate in its communications infrastructure.  We have the technology and scale necessary to replace much of our legacy wireline phone networks with native IP broadband.  Right now, ultra high-speed broadband is technically possible by running fiber to the home.  Indeed, Verizon’s FiOS network currently delivers 300 Mbps broadband and is available to some 15 million homes.

But the kinds of visionary applications in smart grid, classroom-free education, advanced telemedicine, high-definition video, mobile backhaul and true teleworking that would make full use of a fiber network don’t really exist yet.  Consumers (and many businesses) aren’t demanding these speeds, and Wall Street isn’t especially interested in building ahead of demand.  There’s already plenty of dark fiber deployed, the legacy of earlier speculation that so far hasn’t paid off.

So the hope is that by deploying fiber to showcase communities and encouraging the development of demonstration applications, entrepreneurs and investors will get inspired to build next generation networks.

Let’s hope they’re right.

What interests me personally about the projects, however, is what they expose about regulatory disincentives that unnecessarily and perhaps fatally retard private investment in next-generation infrastructure.  In the Forbes piece, I note almost a dozen examples from the Google Fiber development agreement where Kansas City voluntarily waived permits, fees, and plodding processes that would otherwise delay the project.  As well, in several key areas the city actually commits to cooperate and collaborate with Google Fiber to expedite and promote the project.

As Levin notes, Kansas City isn’t offering any funding or general tax breaks to Google Fiber.  But the regulatory concessions, which implicitly acknowledge the heavy burden imposed on those who want to deploy new privately-funded infrastructure (many of them the legacy of the early days of cable TV deployments), may still be enough to “change the math,” as Levin puts it, making otherwise unprofitable investments justifiable after all.

Just removing some of the regulatory debris, in other words, might itself be enough to break the stalemate that makes building next generation IP networks unprofitable today.

The regulatory cost puts a heavy thumb on the side of the scale that discourages investment.  Indeed, as fellow Forbes contributor Elise Ackerman pointed out last week, Google has explicitly said that part of what made Kansas City attractive was the lack of excessive infrastructure regulation, and the willingness and ability of the city to waive or otherwise expedite the requirements that were on the books.  (Despite the city’s promises to bend over backwards for the project, she notes, there have still been expensive regulatory delays that promoted no public values.)

Particularly painful to me was testimony by Google Vice President Milo Medin, who explained why none of the California-based proposals ever had a real chance.  “Many fine California city proposals for the Google Fiber project were ultimately passed over,” he told Congress, “in part because of the regulatory complexity here brought about by [the California Environmental Quality Act] and other rules. Other states have equivalent processes in place to protect the environment without causing such harm to business processes, and therefore create incentives for new services to be deployed there instead.”

Ouch.

This is a crucial insight.  Our next-generation communications infrastructure will surely come, when it does come, from private investment.  The National Broadband Plan estimated it would take $350 billion to get 100 Mbps Internet to 100 million Americans through a combination of fiber, cable, satellite and high-speed mobile networks.  Mindful of reality, however, the plan didn’t even bother to consider the possibility of full or even significant taxpayer funding to reach that goal.
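A quick per-person calculation, a sketch that assumes the Plan’s $350 billion estimate is spread evenly across the 100 million Americans to be reached (which, of course, it would not be), shows the scale involved:

    # Rough per-capita cost implied by the National Broadband Plan's estimate.
    # Assumes the $350 billion is spread evenly over the 100 million Americans
    # to be reached; real deployment costs vary enormously by geography.

    total_cost = 350e9       # dollars
    people_reached = 100e6

    print(f"~${total_cost / people_reached:,.0f} per person reached")   # -> ~$3,500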

Unlike South Korea, we aren’t geographically small, with a largely urban population living in just a few cities.  We don’t have a largely nationalized and taxpayer-subsidized communications infrastructure.  On a per-person basis, deploying broadband in the U.S. is much harder, more complicated, and more expensive than it is in many competing nations in the global economy.

Of course, nationwide fiber and mobile deployments by network operators including Verizon and AT&T can’t rely on gimmicks like Google Fiber’s hugely successful competition, where 1,100 communities applied to become a test site.  Nor can they, like Gig.U, cherry-pick research university towns, which have the most attractive demographics and density to start with.  Nor can they simply call themselves start-ups and negotiate the kind of freedom from regulation that Google and Gig.U’s membership can.

Large-scale network operators need to build, if not everywhere, then to an awful lot of somewheres.  That’s a political reality of their size and operating model, as well as the multi-layer regulatory environment in which they must operate.  And it’s a necessity of meeting the ambitious goal of near-universal high-speed broadband access, and of many of the applications that would use it.

Under the current regulatory and economic climate, large-scale fiber deployment has all but stopped for now.  Given the long lead-time for new construction, we need to find ways to restart it.

So everyone who agrees that gigabit Internet is a critical element in U.S. competitiveness in the next decade or so ought to look closely at the lessons, intended or otherwise, of the various testbed projects.  They are exposing in stark detail a dangerous and useless legacy of multi-level regulation that makes essential private infrastructure investment economically impossible.

Don’t get me wrong.  The demonstration projects and testbeds are great.  Google Fiber, Gig.U, and US Ignite are all valuable efforts.  But if we want to overcome our “strategic bandwidth deficit,” we’ll need something more fundamental than high-profile projects and demonstration applications.  To start with, we’ll need a serious housecleaning of legacy regulation at the federal, state, and local level.

Regulatory reform might not be as sexy as gigabit Internet demonstrations, but the latter ultimately won’t make much difference without the former.  Time to break out the heavy demolition equipment—for both.