Category Archives: Criminal

More on Protect IP Act – the Surprisingly Free Podcast

I’m pleased to be the guest this week on Jerry Brito’s “Surprisingly Free” podcast for the Mercatus Center at George Mason University.

Jerry and I talk about enforcing copyright and trademark law online, and in particular the dangers of the recently-introduced Protect IP Act.  Protect IP tries to solve the problem of foreign websites selling unlicensed or counterfeit goods, but the tools it offers are a poor match for the realities of the global network, and if passed would likely do more harm than good.

Later:  Alternatives better suited to the real, underlying problems of digital content.

What the Protect IP Act says about the current state of the Internet content wars

I’ve written two articles on the Protect IP Act of 2011, introduced last week by Sen. Leahy (D-Vt.).

For CNET, I look at some of the key differences, for better and worse, between Protect IP and last year’s predecessor bill, known as COICA.

On Forbes this morning, I have a long meditation on what Protect IP says about the current state of the Internet content wars.  Copyright, patent, and trademark are under siege from digital technology, and for now at least are clearly losing the arms race.

The new bill isn’t exactly the nuclear option in the fight between the media industries and everyone else, but it does signal increased desperation.

I’m not exactly a non-combatant here.  Increasingly, everyone is being dragged into this fight, including search engines, ISPs, advertisers, financial transaction processors, and, if Protect IP is passed, anyone who uses a hyperlink.

But as someone who earns his living from information exchanges–what the law anachronistically calls “intellectual property”–I’m not exactly an anarchist either (or as one recent commenter on CNET called me, a complete anarchist!).

The development of an information economy will stabilize and mature at some point, and, I believe, the new supply chain will be richer, more profitable, and will give those who actually create new content a greater share of the value than the current one does.  (Most of the cost of information products and services today is eaten up by middlemen, media, and distribution.)

But it’s not an especially smooth or predictable trajectory.  Joseph Schumpeter didn’t call it creative destruction for nothing.

 

Why no one will join the Global Network Initiative

I’ve posted a long article on Forbes.com this morning on the Global Network Initiative. A non-profit group aimed at improving human rights through the agency of information technology companies, GNI has never really gotten off the ground.

Since its formal launch in 2008, following two years of negotiations among tech companies, human rights groups and academics, not a single company has agreed to join beyond the original members–Google, Yahoo and Microsoft.

This despite considerable pressure from supporters of GNI, including Senator Richard Durbin (D-IL), Chair of the Senate Judiciary’s Subcommittee on Human Rights. Indeed, in the wake of uprisings in Tunisia, Egypt, Libya and elsewhere and the seminal role played by social media and other IT, a full-court press has been launched against Facebook and Twitter in particular for failing to sign up.

Doing Nothing to Save the Internet

My essay last week for Slate.com (the title I proposed is above, but it must have been too “punny” for the editors) generated a lot of feedback, for which I’m always grateful, even when it’s hostile and ad hominem.  Which much of it was.

The piece argues generally that when it comes to the Internet, a disruptive technology if ever there was one, the best course of action for traditional, terrestrial governments intent on “saving” or otherwise regulating digital life is to try as much as possible to restrain themselves.  Or as they say to new interns in the operating room, “Don’t just do something.  Stand there.”

This is not an argument in favor of anarchy, or even more generally for social Darwinism.  I have something much more practical in mind.  Disruptive technologies, by definition, do not operate within the “normal science” of those areas of life they impact. Their problems can’t be solved by reference to existing systems and institutions. In the case of the Internet, that’s pretty much all aspects of life, including regulation.

By design, modern democratic government is deliberative, incremental, and slow to change.  That is an appropriate model for regulating traditional areas including property, torts, criminal procedure, civil rights and business law.    But when applied to a new ecosystem—to a new frontier, as I suggest in the piece—that model doesn’t work.

Digital life is changing much faster than traditional regulators can hope to keep up with.  It isn’t just an interesting business use of information anymore, it’s a social phenomenon, one that has gone far beyond companies finding more effective ways to share data.  It’s also, increasingly, a global phenomenon, a poor match for local and even national lawmaking.

Digital life moves at the speed of Moore’s Law, and that is the source of its true regulation.  The Internet—acting through its engineers, its users, and its enterprises–governs itself and, while far from perfect, certainly seems to be doing a better job than traditional governments in their traditional venues, let alone online.

The piece gives a short quote from Frederick Jackson Turner, the groundbreaking historian of the American West.  The full quote gives additional context to my frontier analogy:

The policy of the United States in dealing with its land is in sharp contrast with the European system of scientific administration.  Efforts to make this domain a source of revenue, and to withhold it from emigrants in order that settlement might be compact, were in vain.  The jealousy and fears of the East were powerless in the face of the demands of the frontiersman.  John Quincy Adams was obliged to confess:  “My own system of administration, which was to make the national domain the inexhaustible fund for progressive and unceasing internal improvement, has failed.”  The reason is obvious:  a system of administration was not what the West demanded:  it wanted land.

A few key points from this passage are worth highlighting:

1.      Parochialism – Traditional governments attempting to regulate new and disruptive technologies rarely have the best interests of the users in mind.  Instead, they try to exploit the new ecosystem, at best, as a stalking horse for regulation they couldn’t get away with in traditional contexts but hope to foist off on the more poorly-organized inhabitants of the frontier.  At worst, governments captured by the vested interests most threatened by the disruption of the new technology attempt to slow down the pace of change, to preserve the interests of those in the process of being upended.

That’s in part why, despite increasingly desperate efforts by the East to impose its regulatory will on the West, those efforts failed.  The East was interested in exploiting western lands for its own benefit, not in optimizing the West’s potential to create a new kind of society and economic system.  The East was working against the momentum of transformation.  It understood little of how frontier life was evolving, and its laws couldn’t keep up with the pace of change even if they were enforceable, which they weren’t.  Nor should they have been.

One need only look to one of the first U.S. efforts to regulate the Internet for an example of the first kind of lawmaking.  The Communications Decency Act, passed in 1996 and signed by President Clinton, banned classes of content on the Internet that were perfectly legal in the U.S. in any other media.  (Similar bans have been enacted, often with more bite or more focused morality, in other countries, including Thailand, Pakistan, China, and the E.U.)

That law, along with subsequent efforts to impose an antediluvian morality on U.S. Internet users, was summarily tossed out by the U.S. Supreme Court as a facial violation of the First Amendment.  Its passage inspired John Perry Barlow to issue his famous “Declaration of the Independence of Cyberspace,” which pointed out correctly that traditional governments have anything but the best interests of this new environment in mind when they put pen to paper.

As an example of regulation to protect vested (and obsoleting) interests, consider the 1998 Digital Millennium Copyright Act, in which content owners unwilling or unable to adapt to the new physics of digital distribution convinced their lawmakers to impose brutally restrictive new limits on digital technologies.  They bought themselves far greater protection from reverse engineering, fair use, and the First Sale doctrine than they had achieved in the real world.

Whether those protections are enforceable, and whether content owners have used the time the law bought them to prepare for a more orderly transition to digital life, remain to be seen.  But the prospects are predictably poor.  Just ask Pope Urban VIII, who condemned Galileo for insisting that the Earth revolved around the Sun.  No matter how long Galileo stayed in prison, the orbits didn’t change.

Indeed, it’s hard without doing an exhaustive survey to think of a single piece of traditional law aimed at helping or saving the Internet that wasn’t at best naïve and at worst intentionally harmful–including laws that grant law enforcement more powers online than they have in their native territory.  That’s why I’m surprised when some of my fellow frontiersmen short-sightedly rush back to Washington at the first sign of trouble with Native populations, or with saloon-keepers, or with the railroads, or with any other participant in the ecosystem who isn’t living up to their standards.  They should know that it’s both dangerous and pointless to do so.

2.      Impotence – In some sense, in other words, it doesn’t matter whether terrestrial governments regulate or not.  We have ample evidence – file-sharing, spam, political dissent, porn, gambling–that even those activities that have been banned go on without much regard for the legal consequences.  The government of Egypt (and Burma, and Pakistan, and China) can shut down Internet access for a short or for a long period of time.  But the disruption in service is a mere blink of an eye in Internet time.  Let’s see who wins the stand-off that ensues, and how quickly the Law of Disruption takes hold.  Bets gladly accepted here.

As Barlow wrote in his Declaration, “You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.”  Put another way, in nearly every conflict between Moore’s Law and traditional law, Moore’s Law wins.  Digital life will make its own “social contract” whether traditional governments give it permission to or not.

3.      Reverse engineering government – To repeat, the absence or ineffectiveness of traditional regulators in digital life does not translate to anarchy and chaos.  There is a social contract to online life, and it will be followed by more organized and organic forms of governance.  As I wrote in the piece, “the posse and the hanging tree gave way to local sheriffs and circuit-riding judges.”

That does not mean, however, that over time the old forms of government and regulation will finally win the battle and establish their norms on digital life.  Quite the opposite.  What has been and will continue to develop are forms of online governance that are suited to the unique environmental properties of digital life.

For now, we can already see that the new institutions will be more democratic–more directly democratic—for better and for worse.  (As Madison wrote, “Had every Athenian citizen been a Socrates, every Athenian assembly would still have been a mob.”)  Watch how the users of Facebook, Twitter, YouTube, World of Warcraft, iTunes, and Android respond to efforts by the sovereigns of these domains to dictate the terms of the social contract, and you’ll see how the new social contract is being worked out.

There’s more.  Turner points out that the organic forms of governance that emerged from the American West didn’t simply create a new form of frontier law.  They created American law.  Once the global inhabitants of digital life work out their rules and enforcement mechanisms, in other words, they are unlikely to settle for a system any less efficient back on terra firma.  Turner writes, “Steadily, the frontier of settlement advanced and carried with it individualism, democracy, and nationalism, and powerfully affected the East and the Old World.”

Who will impose their collective will on whom, and which form of government will become obsolete?  Again, anyone care to place a wager?

This is starting to sound like the outline of something much longer.  So I’ll stop there.

Congress's Tech Agenda: Something Old, Something Older

I reported for CNET yesterday on highlights from the State of the Net 2011 conference, sponsored by the Advisory Committee to the Congressional Internet Caucus.  Though I didn’t attend last year’s event, I suspect much of the conversation hasn’t changed.

For an event that took place nearly a month after the FCC’s “final” vote on net neutrality, the issue seems not to have quieted down in the least.  A fiery speech from Congresswoman Marsha Blackburn promised a “Congressional hurricane” in response to the FCC’s perceived ultra vires decision to regulate where Congress has refused to give it authority, a view supported by House and Senate counsel who spoke later in the day.

There seemed to be agreement from Republicans and Democrats that undoing the Open Internet Report and Order was the Republicans’ top priority on the tech agenda.  Blackburn has already introduced a bill, with at least one Democratic co-sponsor, to make clear (clearer?) that the FCC has no authority to regulate any Internet activity.  And everyone agreed that the Republicans would move forward with a resolution of disapproval under the Congressional Review Act, and that the resolution would pass the House and probably the Senate.  (Such resolutions are filibuster-proof, so Senate Republicans would need only a few Democrats.)

House Energy and Commerce senior counsel Neil Fried had mentioned the CRA resolution at CES a few weeks ago.  But now it’s been upgraded from a possibility to a likelihood.

The disagreement comes over whether President Obama would veto the resolution. Speculating in a vacuum, as many participants did, doesn’t really help.   The answer will ultimately depend on what other horse trading is in progress at the time.  (See:  tax cuts, health care, etc.)  Much as those of us who follow net neutrality may think it’s the center of the political universe, the reality is that it could easily become a bargaining chip.

That’s especially so given that almost no one was happy with the rules as they were finally approved.   Among advocates, opponents, and even among the five FCC Commissioners, only Chairman Genachowski had any enthusiasm for the Order.  (He may be the only enthusiast, full stop.  On a panel on which I participated on the second day, advocates for net neutrality were tepid in their support of the Order or its prospects in court.  I think tepid is being generous.)

And everyone agreed that there would be legal challenges based on the FCC’s dubious statutory authority.  Amy Schatz of the Wall Street Journal said she knew of several lawyers in town shopping for friendly courts, and that pro-regulation advocates may themselves challenge the rule.  Timing could be important, or not.

Beyond net neutrality, which seems likely to dominate the tech agenda for the first six months of the new Congress, bi-partisan words were flung over the need to resolve the imminent (arrived?) “spectrum crisis,” and to reform the bloated and creaky Universal Service Fund.  These, it’s worth remembering, were two of the top priorities from last year’s National Broadband Plan, which sadly disappeared into the memory hole soon after publication.

Other possible agenda items I heard over the course of the two-day event, but much farther down the list:  revival of COICA (giving DHS new powers to seize domains used for trademark and copyright violations), privacy, cloud computing, cybersecurity, ECPA reform, retransmission, inter-carrier compensation, and the Comcast/NBC merger.  I missed a few panels, so I’m sure there was more.

What are the chances any of these conversations will actually generate new law?  Anybody?

“Fake Neutrality” or Government Takeover?: Reading the FCC’s Net Neutrality Report (Part III)

In Part I of this analysis of the FCC’s Report and Order on “Preserving the Open Internet,” I reviewed the Commission’s justification for regulating broadband providers.   In Part II, I looked at the likely costs of the order, in particular the hidden costs of enforcement.  In this part, I compare the text of the final rules with earlier versions.  Next, I’ll look at some of the exceptions and caveats to the rules—and what they say about the true purpose of the regulations.

In the end, the FCC voted to approve three new rules that apply to broadband Internet providers.  The first (§8.3) requires broadband access providers to disclose their network management practices to consumers.  The second (§8.5) prohibits blocking of content, applications, services, and non-harmful devices.  The third (§8.7) forbids fixed broadband providers (e.g., cable and telephone) from “unreasonable” discrimination in transmitting lawful network traffic to a consumer.

There has of course been a great deal of commentary and criticism of the final rules, much of it reaching fever pitch before the text was even made public.  At one extreme, advocates for stronger rules have rejected the new rules as meaningless, as “fake net neutrality,” “non neutrality,” or the latest evidence that the FCC has been captured by the industries it regulates.  At the other extreme, critics decry the new rules as a government takeover of the Internet, censorship, and a dangerous and unnecessary interference with a healthy digital economy.  (I agree with that last one.)

One thing that has not been seriously discussed, however, is just how little the final text differs from the rules originally proposed by the FCC in October, 2009.  Indeed, many of those critical of the weakness of the final rules seem to forget their enthusiasm for the initial draft, which in key respects has not changed at all in the intervening year of comments, conferences, hearings, and litigation.

The differences—significant and trivial—that have been made can largely be traced to comments the FCC received on the original draft, as well as interim proposals made by industry and Congress, particularly the framework offered by Verizon and Google in August and a bill circulated by Rep. Henry Waxman just before the mid-term elections.

1.      Transparency

Compare, for example, the final text of the transparency rule with the version first proposed by the FCC.

Subject to reasonable network management, a provider of broadband Internet access service must disclose such information as is reasonably required for users and content, application and service providers to enjoy the protections specified in this part. (Proposed)

A person engaged in the provision of broadband Internet access service shall publicly disclose accurate information regarding the network management practices, performance and commercial terms of its broadband Internet access service sufficient for consumers to make informed choices regarding use of such services and for content, application, service and device providers to develop, market and maintain Internet offerings. (Final)

The final rule is much stronger, and makes clearer what it is that must be disclosed.  It is also not subject to the limits of reasonable network management.  Rather than the draft’s vague requirement of disclosures sufficient to “enjoy the protections” of the open Internet rules, the final rule requires disclosures sufficient for consumers to make “informed choices” about the services they pay for, a standard more easily enforced.

By comparison, the final rule comes close to the version that appeared in draft legislation circulated but never introduced by Rep. Henry Waxman in October of 2010. It likewise reflects the key concepts in the Verizon-Google Legislative Framework Proposal from earlier in the year.

As the Report makes clear (¶¶ 53-61), the transparency rule has teeth.  Though the agency declines for now to make specific decisions about the contents of the disclosure and how it must be posted, the Report lays out a non-exhaustive list of nine major categories of disclosure, including network practices, performance characteristics, and commercial terms, that must be included.  It’s hard to imagine a complying disclosure that will not run to several pages of very small text.

That generosity, of course, may be the rule’s undoing.  As anyone who has ever thrown away a required disclosure from a service provider (mortgage, bank, drug, electronic device, financial statement, privacy, etc.) knows full well, information “sufficient” to make an informed choice is far more information than any non-expert consumer could possibly absorb and evaluate, even if they wanted to.   The more information consumers are given, the less likely they’ll pay attention to any of it, including what may be important.

The FCC recognizes that risk but believes it has an answer.  “A key purpose of the transparency rule,” the Commission notes (¶ 60), “is to enable third-party experts such as independent engineers and consumer watchdogs to monitor and evaluate network management practices, in order to surface concerns regarding potential open Internet violations.”

Perhaps the agency has in mind here organizations like BITAG, which has been established by a wide coalition of participants in the Internet ecosystem to develop “consensus on broadband network management practices or other related technical issues.”  As for consumer watchdogs, perhaps the agency imagines that some of the public interest groups who have most strenuously rallied for the rules will become responsible stewards of their implementation, trading the acid pens of political rhetoric for responsible analysis and advocacy for their members and other consumers.

We’ll see.  I wish I shared the Commission’s confidence that, “for a number of reasons” (none cited), “the costs of the disclosure rule we adopt today are outweighed by the benefits of empowering end users and edge providers to make informed choices….”  (¶ 59).  But I don’t. Onward.

2.       Blocking

The final version of the blocking rule (§8.5) consolidated the original draft’s separate rules on content, applications and services, and devices.  The final rule states:

A person engaged in the provision of fixed broadband Internet access services, insofar as such person is so engaged, shall not block lawful content, applications, services or non-harmful devices, subject to reasonable network management.

A more limited rule applies to mobile broadband providers, who

[S]hall not block consumers from accessing lawful websites, subject to reasonable network management, nor shall such person block applications that compete with the providers’ voice or video telephony services, subject to reasonable network management.

Much of the anguish over the final rules that has been published so far relates to a few of the limitations built into the blocking rule.  First, copyright-reform activists object to the word “lawful” appearing in the rule.  “Lawful” content, applications, and services do not include activities that constitute copyright and trademark infringement.  Therefore, the rule allows broadband providers to use whatever mechanisms they want (or may be required to use) to reduce or eliminate traffic that involves illegal file-sharing, spam, viruses and other malware, and the like.

A provider who blocks access to a site selling unlicensed products, in other words, is not violating the rules.  And as the agency finds it “generally preferable to neither require nor encourage broadband providers to examine Internet traffic in order to discern which traffic is subject to the rules” (¶ 48), there will be a considerable margin of error given to providers who block sites, services, or applications that may include some legal components.

On this view, though the FCC otherwise contradicts it—see footnote 245 and elsewhere—a complete ban on the BitTorrent protocol, for better or worse, might not be a violation of the blocking rule.  Academic studies have shown that over 99% of BitTorrent traffic constitutes unlicensed file sharing of protected content.  Other than inspecting individual torrents, which the agency disfavors, how else can an access provider determine what tiny minority of BitTorrent traffic is in fact lawful?

A second concern is the repeated caveat for “reasonable network management,” which gives access providers leeway to balance traffic during peak times, limit users whose activity may be harming other users, and other “legitimate network management” purposes.

Finally, disappointed advocates object to the special treatment for mobile broadband, which may, for example, block applications, services or devices without violating the rule.  There is an exception to the exception for applications, such as VoIP and web video, that compete with the provider’s own offerings, but that special treatment doesn’t keep mobile providers from using “app stores” to exclude services they don’t approve.  (See ¶ 102)

Of course even the original draft of the rules included the limitation for “reasonable network management,” and refused to apply any of the rules to unlawful activities.  The definition of “reasonable network management” in the original draft is different from, but functionally equivalent to, the final version.

The carve-out for mobile broadband, however, is indeed a departure from the original rules.  Though the Oct. 2009 Notice of Proposed Rulemaking expressed concern about applying the same rule to fixed and mobile broadband (see ¶¶ 13, 154-174), the draft blocking rule did not distinguish between fixed and mobile Internet access.  The FCC did note, however, that different technologies “may require differences in how, to what extent, and when the principles apply.”  The agency sought comment on these differences (and asked for further comment in a later Notice of Inquiry).  Needless to say, it heard plenty.

Wireless broadband is, of course, a newer technology, and one still very much in development.  Spectrum is limited, and capacity cannot easily be added.  Those are not so much market failures as they are regulatory failures.  The FCC is itself responsible for managing the limited radio spectrum, and has struggled by its own admission to allocate spectrum for its most efficient and productive uses—indeed, even to develop a complete inventory of who has which frequencies of licensed spectrum today.

Adding capacity faces another regulatory obstacle.  Though mobile users rail against their providers for inadequate or unreliable coverage, no one, it seems, wants cellular towers and other equipment near where they live.  Local regulators, who must approve new infrastructure investments, take such concerns very much to heart.  (There is also rampant corruption and waste in the application, franchising, and oversight processes at the state and local levels, a not-very-secret secret.)

The FCC, it seems, has taken these concerns into account in the final rule.  Its original open Internet policy statements—from which the rules derive—applied only to fixed broadband access, and the October, 2009 draft’s inclusion of mobile broadband came as a surprise to many.

The first indication that the agency was considering a return to the original open Internet policy came with the Verizon-Google proposal, where the former net neutrality adversaries jointly released a legislative framework (that is, something they hoped Congress, not the FCC, would take seriously) that gave different treatment to mobile.  As the V-G proposal noted, “Because of the unique technical and operational characteristics of wireless networks, and the competitive and still-developing nature of wireless broadband services, only the transparency principle would apply to wireless at this time.”

The Waxman proposal didn’t go as far as V-G, however, instead adding a provision that closely tracks the final rule.  Under the Waxman bill, mobile providers would have been prohibited from blocking “lawful Internet websites” and applications “that compete with the providers’ voice or video communications services.”

So the trajectory of the specialized treatment for mobile broadband is at least clear and, for those following the drama, entirely predictable.  Yet the strongest objections to the final rule and the loudest cries of betrayal from neutrality advocates came from the decision to burden mobile providers less than their fixed counterparts.  (Many providers offer both, of course, so will be subject to different rules for different parts of their service.)

At the very least, the advocates should have seen it coming.  Many did.  A number of “advocacy” groups demonized Google for its cooperation with Verizon, and refused to support Waxman’s bill.  (It should also be noted that none of the groups objecting to the final rules or any interim version ever actually proposed their own version—that is, what they actually wanted as opposed to what they didn’t want.)

3.      Unreasonable discrimination

The final rule, applicable only to fixed broadband providers, demands that a provider not “unreasonably discriminate in transmitting lawful network traffic over a consumer’s broadband Internet access service.”  (§ 8.7, and see ¶¶ 68-79 of the Report).

Though subtle, the difference in language between the NPRM and the final rule is significant, as the FCC acknowledges.  The NPRM draft rule stated plainly that “a provider of broadband Internet access service must treat lawful content, applications, and services in a nondiscriminatory manner.”

The difference here is between “nondiscrimination,” which prohibits all forms of differential network treatment, and “unreasonable discrimination,” which allows discrimination so long as it is reasonable.

The migration from a strict nondiscrimination rule (subject, however, to reasonable network management) to a rule against “unreasonable” discrimination can be seen in the interim documents.  The Verizon-Google proposal, which called for a “Non-Discrimination Requirement,” nonetheless worded the requirement to ban only “undue discrimination against lawful Internet content, application, or service in a manner that causes meaningful harm to competition or to users.” (emphasis added)

Rep. Waxman’s draft bill, likewise, would have applied a somewhat different standard for wireline providers, who “shall not unjustly or unreasonably discriminate in transmitting lawful traffic over a consumer’s wireline broadband Internet access service,” also subject to reasonable network management.

Over time, the FCC recognized the error of its original draft and now agrees “with the diverse group of commenters who argue that any nondiscrimination rule should prohibit only unreasonable discrimination.” (¶ 77)

As between the suggested limiting terms “undue,” “unjust” and “unreasonable,” the FCC chose the last for the final rule.  Though many have complained that “unreasonable” is a nebulous, subjective term, it should be noted that of the three it is the only one with an understood (if not entirely clear) legal meaning, particularly in the context of the FCC’s long history of rulemaking and adjudication.

The earliest railroad regulations, for example, which also laid the groundwork for the FCC’s eventual creation and its authority over the communications industries, required reasonable rates of carriage, and empowered the Interstate Commerce Commission to intervene and eventually set the rates itself, much as the FCC later did with telephony.

One lesson of the railroad and telephone histories, however, is the danger of turning over to regulators decisions about what behaviors are reasonable. (Briefly, regulatory capture often ends up leaving the industry unable to respond to new forms of competition from disruptive technologies, with disastrous consequences.)

The V-G proposal gets to the heart of the problem in the text I italicized.  Despite the negative connotations of the word in common use, “discrimination” isn’t inherently bad. As the Report makes clear, in managing Internet access and network traffic, there are many forms of discrimination—which means, after all, affording different treatment to different things—that are entirely beneficial to overall network behavior and to the consumer’s experience with the Internet.

The draft rule, as the FCC now admits (see ¶ 77 of the Report), was dangerously rigid.  If any behavior should be regulated, it is the kind of discrimination whose principal purpose is to harm competition or users—though that kind of behavior is already illegal under various antitrust laws.

For one thing, users may want some kinds of traffic – e.g., voice and video – to receive higher priority than text and graphics, which do not suffer from latency problems.  Companies operating Virtual Private Networks for their employees may likewise want to limit Web access to selected sites and activities for workers while on the job.

A strict nondiscrimination rule would have also discouraged or perhaps banned tiered pricing, harming consumers who do not need the fastest speeds and the highest volume of downloads to accomplish what they want to do online.  (Without tiered pricing, such consumers effectively subsidize power users who, not surprisingly, are the most vociferous objectors to tiered pricing.)

Discrimination may also be necessary to manage congestion during peak usage periods or when failing nodes put pressure on the backbone.  Discrimination against spam, viruses and other malware, much of which is not “lawful,” is also permitted and indeed encouraged.  (See ¶ 90-92.)

By comparison, the Report notes (¶ 75) three types of provider discrimination that are of particular concern.  These are:  discrimination that harms competitors (e.g., over-the-top VoIP providers such as Skype or Vonage, whose services compete with the provider’s own telephone service), “inhibiting” end users from accessing content, services, and applications of their choice (but see the no-blocking rule, above, which already covers this), and discrimination that “impairs free expression,” including slowing or blocking access to a blog whose message the broadband provider does not approve of.

On that last point, however, it’s important to note that Congress has already given broadband providers (and others) broad freedom to filter and otherwise curate content they do not approve of or which they believe their customers don’t want to see.  Under Section 230 of the Communications Decency Act,

“No provider or user of an interactive computer service shall be held liable on account of . . . any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

The goal of Section 230 was to immunize early Internet providers, including CompuServe and Prodigy, from liability for their efforts to exercise editorial control over message boards whose content was provided by customers themselves.  But it gives providers broad discretion in determining what kind of content they believe their customers don’t want to see.  So long as the filtering is undertaken in “good faith” (e.g., not with the intent of harming a competitor), there is no liability for the provider, who does not, for example, become a “publisher” for purposes of defamation law.

The FCC (¶ 89) acknowledges the limit that Section 230 puts on the discrimination rule.

On the harm-to-competitors prong, the FCC waffles (see ¶ 76) on whether “pay for priority,” the bugaboo that launched the neutrality offensive in the first place, actually constitutes a violation of the rules.  While a broadband provider’s offering to prioritize the traffic of a particular source for a premium fee “would raise significant cause for concern,” the agency acknowledges that such behavior has occurred and thrived for years in the form of third-party Content Delivery Networks.  (See footnote 236.)  CDNs are allowed.   (More on CDNs in the next post.)

So in the end the discrimination rule doesn’t appear to add much to the blocking rule or existing antitrust laws.  Discrimination against competing over-the-top providers would violate antitrust law.  Blocking or slowing access to disfavored content is already subject to the blocking rule.  And interfering with the “free expression” rights of users is already largely permitted under Section 230.

What’s left?   “The rule rests on the general proposition,” the agency concludes (¶ 78), “that broadband providers should not pick winners and losers on the Internet,” even when doing so is independent of competitive interests.  What exactly this means—and how “reasonable” discrimination will be judged in the course of enforcing the rules—remains to be seen.

Next:  The exceptions and what they say about the real purpose of the rules