Category Archives: Digital Life

The Italian Job: What the Google Convictions are Really About

I was pleased to be interviewed last night on BBC America World News (live!) about the convictions of three senior Google executives by an Italian court for privacy violations.  The case involved a video uploaded to Google Videos (before the acquisition of YouTube) that showed the bullying of a person with disabilities. (See “Larger Threat is Seen in Google Case” by the New York Times’ Rachel Donadio for the details.)

Internet commentators were up in arms about the conviction, which can’t possibly be reconciled with European law or common sense.  The convictions won’t survive appeal, and the government knows that as well as anyone.  It neither wants nor intends to win this case.  If it did, it would mean the end of the Internet in Italy, if nothing else. Still, the case is worth worrying about, for reasons I’ll make clear in a moment.

But let’s consider the merits of the prosecution. Prosecutors bring criminal actions because they want to change behavior—behavior of the defendant and, more important given the limited resources of the government, others like him.  What behavior did the government want to change here?

The video was posted by a third party. Within a few months, the Italian government reported to Google its belief that the video violated the privacy rights of the bullying victim, and Google took it down. The company also cooperated in helping the government identify who had posted it, which in turn led to the bullies themselves.

The only thing the company did not do was screen the video before posting it. The Google executives convicted in absentia had no personal involvement in the video. They were prosecuted for what the company did not do, and for what they did not do personally.

So if the prosecution stands, it leads to a new rule for third-party content: to avoid criminal liability, company executives must personally ensure that no hosted content violates the rights of any third party.

In the future, the only way employees of Internet hosting services of all kinds could avoid criminal prosecution would be to pre-screen all user content before putting it on their websites.  And pre-screen it for what?  Any possible violation of any possible rights.  So not only would they have to review the content with an eye toward the laws of every possible jurisdiction, but they would also need to obtain releases from everyone involved, and to ensure those releases were legally binding. For starters.

It’s unlikely that such filtering could be done in an automated fashion. It is true that YouTube, for example, filters user postings for copyright violations, but that is only because the copyright holders give them reference files that can be compared. The only instruction this conviction communicates to service providers is “don’t violate any rights.” You can’t filter for that!
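The reason copyright filtering is tractable, and "rights" filtering in general is not, comes down to having something concrete to match against. Here is a minimal sketch of the reference-file approach; the fingerprints and names are invented, and real systems like YouTube's use perceptual audio/video fingerprints rather than the exact hashes used here for simplicity:

```python
import hashlib
from typing import Optional

# Hypothetical reference database: fingerprints supplied in advance by
# rights holders. A cryptographic hash stands in for the perceptual
# fingerprints real systems use.
REFERENCE_FINGERPRINTS = {
    hashlib.sha256(b"claimed-work-1").hexdigest(): "Studio A",
    hashlib.sha256(b"claimed-work-2").hexdigest(): "Label B",
}

def check_upload(content: bytes) -> Optional[str]:
    """Return the rights holder whose reference file matches, if any."""
    fingerprint = hashlib.sha256(content).hexdigest()
    return REFERENCE_FINGERPRINTS.get(fingerprint)

# The filter can only flag what rights holders have registered in advance.
print(check_upload(b"claimed-work-1"))   # matches a reference file
print(check_upload(b"some-other-video"))  # no match: nothing to compare against
```

The lookup is mechanical: a match requires a pre-registered reference. A privacy or defamation violation has no reference file, which is why "don't violate any rights" is not something a filter can implement.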

The prosecutor’s position in this case is that criminal liability is strict—that is, that it attaches even to third parties who do nothing beyond hosting the content.

If that were the rule, there would of course be no Internet as we know it. No company could possibly afford to take that level of precaution, particularly not for a service that is largely or entirely free to users. The alternative is to risk prison for any and all employees of the company.

(The Google execs got sentences of six months in prison each, but they won’t serve them no matter how the case comes out. In Italy, sentences of less than three years are automatically suspended.)

And of course that isn’t the rule.  Both the U.S. and the E.U. wisely grant immunity to services that simply host user content, whether it’s videos, photos, blogs, websites, ads, reviews, or comments. That immunity has been settled law in the U.S. since 1996 and the E.U. since 2000. Without that immunity, we simply wouldn’t have–for better or worse–YouTube, Flickr, MySpace, Twitter, Facebook, Craigslist, eBay, blogs, user reviews, comments on articles or other postings, feedback, etc.

(The immunity law, as I write in Law Five of “The Laws of Disruption,” is one of the best examples of the kind of regulating that encourages rather than interferes with emerging technologies and the new forms of interaction they enable.)

Once a hosting service becomes aware of a possible infringement of rights, to preserve immunity most jurisdictions require a reasonable investigation and (assuming there is merit to the complaint), removal of the offending content. That, for example, is the “notice and takedown” regime in the U.S. for content that violates copyright.

The government in this case knows the rule as well as anyone.  This prosecution is entirely cynical—the government neither wants to nor intends to win on appeal.  It was brought to give the appearance of doing something in response to the disturbing contents of the video (the actual perpetrators and the actual poster have already been dealt with). Google in this sense is an easy target, and a safe one in that the company will vigorously fight the convictions until the madness ends.

Not unrelated, the case underscores a message the Italian government has been sending any way it can to those forms of media it doesn’t already control: that it will use whatever means are at its disposal, including the courts, to intimidate sources it can’t yet regulate.

So in the end it isn’t a case about liability on the Internet so much as a case about the power of new media to challenge governments that aren’t especially interested in free speech.

Internet pundits are right to be outraged and disturbed by the audacious behavior of the government. But they should be more concerned about what this case says about freedom of the press in Italy, and less about what it says about the future of liability for content hosts.

And what it says about the Internet as a powerful, emerging form of communication that can’t easily be intimidated.

Note to eBay: A Chink in the Amazon Armor?

I don’t usually blog “personal” stories, but this one is irresistible.  It raises disturbing questions at the border of digital and physical life, and legal problems of trademark and the emerging issues of cloud computing and data liability.

EBay, as everyone knows, has been struggling to improve its customer experience in light of disappointing results over the last few years. One problem in particular that the company has worked hard to address is sellers who either misrepresent their items or otherwise underperform in the transaction, tarnishing eBay’s image in the process.

There are of course legal consequences to some of these problems as well. EBay has been the subject of numerous lawsuits in the U.S. and abroad from trademark holders claiming that eBay sellers are offering knock-off or forged goods as branded merchandise, or selling items outside the often-strict terms under which authorized merchants may sell branded goods. (For example, selling outside assigned geographic territory, or selling below the authorized price or terms.)

I’ve written extensively about the eBay litigation, including lawsuits brought by Tiffany in the U.S. and the Louis Vuitton brands in France. The question in these cases comes down to a definition of what eBay actually “is”—a department store responsible for the merchandise sold on its premises (liable) or a community bulletin board offered as a convenience to connect buyers and sellers of a variety of unrelated products and services (not liable).

EBay is neither of these things—it is an example of a new kind of virtual marketplace enabled by digital technology. But the law here, as elsewhere, has not kept up with the changing realities of digital life, leaving judges to struggle with analogies that just don’t fit. EBay has scored strong victories in the U.S., and significant losses abroad. Whatever the results in these cases, the legal reasoning is always hopeless and the opinions useless as precedent. The law evolves slowly.

(In a new twist, just the other week eBay was ordered to pay over $300,000 by a French court in another dispute with Louis Vuitton. This one involved eBay’s practice of purchasing advertising keywords that were common misspellings of LVMH marks that directed searches to eBay. EBay is appealing.)

Amazon’s third-party Marketplace, which has eaten into eBay’s market significantly over the years, has largely avoided these public legal skirmishes. Brand holders and their distributors may prefer to sell through Amazon rather than eBay, giving them an incentive not to litigate when problems do arise. Amazon also manages a much smaller and generally more professional group of third-party merchants than eBay does and, it appears, exercises more vigorous policing over items that appear under the Amazon banner but are in fact sold and distributed by third parties.

Well, maybe not. Recently I purchased a replacement camera battery from an Amazon third party merchant. (You can find the listing here, though I strongly suspect it will be changed or disabled very shortly for reasons that will become clear in a moment.)

Dissatisfied with after-market batteries I have purchased in the past, I decided this time to buy an actual Minolta battery for my Minolta camera. The listing I purchased from described the item as a “Konica Minolta DiMAGE X Replacement Battery,” and even had a link immediately below to “Other products by Konica-Minolta” (sic), which went to a page of authorized, branded goods from the electronics giant. Overall, the listing gives several indications that suggest an actual, new Minolta battery is being offered.

I received the battery today, only to find that it was an OEM battery of completely unknown quality and a completely different brand. (I haven’t bothered to test it yet, though in any case my problem with OEM batteries is that they often stop holding a charge after a month or two of use.) The purchase price was small, and the nonrefundable shipping and handling represented about a third of the total cost. Still, I wrote to the merchant to request a return authorization.

The merchant called immediately with an unusual story to tell. He claimed that his original listing was entirely accurate, and that the title listed his house brand name (“Wasabi”) and not Konica Minolta. But, he advised, Amazon allows other merchants who sell “the same item” to include themselves as a seller of the item and to make modifications to the page.  Another seller, he claimed, changed the listing to misrepresent the item.

Even if another seller makes changes that are inaccurate or, as in this case, obviously fraudulent and infringing of strong trademarks, the merchant complained, there is nothing the original seller can do. He claims that when he has advised Amazon of similar changes in the past, Amazon has often failed to correct the listings and told him there was nothing he could do about it, because later changes take priority.

There is indeed a second seller listed as offering this item, though the merchant I purchased from is still the default seller on the page and, in fact, the second seller offers “the same item” at a higher price. (It’s unclear what the second seller actually sells–a Konica Minolta battery or a different after-market compatible battery. Amazon has several other pages offering several other after-market batteries.) There were no reviews of the item until I wrote one today noting the misrepresentation.

The merchant indicated that he hadn’t noticed this particular error (he sells a great deal of after-market items) but that when he received my complaint he immediately requested Amazon correct the page. Amazon, he said, rejected the changes. (As of the end of the day, the original page is still intact.) He offered to—and later did—fully refund my purchase including the shipping and handling, and told me to keep the item anyway. I told him I would contact Amazon, which he encouraged me to do, and asked me to call him back if they didn’t confirm everything he had told me.

Amazon denied everything he told me. Specifically, the customer service representative told me that he would file a complaint against the merchant but that, “I can tell you that what he told you is completely inaccurate.” (I don’t think the call was being recorded—in any case, I wasn’t notified if it was.)

Worse, when I asked them to get the merchant on the call, the customer service representative agreed but told me that “for legal purposes” he would not be able to confirm or deny anything he had previously said to me once the merchant got on the call. (That strikes me as the kind of “urban myth” legal advice that has no actual value but which gets passed along all the time.) True to his promise, the Amazon CSR listened politely as the merchant repeated his explanation of a serious breakdown in Amazon’s process, told in the presence of a customer, and neither confirmed nor denied it, much to the merchant’s frustration.

After the call, the merchant emailed me the following quotation from, he says, the Seller Support page’s “Detail Page Control” process:

In most categories, multiple sellers sell the same product through a single detail page. This provides an organized, uniform product presence in our catalog and increases the convenience of comparison shopping for potential buyers.

The information displayed on an Amazon single detail page, called “reconciled” data, is drawn from multiple seller contributions. When a seller contributes product information to an existing item in our catalog, a decision is made about whether or not to display any changes to the product details on the single detail page. This decision is processed automatically according to business logic known as “Detail Page Control.” Detail Page Control determines which of the available product descriptions, features, titles, and additional details are displayed on the single detail page.

The selection is made based on which contributing seller has greater Detail Page Control as determined by our automated system. This could be Amazon or any seller offering the item. Detail Page Control rankings are not modified manually, but are regularly reviewed and updated automatically by our system. Some factors that affect Detail Page Control are a seller’s sales volume, refund rate, buyer feedback, and A-to-z Guarantee claims.

I can’t say if this is actually what the Amazon page says (it is behind a firewall for sellers) or what, in addition, the page says regarding fraudulent information and information that constitutes potential trademark infringement or other actionable unfair trade practices. Nor is it clear what controls exist over merchants claiming to sell “the same product” but who in fact sell something different. Nor can I say why, when asked directly by the merchant on the call to assure the customer (me) that the merchant was accurately describing Amazon’s process, the CSR refused to “confirm or deny anything.”
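If the quoted description is accurate, the selection step amounts to an automated ranking over seller metrics. A hypothetical sketch of that kind of business logic follows; the field names, weights, and numbers are entirely invented for illustration, since Amazon's actual "Detail Page Control" rules are not public:

```python
from dataclasses import dataclass

@dataclass
class SellerContribution:
    seller_id: str
    title: str             # the product title this seller contributed
    sales_volume: float    # normalized 0-1 (invented scale)
    refund_rate: float     # fraction of orders refunded
    feedback_score: float  # normalized 0-1
    guarantee_claims: int  # A-to-z Guarantee claims filed

def control_score(s: SellerContribution) -> float:
    """Hypothetical ranking; the weights are my invention."""
    return (0.5 * s.sales_volume
            + 0.3 * s.feedback_score
            - 0.15 * s.refund_rate
            - 0.05 * s.guarantee_claims)

def displayed_title(contributions: list) -> str:
    # Whoever ranks highest controls what every buyer sees, including a
    # later contributor who mislabels "the same item."
    winner = max(contributions, key=control_score)
    return winner.title

sellers = [
    SellerContribution("original", "Wasabi Replacement Battery",
                       0.4, 0.02, 0.9, 0),
    SellerContribution("later", "Konica Minolta DiMAGE X Battery",
                       0.8, 0.05, 0.7, 1),
]
print(displayed_title(sellers))
```

The point of the sketch is structural: once the winner is chosen automatically and "rankings are not modified manually," a high-volume later contributor can overwrite an accurate title, which is exactly the failure the merchant described.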

The description of the system, if accurate, suggests a significant level of control exercised by Amazon over the accuracy and quality of the content of its listings. It goes beyond what I understand to be the level of control exercised by eBay.

Could this prove definitive in trademark disputes brought by companies such as, oh I don’t know, Konica Minolta? Perhaps so. Could it serve as evidence eBay could use to make the case that it is “less” of a storefront than Amazon in eBay’s own litigation? Well, I’d certainly offer it were I representing eBay. (I do not represent eBay. Or Amazon. Or Konica Minolta. Or the merchant in this case.)

The merchant, understandably upset if he is telling the truth, wrote that he owns a small business that provides for his wife and son, and that he is greatly distressed I have had a bad experience with him for which he cannot get Amazon to take the blame. Though I am a long-time and very satisfied Amazon customer, I have had enough bad experiences with Amazon’s third-party Marketplace resellers to suspect he is telling the truth here.

That is, it seems plausible to me that another seller sabotaged his listing and that Amazon’s processes aren’t good enough to correct that behavior on a timely basis. A few well-placed lawsuits by brand holders ought to take care of the problem, if in fact there is a problem. But even an honest merchant is a small cog in a big machine, with little recourse except perhaps to move his business to another marketplace. (But then he would have to learn another perhaps imperfect system and would lose all the reputation value of his nearly 4,000 customer reviews.)

One disturbing detail, however. After the calls I went back and looked at the sales receipt that accompanied the battery, which was shipped by the merchant and not Amazon. The receipt, printed under the merchant’s letterhead, includes an SKU that I assume to be the merchant’s and not Amazon’s. In any case, the item description repeats the description on the Amazon page, that is, it describes the item as a Konica Minolta battery and not the merchant’s house branded OEM.

I asked the merchant by way of follow-up to explain his Sales Receipt. He writes, “We use the title of the product (as listed on Amazon) on our printed sales receipt. If the product title changes on Amazon, so too will the description as it appears on our sales receipt.”

I suspect what he means is that either he uses Amazon’s systems or that his system pulls its data on-demand from Amazon. If so, here’s another interesting legal problem raised by the move to cloud computing. Who’s responsible when bad data generates actionable misrepresentations?

The Other Side of Privacy

After attending last week’s Federal Trade Commission online privacy roundtable, I struggled for several days to make some sense out of my notes and my own response to calls for new legislation to protect consumer privacy. The result was a 5,000-word article, too long for nearly anyone to read. More on that later.

Even as the issue of privacy continues to confound much brighter people than me, however, the related problem of securing the Internet has also been getting a great deal of attention. This is in part due to the widely-reported announcement from Google that its servers and the Gmail accounts of Chinese dissidents had been hacked, leading the company to threaten to leave China altogether if its government continues to censor search results.

Both John Markoff of the New York Times and Declan McCullagh of CBS Interactive have also been back on the beat, publishing some important stories on the state of American preparedness for cyberattacks (not well prepared, they conclude) and on the continued tension between privacy and law enforcement. See in particular Markoff’s stories on Jan. 26 and on Feb. 4th and McCullagh’s post on Feb. 3.

Markoff reports a consensus view that the U.S. does not have adequate defensive and deterrent capabilities to protect government and critical infrastructure from cyberattacks. Even worse, after years of effort and studies, the author of the most recent effort to craft a national strategy told him “We didn’t even come close.”

Markoff reports that Google has now asked the National Security Agency to investigate the attacks that led to its China announcement and the subsequent exchange of hostile diplomacy between the U.S. and China. Dennis C. Blair, the director of national intelligence, told Congress earlier this week that “Sensitive information is stolen daily from both government and private-sector networks….”

That finding seems to be buttressed by findings in a new study sponsored by McAfee. As Elinor Mills of CNET reported, 90% of survey respondents from critical infrastructure providers in 14 countries acknowledged that their enterprises had been the victim of some kind of malware. Over 50% had experienced denial of service attacks.

These attacks and the lack of adequate defenses are leading companies and law enforcement agencies to work more closely, if only after the fact. But privacy advocates, including the Electronic Frontier Foundation and the Electronic Privacy Information Center, are concerned about increasingly cozy relations between major Internet service providers and law enforcement agencies including the NSA.

They are likely to become apoplectic, however, when they read McCullagh’s post. He reports that a federal task force is about to release survey results suggesting that law enforcement agencies would like an easier interface for requesting customer data from cell phone carriers, and rules that would require Internet companies to retain user data “for up to five years.”  The interface would replace the time-consuming and expensive paper warrant processes now necessary for investigators to gain access to customer records.

Privacy advocates and law enforcement agencies are simply arguing past each other, with Internet companies trapped in the middle. Unmentioned at the FTC hearing—largely because law enforcement is out of the scope of the agency’s jurisdiction—is the legal whipsaw that Internet companies are currently facing. On the one hand, privacy and consumer regulators in the U.S., Europe and elsewhere are demanding that information collectors, including communications providers, search engines and social networking sites, purge personally-identifiable user data from their servers within 12 or even 6 months.

At the same time, law enforcement agencies of the very same governments are asking the same providers to retain the very same data in the interest of criminal investigations. Frank Kardasz, who conducted the law enforcement survey, wrote in 2009 that ISPs who do not keep records long enough “are the unwitting facilitators of Internet crimes against children.” Kardasz wants laws that “mandate data preservation and reporting,” perhaps for as long as five years.

ISPs and other Internet companies are caught between a rock and a hard place. If they retain user data they are accused of violating the privacy interests of their consumers. If they purge it, they are accused of facilitating the worst kinds of crime. This privacy/security schizophrenia has led leading Internet companies to the unusual position of asking for new regulations, if only to make clear what it is governments want them to do.

The conflict becomes clear just by considering one lurid example (the favorite variety of privacy advocates on both sides) that was raised repeatedly at the FTC hearing last week. As long as service providers retain data, the audience was told, there is the potential for the perpetrators of domestic violence to piece together bits and pieces of that information to locate and continue to terrorize their victims. Complete anonymization and deletion, therefore, must be mandated.

But turn the same example around and you reach the opposite conclusion. While the victim of the crime is best protected by purging, capturing and prosecuting the perpetrator is easiest when all the information about his or her activities has been preserved. Permanent retention, therefore, must be mandated.

This paradox would be easily resolved, of course, if we knew in advance who was the victim and who was the perpetrator. But what to do in the real world?

For the most part, these and other sticky privacy-related problems are avoided by compartmentalizing the conversation—that is, by talking only about victims or only about perpetrators. As Homer Simpson once said, it’s easy to criticize, and fun too.

Unfortunately it doesn’t solve any problem, nor does it advance the discussion.

The Growth of Digital Life in Numbers

As I write in The Laws of Disruption, the pace with which digital life is developing and expanding is easy to measure but impossible to comprehend.  Changes in the ways in which we interact, experience entertainment and other information content, and exist as citizens of a digital realm happen so fast they outstrip our ability to stand back and observe them.

A website called Royal Pingdom has published some interesting metrics for 2009 Internet activity, which might help the quantitatively-minded to get their heads around the information revolution.

Here are a few that stood out for me:

– 90 trillion emails were sent in 2009 by 1.4 billion users.  That’s the good news.  The bad news is that 81% of those emails were spam.  It’s amazing that despite all that wasted traffic (much of it is stopped before it ever reaches a user’s mailbox), the network continues to function at ever-higher levels of performance.

– There are now 1.73 billion Internet users worldwide.  Despite the economic chaos of last year (or perhaps in part because of it), that number represents an 18% increase in the number of users.  Nearly 1 billion of those users are in Asia.

– YouTube now serves up 1 billion videos each day.  It would, I think, greatly aid the debate over copyright and “piracy” to know what percentage of those videos are legally licensed to YouTube.  My guess is that it’s much higher than most people would expect.

Bret Swanson at Digital Society posted more statistics showing the growth of Internet activity over the course of the entire decade.  My favorite:  Google’s index of pages in 2000 covered 1 billion website pages.  By 2008 the number was up to 1 trillion.
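For the quantitatively-minded, the index figures imply a remarkable annual growth rate. A quick back-of-envelope check of the numbers quoted above:

```python
# Google's index: 1 billion pages in 2000, 1 trillion in 2008.
start_pages, end_pages, years = 1e9, 1e12, 8

# Annualized growth factor implied by a 1000x increase over 8 years.
annual_factor = (end_pages / start_pages) ** (1 / years)
print(f"{annual_factor:.2f}x per year")  # ~2.37x: more than doubling every year
```

A thousandfold increase in eight years means the index more than doubled every single year of the decade, which is perhaps the closest one can get to visualizing it.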

These are not numbers I can visualize.  Can anybody else?

The White House’s New Internet Policy, and thoughts on Comcast v. FCC

I published the first of two pieces on CNET today about interesting and even encouraging developments in Washington over Internet policy. (See “New Year, New Policy Push for Universal Broadband”)

In short, I believe that over the past year the Obama administration has come to see Internet products and services as one of the best hopes for economic recovery and continued competitiveness for U.S. businesses.  At least as a matter of policy, this is the first administration to see digital life as a source of competitive advantage.

Tomorrow’s piece concerns the “spectrum crisis” and what the federal government hopes to do to solve it. (The federal government “owns” the radio waves, after all.)

Cut due to the length of the piece was a longer analysis of the arguments a few weeks ago in the U.S. Court of Appeals for the D.C. Circuit in Comcast v. FCC, in which cable provider Comcast challenged a sanction the FCC issued in 2008 for the company’s attempts to limit use by some customers of peer-to-peer applications including BitTorrent.

Depending on how the court rules, the commission’s proposed Net neutrality rules could be dead in the water for lack of authority to regulate in this space.

Here’s the longer version of that section:

The fate of net neutrality may depend on the outcome of an important development that took place during CES, but at the other end of the country. It involved litigation over the one notorious instance of non-neutral behavior that largely reawakened the net neutrality debate in 2008.

Comcast admitted that it had used some fairly clumsy techniques to limit the speed, or sometimes the availability, of peer-to-peer Internet services that allow users to share very large files, notably via the BitTorrent protocol. In the wake of that revelation, Comcast agreed to change its practices and make them more transparent, and made peace with BitTorrent’s developers. (It is also in the process of settling a class-action lawsuit brought by Comcast customers affected by the limits.)

The FCC issued a non-monetary sanction against the company, claiming that the techniques violated net neutrality principles which, while not formally enacted by the FCC, nonetheless applied to Comcast. Comcast challenged the sanctions in the U.S. Court of Appeals for the D.C. Circuit, which hears all challenges to FCC rulings.

On Jan. 8th, hours before Chairman Genachowski took the stage at CES, the D.C. Circuit heard oral arguments in Comcast’s appeal. As was widely reported, the three-judge panel considering the appeal questioned government lawyers severely.

Some statements from the judges suggested they were skeptical at best about the Commission’s authority to sanction Comcast. Chief Judge Sentelle, who sat on the panel, complained about the Commission’s view of its own authority. “You can’t get an unbridled, roving commission to go about doing good,” he was reported to have said. Judge Randolph, another panelist, complained that the lawyers for the FCC could not “identify a specific statute” that authorized the sanction.

The case is certainly important, though too much was read into the tone of the arguments by a number of mainstream media sources. As a former circuit court law clerk, I can attest to the wisdom of FCC Commissioner McDowell’s warning at CES not to draw conclusions about the outcome of a case from comments, or even the appearance of hostility, by appellate judges at oral argument. (McDowell, notably, was one of the Commissioners who dissented from the Comcast sanction, on the grounds that the FCC did not have the authority to issue it.)

Wired, for example, ran the extreme headline: “Court to FCC: You Don’t Have Power to Enforce Net Neutrality.”

The accompanying article was a little less hyperbolic, but still misleading: “A federal appeals court gave notice Friday it likely would reject the Federal Communications Commission’s authority to sanction Comcast for throttling peer-to-peer applications.” That was an interpretation of the arguments echoed in many publications.

There is, however, no way to predict from the oral arguments how appellate judges are “likely” to rule. They may have just been in a bad mood, or annoyed with the government’s lawyers for reasons unrelated to the merits of the appeal. Unlike political office holders, federal judges are appointed for life, and do not measure their questions or comments at oral arguments to signal how they are likely to rule in a case.

Indeed, the judges may have objected not so much to the conclusion urged by the FCC as much as the line of reasoning the Commission followed in its briefs. The Commission may have relied too much on its general authority to regulate communications companies, for example, rather than citing more specific regulatory powers that Congress and the courts have already recognized.

Still, the outcome in this case could have serious repercussions for the proposed net neutrality rules. Why? Most of the FCC’s rulemaking authority comes from longstanding regulatory power over telephone companies, classified as “common carriers” who must follow nondiscriminatory practices overseen closely by the Commission.

But cable Internet providers are not common carriers, and indeed the FCC itself argued that point successfully in a 2005 Supreme Court case. The FCC later determined that traditional phone companies, when offering broadband Internet service, were also not subject to common carrier regulations.

So if broadband Internet services are not subject to common carrier rules, where does the FCC get the authority to propose net neutrality rules in the first place?

The Commission argued both in the Comcast case and in its proposed net neutrality rules that its jurisdiction comes from “traditional ancillary authority,” that is, from authority that is implicit in the governing statute that defines the FCC’s power. The skepticism expressed at oral argument seemed to be focused on the argument that ancillary jurisdiction was all the FCC needed to sanction Comcast’s behavior.

The D.C. Circuit could rule that such authority does not extend so far as to allow the FCC to create or enforce net neutrality. It could also rule more narrowly, and reject the sanctions only on the grounds that they were not issued pursuant to a formal rulemaking–that is, the kind of rules now being considered. Or, of course, the court could agree with the Commission that ancillary authority is sufficient both to issue the sanctions and to enact formal rules.

The proposed rules are not directly at issue in the Comcast case. Even if the court rules that ancillary jurisdiction is insufficient to make net neutrality rules (as, among others, the Electronic Frontier Foundation has argued, see “FCC Perils and Promise”), the FCC could technically still go ahead with its rulemaking. But the court would surely, and quickly, hold that the new rules exceed the agency’s power.

Rather than pass rules that would be dead on arrival, the Commission would likely head back to Congress for explicit authority to define and enforce net neutrality regulations. Since 2007, several bills have been floating around committees that would grant precisely that power (and, indeed, mandate that the FCC use it), but none of them has yet been reported out. There are also bills explicitly forbidding the FCC from enacting net neutrality rules, also sitting in committee.

Net Neutrality Doublespeak: Deep Packet Inspection is a Bad Idea, Except When it Isn’t

An interesting tempest in a teapot has emerged this week following some overblown rhetoric by and in response to celebrity causemeister Bono. There’s a deeper lesson to the incident, however, one with important implications for the net neutrality debate. (More on that in a moment.)

In a New York Times op-ed column on Jan. 2, 2010, Bono provided “10 ideas that might make the next 10 years more interesting, healthy or civil.” These include the salvation of the entertainment industry from the clutches of peer-to-peer file sharers, who are just a few turns of Moore’s Law away from being able to “download an entire season of ‘24’ in 24 seconds.”

“Many will expect to get it for free,” Bono laments, apparently unaware that in the U.S., we don’t have a mandatory license fee for television content as they do in the U.K. (U.K. residents pay a £142.50 annual tax, the principal source of income for the BBC.) So long as you watch “24” when Fox broadcasts it, you will expect to, and indeed will, get it “for free,” without breaking any laws whatsoever. Hooray for America.

Bono’s proposal to solve this problem, also factually challenged, is to force ISPs to clean up the illegal sharing of copyrighted content:

We’re the post office, they tell us; who knows what’s in the brown-paper packages? But we know from America’s noble effort to stop child pornography, not to mention China’s ignoble effort to suppress online dissent, that it’s perfectly possible to track content. Perhaps movie moguls will succeed where musicians and their moguls have failed so far, and rally America to defend the most creative economy in the world….

As several commentators have already pointed out, America’s “noble effort to stop child pornography” has almost nothing to do with looking inside the broken-up pieces of Internet transactions, a practice known as “deep packet inspection.” Indeed, as I write in Law One (“Convergence”) of The Laws of Disruption, most federal and state efforts at solving that scourge, at least in the online world, have been so broad and clumsy that they instantly fail First Amendment scrutiny. (Another feature of American law that Bono may not fully appreciate.) Congress has tried three times to pass laws on the subject; two were declared unconstitutional, and the third was reined in by the courts until it was almost meaningless.
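For readers unfamiliar with the jargon: “shallow” inspection reads only a packet’s addressing headers, the way a post office reads an envelope, while deep packet inspection opens the envelope and reads the contents. A toy sketch in Python makes the distinction concrete. (The IPv4 header layout and the BitTorrent handshake signature are real protocol details; the filter itself is purely illustrative, not any ISP’s actual implementation.)

```python
# "Shallow" vs. "deep" packet inspection, in miniature.

def shallow_inspect(packet: bytes) -> dict:
    """Read only the IPv4 header: the part routers normally look at."""
    ihl = (packet[0] & 0x0F) * 4  # header length in bytes
    src = ".".join(str(b) for b in packet[12:16])
    dst = ".".join(str(b) for b in packet[16:20])
    return {"src": src, "dst": dst, "payload_offset": ihl}

def deep_inspect(packet: bytes) -> bool:
    """Open the envelope: look inside the payload itself.

    Here we match the BitTorrent handshake prefix -- the byte 19
    followed by the string 'BitTorrent protocol'.
    """
    info = shallow_inspect(packet)
    payload = packet[info["payload_offset"]:]
    return b"\x13BitTorrent protocol" in payload
```

The shallow function answers only “who is talking to whom”; the deep one answers “about what,” which is precisely the step that raises the legal and political stakes.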

State efforts have been even more poorly crafted. I write in the book about Pennsylvania’s Child Sexual Exploitation Unit, formed in 2008 by act of the Pennsylvania legislature. Staffed by three former state troopers, the CSEU “analysts” surfed the web looking for sites they felt contained child porn, then wrote letters demanding that ISPs block access to those sites for all their Pennsylvania customers. (For large ISPs, including AOL and Verizon, the easiest way to comply was simply to block the sites for all of their customers, everywhere.)

Aside from the regulators’ lack of any training or standards, the sites that made the list included several hosting services with hundreds or thousands of private websites that had nothing to do with pornography of any kind. By the time the courts put the CSEU out of business a year later, Pennsylvania had banned 1.19 million websites, only 376 of which actually contained content the troopers deemed offensive. (An official geographic survey of Spain and the International Philatelic Society made the banned list.) There was also no mechanism for getting a web address off the list, even if the site’s ownership or content changed.

But that’s a mere quibble, as is the fact that Chinese censorship of content, hardly a “best practice,” apparently involves some 30,000 Internet police and perhaps millions of servers. Even then, the surveillance appears to happen on the back end, after the packets have already been reassembled. (Not surprisingly, China hasn’t exactly published its processes in the Harvard Business Review.)

Regular readers of this blog will be expecting the twist ending, and here it comes. I’m less interested in the misinformed opinions of a musician and humanitarian than in the response his op-ed drew from Internet activists. Gigi Sohn of Public Knowledge characterized Bono’s proposal as “mind-bogglingly ignorant” both as to what really caused the fall of the music industry and as to the technology that would be required for ISPs to become the content police on behalf of copyright owners. Packet filtering, Sohn points out, would lead to “blocking lawful content and encouraging an encryption arms race that would allow filesharing to proceed unabated.” And anyway, the real problem here is overprotective IP laws. (I agree.)
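Sohn’s “encryption arms race” point is easy to demonstrate. In the sketch below, a trivial XOR cipher stands in for the genuine protocol encryption that file-sharing clients actually adopted in response to ISP filtering (BitTorrent’s Message Stream Encryption, for example); the signature-matching filter is a hypothetical stand-in for an ISP-side content filter.

```python
# Why payload filtering invites an arms race: the same content,
# once encrypted, carries no recognizable signature for a filter
# to match.  (XOR here is a toy stand-in for real encryption.)

SIGNATURE = b"\x13BitTorrent protocol"

def filter_blocks(payload: bytes) -> bool:
    """A content filter matching a known protocol signature."""
    return SIGNATURE in payload

def xor_encrypt(payload: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))
```

Run the filter on a plain handshake and it blocks; encrypt the very same bytes first and the filter sees only noise, while the two file-sharing peers, who share the key, decrypt and proceed unabated.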

In a somewhat less hyperbolic vein, Leslie Harris of the Center for Democracy and Technology (CDT) wrote today on The Huffington Post that ISPs are taking concrete and responsible steps to reduce child pornography that don’t include deep packet inspection, and she reiterated Sohn’s point about an encryption arms race.

More interesting, however, is Harris’s warning about the danger of requiring ISPs to exert “centralized control over Internet communications.” Harris writes:

In this country, ISPs do not control what their users send to the Internet any more than a phone company controls the topics of someone’s phone call. Does the U.S. really want to move in the direction of the Chinese model of always-on surveillance? Once we begin to break into all Web traffic to search for copyright violations, evaluating content for its “decency” or appropriateness for children, then analyzing each user’s search habits to determine buying habits and government surveillance without lawful process (remember the NSA warrantless wiretapping) will follow close behind.

The U.S. has the most vibrant, free and innovative Internet because we don’t have gatekeepers in the middle of the network.

Well, at least we don’t yet.

As I’ve pointed out before (see, for example, “Zombieland – The Return of Net Neutrality”) my principal concern with net neutrality is not the idea that information should flow freely on the Internet. That’s a fine principle, and central to the success of this largely unregulated, non-proprietary infrastructure.

Rather, I worry about the unfortunate details of implementation. If net neutrality also means that ISPs are forbidden from offering premium or priority routing within the back-end segments of the network they control (that is, the last mile to the consumer), then it will necessarily fall to the government to monitor, audit, and investigate the flow of packets across the network, if only in response to complaints by consumers of real or perceived non-neutral behavior.

Under the rules proposed in the fall, the FCC has said only that it will investigate complaints of non-neutrality on “a case-by-case basis;” under the proposed Internet Freedom Preservation Act, any consumer would have the right to complain directly to the FCC, which would be required to investigate all complaints within 90 days.

How else can the FCC determine whether some packets are being given priority in defiance of neutrality rules without intercepting at least a random subset of those packets and opening them up?
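To make the point concrete: even the gentlest form of such an audit, one that reads only each packet’s priority marking (the Differentiated Services field in the IPv4 header) rather than its contents, already requires the government to intercept traffic. And because true non-neutral treatment can be applied without any marking at all, a regulator who found nothing in the headers would face pressure to look deeper. The audit below is entirely hypothetical; only the field and its position in the header are real.

```python
# A hypothetical neutrality audit that samples packets and collects
# their DSCP (Differentiated Services) markings, which carriers can
# use to flag some traffic for priority treatment.

def dscp(packet: bytes) -> int:
    """Return the 6-bit DSCP value from an IPv4 header (byte 1, top bits)."""
    return packet[1] >> 2

def audit(sample: list) -> set:
    """Collect the distinct priority markings seen in a traffic sample."""
    return {dscp(p) for p in sample}
```

A sample whose audit returns more than one marking shows differential treatment; but reaching that conclusion meant capturing and examining consumers’ packets in the first place, which is exactly the surveillance Harris warns against.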

Very quickly, the enforcement of net neutrality would lead us into the “model of always-on surveillance,” not by ISPs but, worse, by federal regulators. The opportunities for linking the FCC’s enforcement powers with “government surveillance” would be even more irresistible than they would be if the ISPs were the ones exerting the “centralized control.”

This, of course, is a worst-case scenario, but that is not to say the risk of its becoming reality is particularly low. Indeed, the history of FCC interference with broadcast TV content, a long and sad story that persists to this day, suggests that the worst case is also the most likely.

(On enforcement, Public Knowledge says only that it “supports a Network Neutrality system that can be enforced through a simple complaint process managed by the Federal Communications Commission, where the network operator must bear the burden of demonstrating that any interference with traffic is necessary to support a lawful goal.” Simple for whom? The complainant, not the investigator.)

I agree that the U.S. has the most free and innovative Internet because we don’t have “gatekeepers in the middle of the network.” So why do groups including Public Knowledge and the CDT, who clearly understand the risks of private and–even worse–of public interference with the flow of packets, advocate so strongly in favor of neutrality rules?

Perhaps because, like Bono, they haven’t thought through the implications of their rhetoric.