Category Archives: Digital Life

Fox-Cablevision and The Net Neutrality Hammer

When the only thing you have is a hammer, as the old cliché goes, everything looks like a nail.

Net neutrality, as I first wrote in 2006, is a complicated issue at the accident-prone intersection of technology and policy.  But some of its most determined—one might say desperate—proponents are increasingly anxious to simplify the problem into political slogans with no melody and sound bites with no nutritional value.  Even as—perhaps precisely because—a “win-win-win” compromise seems imminent, the rhetorical excess is being amplified.  The feedback is deafening.

In one of the most bizarre efforts yet to make everything about net neutrality, Public Knowledge issued several statements this week “condemning” Fox’s decision to prohibit access to its online programming from Cablevision internet users.  In doing so, the organization claims, Fox has committed “the grossest violations of the open Internet committed by a U.S. company.”

This despite the fact that the open Internet rules (pick whatever version you like) apply only to Internet access providers.  Indeed, the rules are understood principally as a protection for content providers.  You know, like Fox.

OK, let’s see how we got here.

The Fox-Cablevision Dispute

In response to a fee dispute between the two companies, Fox on Saturday pulled its programming from the Cablevision system, and blocked Cablevision internet users from accessing Fox programming online.  Separately, Hulu.com (minority owned by Fox) enforced a similar restriction, hoping to stay “neutral” in the dispute.  Despite the fact that “The Simpsons” and “Family Guy” weren’t even on this weekend (pre-empted by some sports-related programming, I guess), the viewing public was incensed, journalists wrote, and Congress expressed alarm. The blackout, at least on cable, persists.

A wide range of commentators, including Free State Foundation’s Randolph May, view the spat as further evidence of the obsolescence of the existing cable television regulatory regime.  Among other oddities left over from the days when cable was the “community antenna” for areas that couldn’t get over-the-air signals, cable providers are required to carry local programming without offering any competing content.   But local providers are not obliged to make their content available to the cable operator, or even to negotiate.

As cable technology has flourished in both content and services, the relationship between providers and content producers has mutated into something strange and often unpleasant.  Just today, Sen. John Kerry sent draft legislation to the FCC aimed at plugging some of the holes in the dike.  That, however, is a subject for another day.

Because somehow, Public Knowledge sees the Fox-Cablevision dispute as a failure of net neutrality.  In one post, the organization “condemns” Fox for blocking Internet access to its content.  “Blocking Web sites,” according to the press release, “is totally out of bounds in a dispute like this.”  Another release called out Fox, which was said to have “committed what should be considered one of the grossest violations of the open Internet committed by a U.S. company.”

The Open Internet means everything and nothing

What “open Internet” are they talking about?  The one I’m familiar with, and the one that I thought was at the center of years of debate over federal policy, is one in which anyone who wants to can put up a website, register a domain name, and then be located and viewed by anyone with an Internet connection.

In the long-running net neutrality debate, the principal straw man involves the potential (it’s never happened so far) for Internet access providers, especially large ones serving customers nationally, to make side deals with the operators of some websites (Google, Amazon, Microsoft, Yahoo, eBay, perhaps) to manipulate Internet traffic at the last mile on their behalf.

Perhaps for a fee, in some alternate future, Microsoft would pay extra to have search results from Bing given priority, making it look “faster” than Google.  That would encourage Google to strike a similar deal and, before you know it, only the largest content providers would appear to be worth visiting.
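To make the hypothetical concrete, here is a minimal sketch (in Python, with invented packet labels and a made-up `drain` helper, not any real carrier's implementation) of the strict-priority scheduling such a side deal would amount to at the last mile: traffic tagged as paid always jumps the queue.

```python
import heapq

# Hypothetical illustration only: a strict-priority scheduler of the kind a
# last-mile provider would need in order to favor "paying" traffic. The
# labels and priority values are invented for this example.
PRIORITY = {"paid": 0, "ordinary": 1}  # lower number = served first

def drain(queue):
    """Pop packets in priority order; 'paid' traffic always jumps ahead.

    `queue` is a list of (kind, packet) pairs in arrival order. The arrival
    index is used as a tiebreaker so same-priority packets stay in order.
    """
    heap = [(PRIORITY[kind], i, pkt) for i, (kind, pkt) in enumerate(queue)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]
```

Feed it a mixed queue and the “ordinary” packets always wait: `drain([("ordinary", "a"), ("paid", "b")])` returns `["b", "a"]`. Under sustained load, that is exactly how the smaller sites’ traffic would come to look “slower.”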

That would effectively end the model of the web that has worked so well, where anyone with a computer can be a publisher, and the best material has the potential to rise to the top.  Where even entrepreneurs without a garage can launch a product or service on a shoestring and, if enough users like it, catapult themselves into being the next Google, eBay, Facebook or Twitter.

What does any of this have to do with Fox’s activities over the weekend?

As Public Knowledge sees it, any interference with web content is a violation of the open Internet, even if that interference is being done by the content provider itself! Fox has programming content on both its own site and on the Hulu website, content it places there, like every other site operator, on a voluntary basis.

But, having once made that content available for viewing, according to Public Knowledge, it should be a matter of federal law that they keep it there, and not limit access to it in any way for any consumer anywhere at any time.  It’s only consumers who have rights here:  “Consumers should not have their access to Web content threatened because a giant media company has a dispute over cable programming carriage.”  (emphasis added)

On this view, it’s not content owners who have rights (under copyright and otherwise) to determine how and when their content is accessed.  Rather, it is the consumer who has an unfettered right to access any content that happens to reside on any server with an Internet connection.  Here’s the directory to everything on my computer, dear readers.  Have at it.

The “Government’s Policy” Explained

Indeed, according to PK, this remarkable view of the law has long since been embraced by the FCC.  “We need to remember that the government’s policy is that consumers should have access to lawful content online, and that policy should not be disrupted by a programming dispute.”

Here’s how Public Knowledge retcons that version of “the government’s policy.”

Until this spring, the 2005 Federal Communications Commission (FCC) policy statement held that Internet users had the right to access lawful content of their choice.  There was no exception in that policy for customers who happened to have their Internet provider caught up in a nasty retransmission battle with a broadcaster.

Said policy statement that was struck down [sic] on April 6 by the U.S. Appeals Court, D.C. Circuit, when Comcast challenged the enforcement of the policy against the company for blocking users of the BitTorrent [sic].

The policy statement was based on the assumption that if there were a bad actor in preventing the consumer from seeing online content, it would be an Internet Service Provider (ISP) blocking or otherwise inhibiting access to content.  In this case, of course, it’s the content provider that was doing the blocking.  It’s a moot point now, but it shouldn’t matter who is keeping consumers away from the lawful content. (emphasis added)

Where to begin?  For starters, the policy statement was not “struck down” in the Comcast case.  The court held (courts do that, by the way, not statements of policy) that the FCC failed to identify any provision of the Communications Act that gave the agency the power to enforce the policy statement against Comcast.

That is all the court held.  The court said nothing about the statement itself, and even left open the possibility that there were provisions that might work but which were not cited by the agency.  (The FCC chose not to ask for a rehearing of the decision, or to appeal it to the U.S. Supreme Court.)

Moreover, there is embedded here an almost willful misuse of the phrase “lawful content.”  Lawful content means any web content other than files that users share with each other without license from the copyright holders, including commercial software, movies, music, and documents.  None of that unlicensed sharing (much of what BitTorrent is still used for, by the way, and the source of the Comcast case in the first place) is “lawful.”  The FCC does not want to discourage ISPs from interfering with, blocking, and otherwise policing access to that unlawful content, and may indeed want to require them to do so.

Here, however, PK reads “lawful content” to mean content that the user has a lawful right to access, which, apparently, is all content—any file on any device connected to the Internet.

But “lawful content” does not somehow confer proprietary rights to consumers to access whatever content they like, whenever and however they like.  The owner of the content, the entity that made it available, can always decide, for any or no reason, to remove it or restrict it.   Lawful content isn’t a right for consumers—it just means something other than unlawful content.

Still, the more remarkable bit of linguistic judo is the last paragraph, in which the 2005 Open Internet policy statement becomes a policy limiting the behavior not of access providers but of absolutely everyone connected to the Internet.

The opposite is utterly clear from reading the policy statement, which addressed itself specifically to “providers of telecommunications for Internet access or Internet Protocol-enabled (IP-enabled) services.”

But that language, according to Public Knowledge, is just an “assumption.”  The FCC actually meant not just ISPs but anyone who could possibly interfere with what content a user can access, which is to say anyone with a website.  When it comes to consumer access to content, it “shouldn’t matter” that the content provider herself decides to limit access.  The content, after all, is “lawful,” and therefore no one can “[keep] consumers away” from it.

The nonsensical nature of this mangling of completely clear language to the contrary becomes even clearer if you try for a moment to take it to the next logical step.  On PK’s view, all content that was ever part of the Internet is “lawful content,” and, under the 2005 policy statement, no one is allowed to keep consumers away from it, including, as here, the actual owners of the content.

So does that mean that having put up this website (I presume the content is “lawful”), I can’t at some future date take it down, or remove some of the posts?

Well, maybe the objection is just to selective limitation.  Having agreed to the social contract that comes with creating a website, I’ve agreed to an open principle (enforceable by federal law) that requires making it freely and permanently available to anyone, anywhere, who wants to view it.  I can’t block users with certain IP addresses, whether that blocking is based on knowledge that those addresses belong to spammers, or to residents of a country with which I am not legally permitted to do business, or, as here, to customers of a company with which I am engaged in a dispute over content in another channel.

But of course selective limitation of content access is a feature of every website.  You know, like the kind that comes with requiring a user to register and sign in (eBay), or to accept cookies that allow the site to customize the user’s experience (Yahoo!), or to pay a subscription fee to access some or all of the information (The Wall Street Journal, The New York Times), or that requires a consumer to see not just the “lawful content” they want but also, side-by-side, advertising or other information that helps pay for the creation and upkeep of the site (Google, everyone else).

Or that allows a user to view a file but not to copy and resell copies of it (streaming media).  Or that limits access to or use of a web service by geography (banking, gambling and other regulated industries).   Or that requires users to grant certain rights to the site provider to use information provided by the user (Facebook, Twitter) in exchange for use of the services.
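Even the bluntest of these restrictions takes only a few lines of code on the operator’s side. A minimal sketch in Python, using only the standard library (the blocked ranges here are documentation-reserved example addresses, not anyone’s actual customers):

```python
import ipaddress

# Hypothetical blocklist: CIDR ranges a site operator has chosen to refuse.
# These are documentation-reserved example ranges (RFC 5737), used here
# purely for illustration.
BLOCKED_RANGES = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def is_blocked(client_ip: str) -> bool:
    """Return True if the client's address falls within any blocked range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in BLOCKED_RANGES)
```

An operator simply drops or redirects requests for which `is_blocked` returns True. Pointed at a rival ISP’s address blocks, this same trivial check is all the alleged “grossest violation” would technically amount to.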

Paradise Lost by the D.C. Circuit’s Comcast Decision

Or maybe just when Fox does it?

Under PK’s view of net neutrality, the Web is a consumer paradise, where content magically appears for purely altruistic reasons and stays forever to frolic and interact.  Fox can’t limit, even arbitrarily and capriciously, who can and cannot watch its programming on the Web.  It must make it freely available to everyone and anyone, or face condemnation by self-appointed consumer advocates who will, as prosecutor, judge and jury, convict them of having committed “the grossest violations” possible of the FCC’s open Internet policy.

That is, if only the law that PK believes represents longstanding “government policy” were still on the books.  For the real tragedy of the Fox-Cablevision dispute is that the FCC is now powerless to enforce that policy, and indeed, is powerless to stop even the “grossest violations.”

If only the D.C. Circuit hadn’t ruled against the FCC in the Comcast case, then the agency would, on this view, be able to stop Fox and Hulu from restricting access to Fox programming from Cablevision internet customers.  Or anyone else.  Ever.

That of course was never the law, and never will be.   More-or-less coincidentally, the FCC has limited jurisdiction over Fox as a broadcaster, but no authority to require it to make its programming available on the web, on demand, to everyone who wants to see it.

Fox aside, there is nothing in The Communications Act that could possibly be thought to extend the agency’s power to policing the behavior of all web content providers, which these days includes pretty much every single Internet user.

Nor did the Open Internet policy statement have anything to say about content providers, period.  If it had, it would have represented an ultra vires extension of the FCC’s powers that would have shamed even the most pro-regulatory cheerleader.  It would never have stood up to any legal challenge (First Amendment?  Fifth Amendment?  For starters…)

Not only does it matter but it certainly “should matter who is keeping consumers away from lawful content.”  When the “who” is the owner of the content itself, they have the right and almost certainly the need to restrict access to some or all consumers, now or in the future, without having to ask permission from the FCC.

And thank goodness.  An FCC with the power to order content providers to make content available to anyone and everyone, all the time and with no restrictions, would surely lead to a web with very little content in the first place.

Who would put any content online otherwise?  Government agencies?  Not-for-profits?  Non-U.S. users not subject to the FCC?   (But since their content would be available to U.S. consumers, who on the PK view have all the rights here, perhaps the FCC’s authority, pre-Comcast, extended to non-U.S. content providers, too.)

Not much of a web there.

No One Believes This—Including Public Knowledge

The wistful nostalgia for life before the Comcast decision is beyond misguided.  No proposal before or since would have changed the fundamental principle that open Internet rules apply to Internet access providers only.

Under the detailed net neutrality rules proposed by the FCC in 2009, for example, the Policy Statement would be extended and formalized, but would still apply only to “providers of broadband Internet access service.”  Likewise the Google-Verizon proposed legislative framework.  Likewise even the ill-advised proposal to reclassify broadband Internet access under Title II to give the FCC more authority—it’s still more authority only over access providers, not just anyone with an iPhone.

(Though perhaps PK is hanging its hopes on some worrisome language in the Title II Notice of Inquiry that might extend that authority, see “The Seven Deadly Sins of Title II Reclassification.”)

Public Knowledge has never actually proposed its own version of net neutrality legislation.  So I guess it’s possible that they’ve imagined all along that the rules would apply to content providers as well as ISPs.

Well, but the organization does have a “position” statement on net neutrality.  And guess what?  It doesn’t line up with their new-found understanding of the 2005 FCC Policy statement either.   Public Knowledge’s own position on net neutrality addresses itself solely to limits and restrictions on “network operators.”   (E.g., “Public Knowledge supports a neutral Internet where network operators may offer different levels of access at higher rates as long as that tier is offered on a nondiscriminatory basis to every other provider.”)

So apparently even Public Knowledge is among the sensible group in the net neutrality debate who reject the naïve and foolish idea that “it shouldn’t matter who is keeping consumers away from the lawful content.”

Did the rhetoric just get away from them over there, or are those who support Public Knowledge’s push for net neutrality really supporting something very different, different even from what the organization says it means by that phrase?  Something that would extend federal regulatory authority to every publisher of content on the web, including you?

I’m not sure which answer is more disturbing.

The Net Neutrality Sausage Factory Ramps up Production

My article for CNET News.com this morning analyzes the “leaked” net neutrality bill from Rep. Henry Waxman, chair of the House Energy and Commerce Committee.  I put leaked in quotes because so many sources came up with this document yesterday that its escape from the secrecy of the legislative process hardly seems dramatic.  Reporters with sources inside Waxman’s office, including The Hill and The Washington Post, expect Waxman to introduce the bill sometime this week.

The CNET article goes through the bill in some detail, and I won’t duplicate the analysis here.  It is a relatively short piece of legislation that makes limited changes to Title I of the Communications Act, giving the FCC only the authority it needs to implement “core” regulations that would allow the agency to enforce violations of the open Internet principles.

With a few notable exceptions, the Waxman bill mirrors both the FCC’s own proposed rulemaking from last October (still pending) as well as the Google-Verizon legislative framework the two companies released in July.  All three begin with the basic open Internet rules that originate with the FCC’s 2005 policy statement, with some version of a content nondiscrimination rule and a transparency rule added.    (There’s considerable controversy over the wording of the nondiscrimination rule; none about transparency.)

The Waxman draft would sunset at the end of 2012.  And it asks the FCC to report to Congress sooner than that if any additional provisions are required to implement key features of the National Broadband Plan, which has sadly been lost to the tempest-in-a-teapot wrangling over net neutrality before and since it was published.

Many commentators who are already condemning Waxman for selling out “real” net neutrality are upset over provisions—including the sensible call for case-by-case adjudication of complaints rather than the fool’s errand of developing detailed rules and regulations in advance—that appear in all three documents.  They either don’t know or don’t care that in that regard Waxman’s bill breaks no new ground.

Where the bill differs most is its treatment of wireless broadband.  The FCC’s October NPRM, albeit with reservations, ultimately concluded that wireless should not be treated any differently than wireline broadband.  Google-Verizon reached the opposite conclusion, encouraging lawmakers to leave wireless broadband out of any new rules, at least until the market and the infrastructure that supports it become more stable.

Waxman’s draft calls for limited application of the rules to wireless broadband, in particular prohibiting carriers from blocking applications that compete with their voice or video offerings.  But it isn’t clear whether that prohibition refers to voice or video offerings over wireless broadband or extends to products (digital voice, FiOS, U-verse) that the wireless carriers offer on their wireline infrastructures.

And the draft also carves out an exception to that rule for app stores, meaning carriers can still control what apps their customers download onto their wireless devices based on whatever criteria (performance, politics, prudery) the app store operator uses.

If net neutrality needs federal legislation and federal enforcement (it does not), then this bill is certainly not the worst way to go about it.  The draft shows considerable evidence of horse-trading and of multiple cooks in the kitchen, leaves confusing holes and open questions, and, so far, doesn’t even have a name.  But at least it explicitly takes Title II reclassification of broadband Internet access, the worst idea to surface in the course of this multi-year debate, off the table.

Reading the draft makes me nostalgic for the “legislative process” course I took in law school with long-time Washington veteran Abner Mikva, who has served in all three branches over his career.  There was the official version of the legislative process—the “Schoolhouse Rock” song, for people of a certain generation—and then there was what really happened.  (See also the classic Simpsons episode featuring a helpful janitor who “resembled” Walter Mondale, “Mr. Spritz Goes to Washington.”)

Beyond the text, in other words, is the shadow theater.  Why is Waxman, a strong net neutrality supporter, leading the charge for a bill that gives up much of what the most extreme elements have demanded?  (Watch for the inevitable condemnation of Waxman that will follow the introduction of the bill, and for Tea Party opposition to any Republican support for it.)  Why has FCC Chairman Julius Genachowski expressed appreciation that Congress is working on a solution, when his own agency has in theory already developed the necessary record to proceed?  Why introduce a bill so close to adjournment, with election results so uncertain?

I have my theories, as does every other policy analyst who covers the net neutrality beat.  But predicting what will happen in politics, as opposed to technology, is a losing proposition.  So I’ll just keep watching, and trying to point out the most egregious misstatements of fact along the way.

The end of software ownership

My article for CNET this morning, “The end of software ownership…and why to smile,” looks at the important decision a few weeks ago in the Ninth Circuit copyright case, Vernor v. Autodesk.  (See also excellent blog posts on Eric Goldman’s blog. Unfortunately these posts didn’t run until after I’d finished the CNET piece.)

The CNET article took the provocative position that Vernor signals the eventual (perhaps imminent) end to the brief history of users “owning” “copies” of software that they “buy,” replacing the regime of ownership with one of rental.  And, perhaps more controversially still, I try to make the case that such a dramatic change is in fact not, as most commentators on the decision have concluded, a terrible loss for consumers but a liberating victory.

I’ll let the CNET article speak for itself.  Here I want to make a somewhat different point about the case, which is that the “ownership” regime was always an aberration, the result of an unfortunate need to rely on media to distribute code (until the Internet) coupled with a very bad decision back in 1976 to extend copyright protection to software in the first place.

The Vernor Decision, Briefly

First, a little background.

The Vernor decision, in brief, took a big step in an on-going move by the federal courts to allow licensing agreements to trump user rights reserved by the Copyright Act.  In the Vernor case, the most important of those rights was at issue:  the right to resell used copies.

Vernor, an eBay seller of general merchandise, had purchased four used copies of an older version of AutoCAD from a small architectural firm at an “office sale.”

The firm had agreed in the license agreement not to resell the software, and had reaffirmed that agreement when it upgraded its copies to a new version of the application.  Still, the firm sold the media of the old versions to Vernor, who in turn put them up for auction on eBay.

Autodesk tried repeatedly to cancel the auctions, until, when Vernor put the fourth copy up for sale, eBay temporarily suspended his account.  Vernor sued Autodesk, asking the court for a declaratory judgment (essentially a preemptive lawsuit) that as the lawful owner of a copy of AutoCAD, he had the right to resell it.

A lower court agreed with Vernor, but the Ninth Circuit reversed, and held that the so-called “First Sale Doctrine,” codified in the Copyright Act, did not apply because the architectural firm never bought a “copy” of the application.  Instead, the firm had only paid to use the software under a license from Autodesk, a license the firm had clearly violated.  Since the firm never owned the software, Vernor acquired no rights under copyright when he purchased the disks.

The Long Arm of Vernor?

This is an important decision, since all commercial software (and even open source and freeware software) is enabled by the producer only on condition of acceptance by the user of a license agreement.

These days, nearly all licenses purport to restrict the user’s ability to resell the software without permission from the producer.  (In the case of open source software under the GPL, users can redistribute the software so long as they pass along the same terms, including the requirement that modifications to the software also be distributed under the GPL.)  Thus, if the Vernor decision stands, used markets for software will quickly disappear.

Moreover, as the article points out, there’s no reason to think the decision is restricted just to software.  The three-judge panel suggested that any product—or at least any information-based product—that comes with a license agreement is in fact licensed rather than sold.  Thus, books, movies, music and video games distributed electronically in software-like formats readable by computers and other devices are probably all within the reach of the decision.

Who knows?  Perhaps Vernor could be applied to physical products—books, toasters, cars—that are conveyed via license.  Maybe before long consumers won’t own anything anymore; they’ll just get to use things, like seats at a movie theater (the classic example of a license), subject to limits imposed—and even changed at will—by the licensor.  We’ll become a nation of renters, owning nothing.

Well, not so fast.  First of all, let’s note some institutional limits of the decision.  The Ninth Circuit’s ruling applies only within federal courts of the western states (including California and Washington, where this case originated).  Other circuits facing similar questions of interpretation may reach different or even opposite decisions.

Vernor may also appeal the decision to the full Ninth Circuit or even the U.S. Supreme Court, though in both cases the decision to reconsider would be at the discretion of the respective court.  (My strong intuition is that the Supreme Court would not take an appeal on this case.)

Also, as Eric Goldman notes, the Ninth Circuit already has two other First Sale Doctrine cases in the pipeline.  Other panels of the court may take a different or more limited view.

For example, the Vernor case deals with a license that was granted by a business (Autodesk) to another business (the architectural firm).  But courts are often hesitant to enforce onerous or especially one-sided terms of a contract (a license is a kind of contract) between a business and an individual consumer.  Consumers, unlike businesses, are unlikely to understand the terms of an agreement, let alone to have any realistic expectation of negotiating over terms they don’t like.

Courts, including the Ninth Circuit, may decline to extend the ruling to other forms of electronic content, let alone to physical goods.

The Joy of Renting

So for now let’s take the decision on its face:  Software licensing agreements that say the user is only licensing the use of software rather than purchasing a copy are enforceable.  Such agreements require only a few “magic words” (to quote the Electronic Frontier Foundation’s derisive view of the opinion) to transform software buyers into software renters.  And it’s a safe bet that any existing End User Licensing Agreements (EULAs) that don’t already recite those magic words will be quickly revised to do so.

(Besides EFF, see scathing critiques of the Vernor decision at Techdirt and Wired.)

So.  You don’t own those copies of software that you thought you purchased.  You just rent them from the vendor, on terms offered on a take-it-or-leave-it basis and subject to revision at will.  All those disks sitting in all those cardboard albums on a shelf in your office are really the property of Microsoft, Intuit, Activision, and Adobe.  You don’t have to return them when the license expires, but you can’t transfer ownership of them to someone else, because you never owned them in the first place.

Well, so what?  Most of those boxes are utterly useless within a very short period of time, which is why there never has been an especially robust market for used software.  What real value is there to a copy of Windows 98, or last year’s TurboTax, or Photoshop Version 1.0?

Why does software get old so quickly, and why is old software worthless?  To answer those questions, I refer in the article to an important 2009 essay by Kevin Kelly.  Kelly, for one, thinks the prospect of renting rather than owning information content is not only wonderful but inevitable, and not because courts are being tricked into saying so.  (Kelly’s article says nothing about the legal aspects of ownership and renting.)

Renting is better for consumers, Kelly says, because ownership of information products introduces significant costs and absolutely no benefits to the consumer.  Once content is transformed into electronic formats, both the media (8-track) and the devices that play them (Betamax) grow quickly obsolete as technology improves under the neutral principle of Moore’s Law.  So if you own the media you have to store it, maintain it, catalog it and, pretty soon, replace it.  If you rent it, just as any tenant, those costs are borne by the landlord.

Consumers who own libraries of media find themselves regularly faced with the need to replace them with new media if they want to take advantage of the new features and functions of new media-interpreting devices.  You’re welcome to keep the 78’s that scratch and pop and hiss, but who really wants to?  Nostalgia only goes so far, and only for a unique subset of consumers.  Most of us like it when things get better, faster, smaller, and cheaper.

In the case of software, there’s the additional and rapid obsolescence of the code itself.  Operating systems have to be rewritten as the hardware improves and platforms proliferate.  Tax preparation software has to be replaced every year to keep up with the tax code.  Image manipulation software gets ever more sophisticated as display devices are radically improved.

Unlike a book or a piece of music, software is written only for the computer to “read” in the first place.  You can always read an old book, whether or not you prefer the convenience of a device such as a Kindle.  But you could never read the object code for AutoCAD even if you wanted to; the old version (which got old fast, and not just to encourage you to buy new versions) is just taking up space in your closet.

The Real Crime was Extending Copyright to Software in the First Place

In that sense, it never made any sense to “own” “copies” of software in the first place.  That was only the distribution model for a short time, necessitated by an unfortunate technical limit of computer architecture that has nearly disappeared.  CPUs require machine-readable code to be moved into RAM in order to be executed.

But core memory was expensive.  Code came loaded on cheap tape, was copied to more expensive disks, and was finally read into even more expensive memory.  In a perfect world with unlimited free memory, the computer would have come pre-loaded with everything.

Even that wouldn’t have solved the obsolescence problem.  The Internet did, by eliminating the need for physical media altogether.  Nearly all the software on my computer was downloaded; when I got a disk at all, it was just to initiate the download and installation.  (The user manual, the other component of the software package, is now found only on the disk or online.)

As we move from physical copies to downloaded software, vendors can more easily and more quickly issue new versions, patches, upgrades, and added functionality (new levels of video games, for example).

And, as we move from physical copies to virtual copies residing in the cloud, it becomes less and less strange to think that the thing we paid for, the thing sitting right there in our house or office, isn’t really ours at all, even though we paid for it, bagged it, transported it, and unwrapped it just as we do all the other commodities that we do own.

That’s why the Vernor decision, in the end, isn’t really all that revolutionary.  It just acknowledges in law what has already happened in the market.  We don’t buy software.  We pay for a service—whether by the month, or by the user, or by looking at ads, or by the amount of processing or storage or whatever we do with the service—and regardless of whether the software that implements the service runs on our computer or someone else’s, or, for that matter, everyone else’s.

The crime here, if there is one, isn’t that the courts are taking away the First Sale Doctrine.  It’s not, in other words, that one piece of copyright law no longer applies to software.  The crime is that copyright—any part of it—ever applied to software in the first place.  That’s what led to the culture of software “packages” and “suites” and “owning copies” that was never a good fit, and which has now become more trouble than it’s worth.

Remember that before the 1976 revisions to the Copyright Act, it was pretty clear that software wasn’t protected by copyright.  Until then, vendors (there were very few, and, of course, no consumer market) protected their source code by delivering only object code and by holding users to the terms of contracts grounded in the law of trade secrets.

That regime worked just fine.  But vendors got greedy, and took the opportunity of the 1976 reforms to lobby for the extension of copyright to source code.  Later, they got greedier, and chipped away at bans on applying patent law to software as well.

Not that copyright or patent protection really bought the vendors much.  Efforts to use them to protect the “look and feel” of user interfaces, as if an interface were a novel that hewed too closely to an original work, fell flat.

Except when it came to stopping the wholesale reproduction and unauthorized sale of programs in other countries, copyright protection hasn’t been of much value to vendors.  And even then the real protection for software was and remains the rapid revision process driven by technological, rather than business or legal, change.

But the metaphor equating software with novels had unintended consequences.  With software protected by copyright, users—especially consumers—became accustomed to the language of copies and ownership and purchase, and to the protections of the law of sales, which applies to physical goods (books) and not to services (accounting).

So, if consumer advocates and legal scholars are enraged by the return to a purely contractual model for software use, in some sense the vendors have only themselves—or rather their predecessors—to blame.

But that doesn’t change the fact that software never fit the model of copyright, including the First Sale Doctrine.  Because source code sort of looked like writing in a language readable by a very few humans, the infamous CONTU Commission, in its recommendations to Congress, made the leap, by (poor) analogy, to treating software as a work of authorship.

With the 1976 Copyright Act, the law treated software as if it were a novel, giving exclusive rights to its “authors” for a period of time that is absurd compared with the short economic lifespan of any piece of code written since the time of Charles Babbage and Ada Lovelace.

The farther software evolves from a traditional “work of authorship” (visual programming, object-oriented architecture, interpreted languages such as HTML), the more unfortunate that decision looks in retrospect.  Source code is just a convenience, making it easier to write and maintain programs.  But it doesn’t do anything.  It must be compiled or interpreted before the hardware will make a peep or move a pixel.
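The point is easy to see in miniature.  A program’s text is just inert data until a compiler or interpreter acts on it; a minimal sketch in Python (an illustration of the general principle, not any vendor’s code):

```python
# Source code is just text: by itself, this string does nothing at all.
source = "result = 6 * 7"

# Only when an interpreter compiles and executes it does anything happen.
namespace = {}
code = compile(source, "<demo>", "exec")
exec(code, namespace)

print(namespace["result"])  # prints 42
```

Until the `compile` and `exec` steps run, the “work” the code utters simply does not exist, which is exactly Hersey’s point below.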

Author John Hersey, one of the CONTU commissioners, got it just right.  In his dissent from the recommendation to extend copyright to software, Hersey wrote, “software utters work.  Work is its only utterance and its only purpose.”

Work doesn’t need the incentives and protections we have afforded to novels and songs.  And consumers can no more resell work than they can take home their seat from the movie theater after the show.

Media updates for August

August is usually a quiet month for everything, especially technology policy. But a number of significant developments this summer, including intensive negotiations over Net Neutrality at the FCC and elsewhere, kept Larry busy with media queries and articles. We’ve added a dozen new posts to the Media Page for August alone. The accidents continue to pile up at the dangerous intersection of technology and policy, the theme of The Laws of Disruption.

Larry’s CNET article on the proposed Google-Verizon Framework for Net Neutrality legislation, which highlighted the cynicism of those attacking aspects of the proposal that were identical to features of the FCC’s own proposal last October, was widely reprinted and quoted, including a top-of-the-page run of nearly a day on Techmeme.  An earlier blog post exploring hidden dangers of the FCC’s proposal to “reclassify” broadband Internet was expanded and published as a white paper by The Progress and Freedom Foundation.

Other leading developments include the surprise decision by the U.S. Copyright Office to grant an exemption from the Digital Millennium Copyright Act for iPhone users who “jailbreak” their phones and install unauthorized apps or move to a different network.  See commentary at Techdirt.

Larry moderated two panels at the Privacy Identity and Innovation conference in Seattle, leading to a long blog post on the current state of the privacy debate.

And Microsoft co-founder Paul Allen’s surprise decision to try enforcing key patents against leading Internet economy companies brought the sorry state of the U.S. patent system back to the front burner.  Larry’s blog on the lawsuit led to two articles over at Techdirt.

Over the summer, Larry continued writing not only for this blog but for the Technology Liberation Front, the Stanford Law School Center for Internet & Society, and for CNET. He also published op-eds in The San Francisco Chronicle, The San Jose Mercury News and public radio’s “Future Tense.”  He appeared on CNET Live’s “Reporter’s Roundtable” to sum up a week’s worth of net neutrality headlines.

Larry was also interviewed and quoted in a wide range of business and mainstream publications, including The Wall Street Journal and Time magazine.

Paul Allen: When a Patent Troll is an Enigma

I don’t have a great deal to add to coverage of last week’s big patent story, which concerned the filing of a complaint by Microsoft co-founder Paul Allen against major technology companies including Apple, Google, Facebook and Yahoo. Dionne Searcey of The Wall Street Journal, Tom Krazit at CNET News.com, and Mike Masnick on Techdirt pretty much lay out as much as is known so far.

But given the notoriety of the case and the scope of its claims (the Journal, or at least its headline writer, has declared an all-out “patent war”), it seems like a good opportunity to dispel some common myths about the patent system and its discontents.

And then I want to offer one completely unfounded theory about what is really going on, one that no one has yet suggested: Paul Allen is out to become the greatest champion that patent reform will ever know.
