Monthly Archives: September 2010

The Net Neutrality Sausage Factory Ramps up Production

My article for CNET News.com this morning analyzes the “leaked” net neutrality bill from Rep. Henry Waxman, chair of the House Energy and Commerce Committee.  I put leaked in quotes because so many sources came up with this document yesterday that its escape from the secrecy of the legislative process hardly seems dramatic.  Reporters with sources inside Waxman’s office, including The Hill and The Washington Post, expect Waxman to introduce the bill sometime this week.

The CNET article goes through the bill in some detail, and I won’t duplicate the analysis here.  It is a relatively short piece of legislation that makes limited changes to Title I of the Communications Act, giving the FCC only the authority it needs to implement “core” regulations that would allow the agency to act against violations of the open Internet principles.

With a few notable exceptions, the Waxman bill mirrors both the FCC’s own proposed rulemaking from last October (still pending) as well as the Google-Verizon legislative framework the two companies released in July.  All three begin with the basic open Internet rules that originate with the FCC’s 2005 policy statement, with some version of a content nondiscrimination rule and a transparency rule added.    (There’s considerable controversy over the wording of the nondiscrimination rule; none about transparency.)

The Waxman draft would sunset at the end of 2012.  And it asks the FCC to report to Congress sooner than that if any additional provisions are required to implement key features of the National Broadband Plan, which has sadly been lost to the tempest-in-a-teapot wrangling over net neutrality before and since it was published.

Many commentators who are already condemning Waxman for selling out “real” net neutrality are upset over provisions—including the sensible call for case-by-case adjudication of complaints rather than the fool’s errand of developing detailed rules and regulations in advance—that appear in all three documents.  They either don’t know or don’t care that in that regard Waxman’s bill breaks no new ground.

Where the bill differs most is its treatment of wireless broadband.  The FCC’s October NPRM, albeit with reservations, ultimately concluded that wireless should not be treated any differently than wireline broadband.  Google-Verizon reached the opposite conclusion, encouraging lawmakers to leave wireless broadband out of any new rules, at least until the market and the infrastructure that supports it become more stable.

Waxman’s draft calls for limited application of the rules to wireless broadband, in particular prohibiting carriers from blocking applications that compete with their voice or video offerings.  But it isn’t clear whether that prohibition applies only to voice or video offerings delivered over wireless broadband or extends to products (digital voice, FiOS, U-verse) that the wireless carriers also offer on their wireline infrastructure.

And the draft also carves out an exception to that rule for app stores, meaning carriers can still control what apps their customers download onto their wireless devices based on whatever criteria (performance, politics, prudery) the app store operator uses.

If net neutrality needs federal legislation and federal enforcement (it does not), then this bill is certainly not the worst way to go about it.  The draft shows considerable evidence of horse-trading and of multiple cooks in the kitchen, leaves confusing holes and open questions, and, so far, doesn’t even have a name.  But at least it explicitly takes Title II reclassification of broadband Internet access, the worst idea to surface in the course of this multi-year debate, off the table.

Reading the draft makes me nostalgic for the “legislative process” course I took in law school with long-time Washington veteran Abner Mikva, who has served in all three branches over his career.  There was the official version of the legislative process—the “Schoolhouse Rock” song, for people of a certain generation—and then there was what really happened.  (See also the classic Simpsons episode featuring a helpful janitor who “resembled” Walter Mondale, “Mr. Spritz Goes to Washington.”)

Beyond the text, in other words, is the shadow theater.  Why is Waxman, a strong net neutrality supporter, leading the charge for a bill that gives up much of what the most extreme elements have demanded?  (Watch for the inevitable condemnation of Waxman that will follow the introduction of the bill, and for Tea Party opposition to any Republican support for it.)  Why has FCC Chairman Julius Genachowski expressed appreciation that Congress is working on a solution, when his own agency has in theory already developed the necessary record to proceed?  Why introduce a bill so close to adjournment, with election results so uncertain?

I have my theories, as does every other policy analyst who covers the net neutrality beat.  But predicting what will happen in politics, as opposed to technology, is a losing proposition.  So I’ll just keep watching, and trying to point out the most egregious misstatements of fact along the way.

The end of software ownership

My article for CNET this morning, “The end of software ownership…and why to smile,” looks at the important decision a few weeks ago in the Ninth Circuit copyright case, Vernor v. Autodesk.  (See also the excellent blog posts on Eric Goldman’s blog; unfortunately, those posts didn’t run until after I’d finished the CNET piece.)

The CNET article took the provocative position that Vernor signals the eventual (perhaps imminent) end to the brief history of users “owning” “copies” of software that they “buy,” replacing the regime of ownership with one of rental.  And, perhaps more controversially still, I try to make the case that such a dramatic change is in fact not, as most commentators on the decision have concluded, a terrible loss for consumers but a liberating victory.

I’ll let the CNET article speak for itself.  Here I want to make a somewhat different point about the case, which is that the “ownership” regime was always an aberration, the result of an unfortunate need to rely on media to distribute code (until the Internet) coupled with a very bad decision back in 1976 to extend copyright protection to software in the first place.

The Vernor Decision, Briefly

First, a little background.

The Vernor decision, in brief, took a big step in an ongoing move by the federal courts to allow licensing agreements to trump user rights reserved by the Copyright Act.  In the Vernor case, the most important of those rights was at issue:  the right to resell used copies.

Vernor, an eBay seller of general merchandise, had purchased four used copies of an older version of AutoCAD from a small architectural firm at an “office sale.”

The firm had agreed in the license agreement not to resell the software, and had reaffirmed that agreement when it upgraded its copies to a new version of the application.  Still, the firm sold the old version’s disks to Vernor, who in turn put them up for auction on eBay.

Autodesk tried repeatedly to cancel the auctions, until, when Vernor put the fourth copy up for sale, eBay temporarily suspended his account.  Vernor sued Autodesk, asking the court for a declaratory judgment (essentially a preemptive lawsuit) that as the lawful owner of a copy of AutoCAD, he had the right to resell it.

A lower court agreed with Vernor, but the Ninth Circuit reversed, and held that the so-called “First Sale Doctrine,” codified in the Copyright Act, did not apply because the architectural firm never bought a “copy” of the application.  Instead, the firm had only paid to use the software under a license from Autodesk, a license the firm had clearly violated.  Since the firm never owned the software, Vernor acquired no rights under copyright when he purchased the disks.

The Long Arm of Vernor?

This is an important decision, since all commercial software (and even open source and freeware) is made available by its producer only on condition that the user accept a license agreement.

These days, nearly all licenses purport to restrict the user’s ability to resell the software without permission from the producer.  (In the case of open source software under the GPL, users can redistribute the software so long as they pass along the same license terms, including the requirement that modified versions also be distributed under the GPL.)  Thus, if the Vernor decision stands, used markets for software will quickly disappear.

Moreover, as the article points out, there’s no reason to think the decision is restricted just to software.  The three-judge panel suggested that any product—or at least any information-based product—that comes with a license agreement is in fact licensed rather than sold.  Thus, books, movies, music and video games distributed electronically in software-like formats readable by computers and other devices are probably all within the reach of the decision.

Who knows?  Perhaps Vernor could be applied to physical products—books, toasters, cars—that are conveyed via license.  Maybe before long consumers won’t own anything anymore; they’ll just get to use things, like seats at a movie theater (the classic example of a license), subject to limits imposed—and even changed at will—by the licensor.  We’ll become a nation of renters, owning nothing.

Well, not so fast.  First of all, let’s note some institutional limits of the decision.  The Ninth Circuit’s ruling applies only within federal courts of the western states (including California and Washington, where this case originated).  Other circuits facing similar questions of interpretation may reach different or even opposite decisions.

Vernor may also appeal the decision to the full Ninth Circuit or even the U.S. Supreme Court, though in both cases the decision to reconsider would be at the discretion of the respective court.  (My strong intuition is that the Supreme Court would not take an appeal on this case.)

Also, as Eric Goldman notes, the Ninth Circuit already has two other First Sale Doctrine cases in the pipeline.  Other panels of the court may take a different or more limited view.

For example, the Vernor case deals with a license that was granted by a business (Autodesk) to another business (the architectural firm).  But courts are often hesitant to enforce onerous or especially one-sided terms of a contract (a license is a kind of contract) between a business and an individual consumer.  Consumers are less likely than businesses to understand the terms of an agreement, let alone to have any realistic chance of negotiating over terms they don’t like.

Courts, including the Ninth Circuit, may decline to extend the ruling to other forms of electronic content, let alone to physical goods.

The Joy of Renting

So for now let’s take the decision on its face:  Software licensing agreements that say the user is only licensing the use of software rather than purchasing a copy are enforceable.  Such agreements require only a few “magic words” (to quote the Electronic Frontier Foundation’s derisive view of the opinion) to transform software buyers into software renters.  And it’s a safe bet that any existing End User License Agreements (EULAs) that don’t already recite those magic words will be quickly revised to do so.

(Besides EFF, see scathing critiques of the Vernor decision at Techdirt and Wired.)

So.  You don’t own those copies of software that you thought you purchased.  You just rent them from the vendor, on terms offered on a take-it-or-leave-it basis and subject to revision at will.  All those disks in all those cardboard albums sitting on a shelf in your office are really the property of Microsoft, Intuit, Activision, and Adobe.  You don’t have to return them when the license expires, but you can’t transfer ownership of them to someone else because you don’t own them in the first place.

Well, so what?  Most of those boxes are utterly useless within a very short period of time, which is why there never has been an especially robust market for used software.  What real value is there to a copy of Windows 98, or last year’s TurboTax, or Photoshop Version 1.0?

Why does software get old so quickly, and why is old software worthless?  To answer those questions, I refer in the article to an important 2009 essay by Kevin Kelly.  Kelly, for one, thinks the prospect of renting rather than owning information content is not only wonderful but inevitable, and not because courts are being tricked into saying so.  (Kelly’s article says nothing about the legal aspects of ownership and renting.)

Renting is better for consumers, Kelly says, because ownership of information products imposes significant costs and provides absolutely no benefits to the consumer.  Once content is transformed into electronic formats, both the media (8-track) and the devices that play them (Betamax) grow quickly obsolete as technology improves under the neutral principle of Moore’s Law.  So if you own the media you have to store it, maintain it, catalog it and, pretty soon, replace it.  If you rent, then, as with any tenancy, those costs are borne by the landlord.

Consumers who own libraries of media regularly find themselves needing to replace them with new media if they want to take advantage of the new features and functions of new media-interpreting devices.  You’re welcome to keep the 78s that scratch and pop and hiss, but who really wants to?  Nostalgia only goes so far, and only for a small subset of consumers.  Most of us like it when things get better, faster, smaller, and cheaper.

In the case of software, there’s the additional and rapid obsolescence of the code itself.  Operating systems have to be rewritten as the hardware improves and platforms proliferate.  Tax preparation software has to be replaced every year to keep up with the tax code.  Image manipulation software gets ever more sophisticated as display devices are radically improved.

Unlike a book or a piece of music, software is written only for the computer to “read” in the first place.  You can always read an old book, whether or not you prefer the convenience of a device such as a Kindle.  But you could never read the object code for AutoCAD even if you wanted to—the old version (which got old fast, and not just to encourage you to buy new versions) is just taking up space in your closet.

The Real Crime Was Extending Copyright to Software in the First Place

In that light, it never made much sense to “own” “copies” of software in the first place.  That was only the distribution model for a short time, necessitated by an unfortunate technical limit of computer architecture that has nearly disappeared.  CPUs require machine-readable code to be moved into RAM in order to be executed.

But core memory was expensive.  Code came loaded on cheap tape, which was copied to more expensive disks, which in turn were read into even more expensive memory.  In a perfect world with unlimited free memory, the computer would have come pre-loaded with everything.

That wouldn’t have solved the obsolescence problem, however.  But the Internet solved that by eliminating the need for physical media copies altogether.  Nearly all the software on my computer was downloaded—if I got a disk at all, it was just to initiate the download and installation.  (The user manual, the other component of the software album, is only on the disk or online these days.)

As we move from physical copies to downloaded software, vendors can more easily and more quickly issue new versions, patches, upgrades, and added functionality (new levels of video games, for example).

And, as we move from physical copies to virtual copies residing in the cloud, it becomes increasingly less weird to think that the thing we paid for—the thing that’s sitting right there, in our house or office—isn’t really ours at all, even though we paid for it, bagged it, transported it, and unwrapped it just as we do all the other commodities that we do own.

That’s why the Vernor decision, in the end, isn’t really all that revolutionary.  It just acknowledges in law what has already happened in the market.  We don’t buy software.  We pay for a service—whether by the month, or by the user, or by looking at ads, or by the amount of processing or storage or whatever we do with the service—and regardless of whether the software that implements the service runs on our computer or someone else’s, or, for that matter, everyone else’s.

The crime here, if there is one, isn’t that the courts are taking away the First Sale Doctrine.  It’s not, in other words, that one piece of copyright law no longer applies to software.  The crime is that copyright—any part of it—ever applied to software in the first place.  That’s what led to the culture of software “packages” and “suites” and “owning copies” that was never a good fit, and which now has become more trouble than it’s worth.

Remember that before the 1976 revisions to the Copyright Act, it was pretty clear that software wasn’t protected by copyright.  Until then, vendors (there were very few, and, of course, no consumer market) protected their source code either by delivering only object code or by holding users to the terms of contracts based on the law of trade secrets.

That regime worked just fine.  But vendors got greedy, and took the opportunity of the 1976 reforms to lobby for extension of copyright for source code.  Later, they got greedier, and chipped away at bans on applying patent law to software as well.

Not that copyright or patent protection really bought the vendors much.  Efforts to use it to protect the “look and feel” of user interfaces, as if they were novels that read too closely to an original work, fell flat.

Except when it came to stopping the wholesale reproduction and unauthorized sale of programs in other countries, copyright protection hasn’t been of much value to vendors.  And even then the real protection for software was and remains the rapid revision process driven by technological, rather than business or legal, change.

But the metaphor equating software with novels had unintended consequences.  With software protected by copyright, users—especially consumers—became accustomed to the language of copies and ownership and purchase, and to the protections of the law of sales, which applies to physical goods (books) and not to services (accounting).

So, if consumer advocates and legal scholars are enraged by the return to a purely contractual model for software use, in some sense the vendors have only themselves—or rather their predecessors—to blame.

But that doesn’t change the fact that software never fit the model of copyright, including the First Sale Doctrine.  Just because source code kind of sort of looked like it was written in a language readable by a very few humans, the infamous CONTU Commission, in its recommendations to Congress, made the leap to treating software as a work of authorship by (poor) analogy.

With the 1976 Copyright Act, the law treated software as if it were a novel, giving exclusive rights to its “authors” for a period of time that is absurd compared to the short economic lifespan of any piece of code written since the time of Charles Babbage and Ada Lovelace.

The farther software evolves from a traditional “work of authorship” (visual programming, object-oriented architectures, markup languages such as HTML), the more unfortunate that decision looks in retrospect.  Source code is just a convenience, making it easier to write and maintain programs.  But it doesn’t do anything.  It must be compiled or interpreted before the hardware will make a peep or move a pixel.

Author John Hersey, one of the CONTU commissioners, got it just right.  In his dissent from the recommendation to extend copyright to software, Hersey wrote, “software utters work.  Work is its only utterance and its only purpose.”

Work doesn’t need the incentives and protections we have afforded to novels and songs.  And consumers can no more resell work than they can take home their seat from the movie theater after the show.

Media updates for August

August is usually a quiet month for everything, especially technology policy. But a number of significant developments this summer, including intensive negotiations over Net Neutrality at the FCC and elsewhere, kept Larry busy with media queries and articles. We’ve added a dozen new posts to the Media Page for August alone. The accidents continue to pile up at the dangerous intersection of technology and policy, the theme of The Laws of Disruption.

Larry’s CNET article on the proposed Google-Verizon Framework for Net Neutrality legislation, which highlighted the cynicism of those attacking aspects of the proposal that were identical to features of the FCC’s own proposal last October, was widely reprinted and quoted, including a top-of-the-page run of nearly a day on Techmeme.  An earlier blog post exploring hidden dangers of the FCC’s proposal to “reclassify” broadband Internet was expanded and published as a white paper by The Progress and Freedom Foundation.

Other leading developments include the surprise decision by the U.S. Copyright Office to grant an exemption from the Digital Millennium Copyright Act for iPhone users who “jailbreak” their phones to install unauthorized apps or move to a different network.  See commentary at Techdirt.

Larry moderated two panels at the Privacy Identity and Innovation conference in Seattle, leading to a long blog post on the current state of the privacy debate.

And Microsoft co-founder Paul Allen’s surprise decision to try enforcing key patents against leading Internet economy companies brought the sorry state of the U.S. patent system back to the front burner.  Larry’s blog on the lawsuit led to two articles over at Techdirt.

Over the summer, Larry continued writing not only for this blog but for the Technology Liberation Front, the Stanford Law School Center for Internet & Society, and for CNET. He also published op-eds in The San Francisco Chronicle, The San Jose Mercury News and public radio’s “Future Tense.”  He appeared on CNET Live’s “Reporter’s Roundtable” to sum up a week’s worth of net neutrality headlines.

Larry was also interviewed and quoted in a wide range of business and mainstream publications, including The Wall Street Journal and Time magazine.

Techdirt/Paul Allen

“Is Paul Allen’s patent madness really an attempt to show the madness of patents?”, Techdirt, August 30, 2010. Larry’s blog post on Paul Allen’s patent lawsuit against leading Internet companies set off a minor firestorm. Mike Masnick of Techdirt doubted Larry’s speculation that Allen was really just trying to show how broken the patent system is. In a later post, also on Techdirt, Masnick reviewed Larry’s comments about the particular problem of jury trials for patent infringement, especially now that the patent office approves pretty much everything. See also Larry Ebert’s comments on IPBiz.