Although last Friday I received my latest package from Amazon, I'm still not reading any of those books; instead, I'm reading Jonathan Zittrain's The Future of the Internet and How to Stop It. Disquieting, insightful, extremely interesting.
Here's the latest passage I've read from it:
The situation for online copyright illustrates that for perfect enforcement to work, generative alternatives must not be widely available. In 2007, the movie industry and technology makers unveiled a copy protection scheme for new high-definition DVDs to correct the flaws in the technical protection measures applied to regular DVDs over a decade earlier. The new system was compromised just as quickly; instructions quickly circulated describing how PC users could disable the copy protection on HD-DVDs. So long as the generative PC remains at the center of the modern information ecosystem, the ability to deploy trusted systems with restrictions that interfere with user expectations is severely limited: tighten a screw too much, and it will become stripped.
So could the generative PC ever really disappear? As David Post wrote in response to a law review article that was a precursor to this book, “a grid of 400 million open PCs is not less generative than a grid of 400 million open PCs and 500 million locked-down TiVos.” Users might shift some of their activities to tethered appliances in response to the security threats described in Chapter Three, and they might even find themselves using locked-down PCs at work or in libraries and Internet cafés. But why would they abandon the generative PC at home? The prospect may be found in “Web 2.0.” As mentioned earlier, in part this label refers to generativity at the content layer, on sites like Wikipedia and Flickr, where content is driven by users. But it also refers to something far more technical—a way of building Web sites so that users feel less like they are looking at Web pages and more like they are using applications on their very own PCs. New online map services let users click to grasp a map section and move it around; new Internet mail services let users treat their online e-mail repositories as if they were located on their PCs. Many of these technologies might be thought of as technologically generative because they provide hooks for developers from one Web site to draw upon the content and functionality of another—at least if the one lending the material consents.
Yet the features that make tethered appliances worrisome—that they are less generative and that they can be so quickly and effectively regulated—apply with equal force to the software that migrates to become a service offered over the Internet. Consider Google’s popular map service. It is not only highly useful to end users; it also has an open API (application programming interface) to its map data, which means that a third-party Web site creator can start with a mere list of street addresses and immediately produce on her site a Google Map with a digital push-pin at each address. This allows any number of “mash-ups” to be made, combining Google Maps with third-party geographic datasets. Internet developers are using the Google Maps API to create Web sites that find and map the nearest Starbucks, create and measure running routes, pinpoint the locations of traffic light cameras, and collate candidates on dating sites to produce instant displays of where one’s best matches can be found.
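The mash-up pattern Zittrain describes can be sketched in a few lines: a third-party site starts with nothing but a plain list of street addresses, borrows a map service's geocoding functionality, and emits a push-pin marker per address. This is a schematic illustration only; the function names, dictionary layout, and coordinates below are toy stand-ins I've made up, not the real Google Maps API.

```python
def make_pushpins(addresses, geocode):
    """Turn a bare list of street addresses into map markers.

    `geocode` stands in for the borrowed API: any callable that maps
    an address string to a (latitude, longitude) pair.
    """
    pins = []
    for address in addresses:
        lat, lng = geocode(address)  # the functionality lent by the map service
        pins.append({"label": address, "lat": lat, "lng": lng})
    return pins


# A toy geocoder with hypothetical coordinates, standing in for the
# remote service so the sketch runs without a network or an API key.
FAKE_COORDS = {
    "1600 Amphitheatre Pkwy": (37.422, -122.084),
    "1 Infinite Loop": (37.332, -122.031),
}

pins = make_pushpins(list(FAKE_COORDS), FAKE_COORDS.get)
```

The point of the pattern is that the mash-up author writes almost no code of her own: the hard part, turning an address into a point on the globe, is entirely the lending site's functionality.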
Because it allows coders access to its map data and functionality, Google’s mapping service is generative. But it is also contingent: Google assigns each Web developer a key and reserves the right to revoke that key at any time, for any reason—or to terminate the whole Google Maps service. It is certainly understandable that Google, in choosing to make a generative service out of something in which it has invested heavily, would want to control it. But this puts within the control of Google, and anyone who can regulate Google, all downstream uses of Google Maps—and maps in general, to the extent that Google Maps’ popularity means other mapping services will fail or never be built.
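The contingency in that paragraph has a concrete shape: every downstream call is gated on a vendor-issued key that the vendor can invalidate unilaterally. The sketch below simulates that relationship with made-up class and method names (there is no real service behind it); what matters is that revocation on the vendor's side breaks the mash-up instantly, with nothing the downstream developer can do about it.

```python
class KeyRevokedError(Exception):
    """Raised when the vendor has withdrawn a developer's access."""


class MapService:
    """A toy stand-in for a tethered, key-gated web service."""

    def __init__(self):
        self._valid_keys = set()

    def issue_key(self, developer):
        key = f"key-{developer}"
        self._valid_keys.add(key)
        return key

    def revoke_key(self, key):
        # The vendor's unilateral kill switch: no notice, no appeal.
        self._valid_keys.discard(key)

    def fetch_map(self, key, address):
        if key not in self._valid_keys:
            raise KeyRevokedError("access withdrawn by the vendor")
        return {"address": address}


svc = MapService()
key = svc.issue_key("mashup-site")
ok = svc.fetch_map(key, "1 Infinite Loop")  # works while the key is valid

svc.revoke_key(key)                         # the vendor changes its mind
try:
    svc.fetch_map(key, "1 Infinite Loop")
    revoked = False
except KeyRevokedError:
    revoked = True
```

Contrast this with software installed on a generative PC: once a binary is on your disk, the vendor has no equivalent switch to flip.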
Software built on open APIs that can be withdrawn is much more precarious than software built under the old PC model, where users with Windows could be expected to have Windows for months or years at a time, whether or not Microsoft wanted them to keep it. To the extent that we find ourselves primarily using a particular online service, whether to store our documents, photos, or buddy lists, we may find switching to a new service more difficult, as the data is no longer on our PCs in a format that other software can read. This disconnect can make it more difficult for third parties to write software that interacts with other software, such as desktop search engines that can currently paw through everything on a PC in order to give us a unified search across a hard drive. Sites may also limit functionality that the user expects or assumes will be available. In 2007, for example, MySpace asked one of its most popular users to remove from her page a piece of music promotion software that was developed by an outside company. She was using it instead of MySpace’s own code. Google unexpectedly closed its unsuccessful Google Video purchasing service and remotely disabled users’ access to content they had purchased; after an outcry, Google offered limited refunds instead of restoring access to the videos.
Continuous Internet access thus is not only facilitating the rise of appliances and PCs that can phone home and be reconfigured by their vendors at any moment. It is also allowing a wholesale shift in code and activities from endpoint PCs to the Web. There are many functional advantages to this, at least so long as one’s Internet connection does not fail. When users can read and compose e-mail online, their inboxes and outboxes await no matter whose machines they borrow—or what operating system the machines have—so long as they have a standard browser. It is just a matter of getting to the right Web site and logging in. We are beginning to be able to use the Web to do word processing, spreadsheet analyses, indeed, nearly anything we might want to do.
Once the endpoint is consigned to hosting only a browser, with new features limited to those added on the other end of the browser’s window, consumer demand for generative PCs can yield to demand for boxes that look like PCs but instead offer only that browser. Then, as with tethered appliances, when Web 2.0 services change their offerings, the user may have no ability to keep using an older version, as one might do with software that stops being actively made available.
This is an unfortunate transformation. It is a mistake to think of the Web browser as the apex of the PC’s evolution, especially as new peer-to-peer applications show that PCs can be used to ease network traffic congestion and to allow people directly to interact in new ways. Just as those applications are beginning to show promise—whether as ad hoc networks that PCs can create among each other in the absence of connectivity to an ISP, or as distributed processing and storage devices that could apply wasted computing cycles to far-away computational problems—there is less reason for those shopping for a PC to factor generative capacity into a short-term purchasing decision. As a 2007 Wall Street Journal headline put it: “‘Dumb terminals can be a smart move’: Computing devices lack extras but offer security, cost savings.”
* * *
Generative networks like the Internet can be partially controlled, and there is important work to be done to enumerate the ways in which governments try to censor the Net. But the key move to watch is a sea change in control over the endpoint: lock down the device, and network censorship and control can be extraordinarily reinforced. The prospect of tethered appliances and software as service permits major regulatory intrusions to be implemented as minor technical adjustments to code or requests to service providers. Generative technologies ought to be given wide latitude to find a variety of uses—including ones that encroach upon other interests. These encroachments may be undesirable, but they may also create opportunities to reconceptualize the rights underlying the threatened traditional markets and business models. An information technology environment capable of recursive innovation in the realms of business, art, and culture will best thrive with continued regulatory forbearance, recognizing that the disruption occasioned by generative information technology often amounts to a long-term gain even as it causes a short-term threat to some powerful and legitimate interests.
The generative spirit allows for all sorts of software to be built, and all sorts of content to be exchanged, without anticipating what markets want—or what level of harm can arise. The development of much software today, and thus of the generative services facilitated at the content layer of the Internet, is undertaken by disparate groups, often not acting in concert, whose work can become greater than the sum of its parts because it is not funneled through a single vendor’s development cycle.
The keys to maintaining a generative system are to ensure its internal security without resorting to lockdown, and to find ways to enable enough enforcement against its undesirable uses without requiring a system of perfect enforcement. The next chapters explore how some enterprises that are generative at the content level have managed to remain productive without requiring extensive lockdown or external regulation, and apply those lessons to the future of the Internet.