
New Hype for an Old Idea?

Contrary to what you might have been led to believe by an onslaught of recent hype and press coverage, ``peer-to-peer'' networking is nothing new. In fact, in a very real sense, it's as old as the Arpanet.

A little history

From the very beginning, the Arpanet (the predecessor to the Internet, back in the days when you could still count all the nodes in the world on the fingers of both hands without resorting to binary) was essentially symmetrical. To be sure, there were Terminal Interface Processors, called ``TIP's'', that did nothing but provide telnet access for a big batch of terminals. But hosts were ``peers'' in the fullest sense. Any host could initiate a connection with any other host, using any protocol.

There might, at any one time, be several telnet connections from Stanford to MIT, and several more from MIT to Stanford. (They might even have all been connected in one long chain.) There might be a couple of ftp connections thrown in, in each direction. Of course, each individual connection had a ``client'' and a ``server'' end. But the hosts, the computers, that marked the endpoints were equals: peers.

When e-mail and news came along, riding on top of a protocol called ``uucp'', they too were peer-to-peer. Mail servers and news servers exchanged messages with one another. News messages propagated from one end of the net to the other without centralized control: each server sent every incoming message to all of its neighbors except the one it came from.
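
That flooding rule is simple enough to sketch in a few lines. What follows is a toy illustration in Python, not actual news-server code; the function name and the ``send'' callback are invented for the example.

    seen = set()   # message IDs this server has already handled

    def receive_article(msg_id, body, came_from, neighbors, send):
        """Flood a news article: pass it along to every neighbor except its source."""
        if msg_id in seen:             # a copy that looped back around; drop it
            return
        seen.add(msg_id)
        for peer in neighbors:
            if peer != came_from:      # never echo an article back where it came from
                send(peer, msg_id, body)

The ``seen'' set is what keeps an article from circulating forever once every server has its copy.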

More recently, the X window system and Sun's Network File System (NFS) were developed. NFS is purely peer-to-peer; any computer can export (serve) files, any computer can mount them. A similar notion is seen in Microsoft file sharing -- it's nothing new. X is particularly interesting: a desktop computer runs an X server that then provides display, keyboard, and mouse services for client applications running elsewhere in the network.

A Recent Aberration

The World Wide Web has, for the brief time of its existence, distorted this simple view of the network. The overwhelming majority of computers on the Internet are ``clients'' that talk, via HTTP, FTP, IMAP, and POP3, with a smaller number of ``servers.''

As a result of the Web, an entire decade's worth of users has experienced computer networking without ever realizing that the machine on their desk was fully as capable a ``host'' as anything on the Arpanet twenty years before. They have pointed and clicked and exchanged e-mail and printed on shared printers without ever noticing that these ``servers'' were running on computers little different from their own.

An adventurous few, indeed, may have turned on ``file sharing'', only to be told by their IT department or their cable service provider that this opens a massive security hole. But Windows makes it difficult to share files, and especially to share applications -- God forbid that two users might run the same copy of Microsoft Office at the same time from the same server. Or at least Bill forbids it. (This isn't quite true -- Windows file sharing isn't all that difficult. But it's not safe: it's roughly equivalent to leaving your car unlocked with the keys in the ignition.)

Of course, almost everyone with an Internet connection has at least a few megabytes of web space at their ISP's to play with, on a server ``out there'' someplace. But this merely reinforces the idea that servers are big, complicated machines running arcane operating systems and magical server software, and don't have anything to do with mere PC's.

Vendors, and especially Microsoft, are of course perfectly happy to go along with the mythmaking and the hype. They are now preparing to roll out ``peer-to-peer'' networking as if it were something wonderful and new, wrapping layers of proprietary protocols and arcane API's around the fundamentally trivial idea that any computer on a network can get information from, and send information to, any other computer.

Back to the House of Peers

It is only with the recent rise of Linux and other open-source Unices that people -- ordinary users -- are rediscovering the power of peer-to-peer networking. Mostly they are doing it at home, safely behind their personal firewalls. Every major Linux distribution ships with a wide variety of traditional servers: mail, news, ftp, telnet, web. It ships with NFS, too, and of course X. And Samba, so that even Windows computers can join in the fun, sharing files and printers almost like the big boys.

More and more people are discovering that the Linux machines on their desktops are running Apache. Just like their hosted website -- maybe they can set up a mirror of it so they can preview pages before they go public. Maybe they can set up a little intranet -- just a family calendar and a directory for each family member. Maybe they send each other e-mail. Certainly they'll share their Internet connection and their printer, even with the Windows machines and Mac's. If they're really adventurous, maybe they'll set up a WikiWiki server or a MUD.

Suddenly, people are going to look around and wonder why, when every machine in their house has multiple gigabytes of disk and compute capacity to rival a Cray, their main server has a load average of 5 and /home is 90% full. Surely there must be a way to put all those unused cycles to good use. Surely there must be a way to use all that disk space. Why not... share it!

Killer Apps

Usenet news and e-mail were the peer-to-peer ``killer applications'' for the early Internet. Everyone used them. The Web was the next killer app -- it made the entire Internet into one huge interconnected sea of documents, accessible to anyone just by clicking on links. What will be the ``killer app'' for the new age of peer-to-peer?

Many people would quickly answer ``Napster'', but I think this misses the mark. Napster was basically a specialized index site coupled to a simple client-server combo that made it easy to share (i.e. serve and download) a particular type of file. But this is nothing new; it's just a restricted, dumbed-down version of the Web. Let's keep looking.

What do people really want to do?

  1. They want to talk to one another.
  2. They want to play games. With other people. In a rich virtual environment.
  3. They want to share things, especially things they've created: stories, songs, pictures, programs, worlds.
  4. (And oh, yes, they'd also like to find some use for all that unused disk space and all those idle cycles, and they'd really be happy if every stupid Windows crash and power failure didn't take out a couple of their favorite files or their carefully-tweaked desktop configuration along with it.)

The network of the future will consist of a layer of rich, highly-interactive shared environments -- worlds -- on top of a substrate of distributed storage and computation. In the end, every computer will be what it was at the dawn of the Net: a peer, both a server and a client as needed.

The difference, if there is one, is that more programs will be peers as well. Even now, the programs that make up Microsoft Office are peers of a rudimentary sort: they serve pieces of documents for other programs to ``embed.'' CORBA-based environments like Gnome on Linux take this even farther, swapping objects freely with their peers. Web servers pass requests on to back-end resources like databases.

In the future, this will go farther. Web browsers will, like the X window system, offer display and layout services. Text editors will offer their services -- and advanced features -- for filling in text boxes in web forms. Calendars and address books on pocket computers will serve their contents via short-range wireless networking. 3D rendering engines will happily render polygons on the screen for games and collaborative office environments alike.

Where do we go from here?

How will we get from the limited and underutilized peer-to-peer world of the present to the ubiquitous and fully-connected peer-to-peer world of the future? Interactive worlds and distributed computation and data.

Interactive Worlds

There is never going to be a single, unified Cyberspace -- there's room for a nearly infinite number of worlds. There's no reason why I, or at least my virtual representative, shouldn't be able to step through a hollow tree on Middle Earth and find myself on the red plains of Mars, still holding the copy of Moby Dick I picked up in that little bookstore in San Francisco.

Computation, storage, and bandwidth are all getting cheaper, but bandwidth isn't anywhere close to keeping up with the other two. In order to make the virtual universe work, everyone's virtual ``neighborhood'' will have to be running locally. When you step through a gateway, your local machine will pull down a description from the server that ``owns'' it, and in turn will serve the data that describes you and what you are doing. If you want to build your own piece of a world, anything from a cottage to a castle to a cluster of galaxies, your local machine will serve its description, too. Peer-to-peer.
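
Here is a rough sketch, in Python, of the shape of that exchange. The port, the host name, and the little JSON description are all made up for illustration -- this is not any actual world protocol, just the two roles side by side: serve your own description, fetch the one you're walking into.

    import json
    import threading
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    # What this machine says about its own user: a toy, invented description.
    MY_DESCRIPTION = {"avatar": "reader", "holding": ["Moby Dick"]}

    class PeerHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            payload = json.dumps(MY_DESCRIPTION).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(payload)

    # Serve our own description in the background (we are a server)...
    threading.Thread(
        target=HTTPServer(("0.0.0.0", 8001), PeerHandler).serve_forever,
        daemon=True,
    ).start()

    # ...and pull down the neighborhood we just stepped into (we are also a client).
    # "mars.example.net" stands in for whichever peer owns the far side of the gateway.
    world = json.loads(urlopen("http://mars.example.net:8001/").read())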

Distributed Computation and Data

Computation and data are, of course, already distributed. When I make an HTTP request in my browser, I'm invoking computational resources on a server somewhere that retrieves (or possibly computes) data and sends it to me. I do the same when I open a file mounted over NFS. Data isn't a problem: protocols like NFS and WebDAV make it easy to access data from anywhere, and markup with XML allows that data to be highly structured, and to keep its structure as it moves across the wire. The problem is that the software environment is highly fragmented: every server is an island, and every server is different.

Interpreted languages like Java and Squeak make it feasible to distribute software, but that's really a red herring -- ask yourself whether you really want other people sending software agents to run on your bank's computer. Fortunately, you don't need to have mobile programs as long as you can rely on standard interfaces. ``A chunk of data packaged up with the operations you perform on it'' is not just the standard description of an object (in the Java or Smalltalk sense); it describes a server just as well. Just as the HTTP protocol makes it easy to treat a server as a collection of data resources, CGI scripts and other active documents make it easy to treat it as a collection of objects.
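
To make that concrete, here is a minimal sketch in Python of a server treated as a collection of objects, in the spirit of a CGI script. The Calendar class, the path, and the port are all invented for the example: the URL names the object, and the query string names the operation on it.

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    class Calendar:
        """A chunk of data packaged up with its operations -- a toy published object."""
        def __init__(self):
            self.events = {"2002-12-25": "family dinner"}
        def lookup(self, date=""):
            return self.events.get(date, "nothing scheduled")

    published = {"/calendar": Calendar()}   # this server's collection of objects

    class ObjectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            url = urlparse(self.path)
            args = parse_qs(url.query)
            obj = published.get(url.path)
            op = getattr(obj, args.get("op", ["lookup"])[0], None) if obj else None
            reply = op(*args.get("arg", [])) if callable(op) else "no such object or operation"
            self.send_response(200)
            self.end_headers()
            self.wfile.write(reply.encode())

    HTTPServer(("0.0.0.0", 8000), ObjectHandler).serve_forever()

A request like http://localhost:8000/calendar?op=lookup&arg=2002-12-25 then reads like a method call on a remote object.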

Protocols like CORBA and, more recently, SOAP, along with their associated interface description languages, allow anyone to publish an interface for their computer, and anyone else to use that interface in a program. Again, it's peer-to-peer.
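
SOAP and CORBA carry a fair amount of machinery with them, so as a sketch of the shape of the idea, here is the same pattern using Python's standard XML-RPC library instead; the function, the address book contents, and the addresses are invented for the example. One peer publishes an operation:

    from xmlrpc.server import SimpleXMLRPCServer

    def lookup_number(name):
        """Look a friend up in this machine's (toy) address book."""
        book = {"alice": "555-0100", "bob": "555-0101"}
        return book.get(name, "unknown")

    server = SimpleXMLRPCServer(("0.0.0.0", 8080))
    server.register_function(lookup_number)
    server.serve_forever()

And any other peer calls it, almost as if it were a local function:

    from xmlrpc.client import ServerProxy

    peer = ServerProxy("http://192.168.1.10:8080")
    print(peer.lookup_number("alice"))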

Forget about huge corporations selling ``web services'' on their megaservers. There will be a little of that, sure, but most peer-to-peer services are going to be between machines in your workgroup, your cubicle, or (via Bluetooth, for example) your briefcase and your pockets. When your cell phone needs a friend's number it will go, not to some big central directory, but to the address book in your PDA or on your home PC. Maybe your home PC will go to your friend's website. No huge servers providing expensive services to hapless customers, just a seamless web of peers, talking to one another.


Stephen R. Savitzky <steve@theStarport.org>