The Era of Fragmentation, Part 1: Load Factor

By the early 1980s, the roots of what we know now as the Internet had been established – its basic protocols designed and battle-tested in real use – but it remained a closed system almost entirely under the control of a single entity, the U.S. Department of Defense. Soon that would change, as it expanded to academic computer science departments across the U.S. with CSNET. It would continue to grow from there within academia, before finally opening to general commercial use in the 1990s.

But that the Internet would become central to the coming digital world, the much touted “information society,” was by no means obvious circa 1980. Even for those who had heard of it, it remained little more than a very promising academic experiment. The rest of the world did not stand still, waiting with bated breath for its arrival. Instead, many different visions for bringing online services to the masses competed for money and attention.

Personal Computing

By about 1975, advances in semiconductor manufacturing had made possible a new kind of computer. A few years prior, engineers had figured out how to pack the core processing logic of a computer onto a single microchip – a microprocessor. Companies such as Intel began to offer high-speed short-term memory on chips as well, to replace the magnetic core memory of previous generations of computers. This brought the most central and expensive parts of the computer under the sway of Moore’s Law, which, in turn, drove the unit price of chip-based computing and memory relentlessly downward for decades to come. By the middle of the decade, this process had already brought the price of these components low enough that a reasonably comfortable middle-class American might consider buying and building a computer of his or her own. Such machines were called microcomputers (or, sometimes, personal computers).

The claim to the title of the first personal computer has been fiercely contested, with some looking back as far as Wes Clark’s LINC or the Lincoln Labs TX-0, which, after all, were wielded interactively by a single user at a time. Putting aside strict questions of precedence, any claimant to significance based on historical causality must concede to one obvious champion. No other machine had the catalytic effect that the MITS Altair 8800 had in bringing about the explosion of microcomputing in the late 1970s.

The Altair 8800, atop optional 8-inch floppy disk unit

The Altair fell into the electronic hobbyist community like a seed crystal. It convinced hobbyists that it was possible for a person to build and own their own computer at a reasonable price, and they coalesced into communities to discuss their new machines, like the Homebrew Computer Club in Menlo Park. Those hobbyist cells then launched the much wider wave of commercial microcomputing based on mass-produced machines that required no hardware skills to bring to life, such as the Apple II and Radio Shack TRS-80.

By 1984, 8% of U.S. households had their own computer, a total of some seven million machines[^1]. Meanwhile, businesses were acquiring their own fleets of personal computers at the rate of hundreds of thousands per year, mostly the IBM 5150 and its clones[^2]. At the higher end of the price range for single-user computers, a growing market had also appeared for workstations from the likes of Silicon Graphics and Sun Microsystems – beefier computers equipped as standard with high-end graphical displays and networking hardware, intended for use by scientists, engineers, and other technical specialists.

None of these machines would be invited to play in the rarefied world of ARPANET. Yet many of their users wanted access to the promised fusion of computers and communications that academic theorists had been talking up in the popular press since Licklider and Taylor’s 1968 “The Computer as a Communication Device,” and even before. As far back as 1966, computer scientist John McCarthy had promised in Scientific American that “[n]o stretching of the demonstrated technology is required to envision computer consoles installed in every home and connected to public-utility computers through the telephone system.” The range of services such a system could offer, he averred, would be impossible to enumerate, but he put forth a few examples: “Everyone will have better access to the Library of Congress than the librarian himself now has. …Full reports on current events, whether baseball scores, the smog index in Los Angeles or the minutes of the 178th meeting of the Korean Truce Commission, will be available for the asking. Income tax returns will be automatically prepared on the basis of continuous, cumulative annual records of income, deductions, contributions and expenses.”

Articles in the popular press described the possibilities for electronic mail, digital games, and services of all kinds, from legal and medical advice to online shopping. But how, practically, would all these imaginings take shape? Many answers were in the offing. In hindsight, this era bears the aspect of a broken mirror. All of the services and concepts that would characterize the commercial internet of the 1990s – and then some – were manifest in the 1980s, but in fragments, scattered piecemeal across dozens of different systems. With a few exceptions[^3], these systems did not interconnect; each stood isolated from the others, a “walled garden,” in later terminology. Users on one system had no way to communicate or interact with those on another, and the quest to attract more users was thus for the most part a zero-sum game.

In this installment, we’ll consider one set of participants in this new digital land grab: time-sharing companies looking to diversify into a new market with attractive characteristics.

Load Factor

In 1892, Samuel Insull, a protégé of Thomas Edison, headed west to lead a new branch of Edison’s electrical empire, the Chicago Edison Company. There he consolidated many of the core principles of modern utility management, among them the concept of the load factor – the average load on the electrical system divided by its highest load. The higher the load factor the better, because any deviation below 1/1 represents waste – expensive capital capacity that’s needed to handle the peak of demand, but left idle in the troughs. Insull therefore set out to fill in the troughs in the demand curve by developing new classes of customers that would use electricity at different times of day (or even in different seasons), even if it meant offering them discounted rates. In the early years of electrical power, the primary demand came from domestic lighting, concentrated in the evening. So Insull promoted the use of electricity for industrial machinery to fill the daytime hours. This still left dips in the morning and evening rush, so he convinced the Chicago streetcar systems to convert to electric traction. And so Insull maximized the value of his capital investments, even though it often meant offering lower prices[^hughes].

Insull in 1926, when he was pictured on the cover of Time magazine.

[^hughes]: Thomas P. Hughes, Networks of Power (1983), 216-225.

The same principles still applied to capital investments in computers nearly a century later, and it was exactly the desirability of a balanced load factor and the incentive for offering lower off-peak prices that made possible two new online services for microcomputers that launched nearly simultaneously in the summer of 1979: CompuServe and The Source.
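The arithmetic behind the load factor is simple enough to sketch. Below is a minimal Python illustration with invented hourly figures – not data from Insull’s utility or from any of the companies discussed here – showing how discounted off-peak customers raise the load factor of a plant whose peak is fixed by business-hours demand:

```python
# Load factor = average load / peak load, computed over one day's demand curve.
# All figures below are hypothetical, chosen only to illustrate the principle.

def load_factor(hourly_load):
    """Average load divided by peak load."""
    return sum(hourly_load) / (len(hourly_load) * max(hourly_load))

# Business-only demand: near capacity during the working day, nearly idle otherwise.
business_only = [5] * 9 + [90] * 8 + [5] * 7        # 24 hourly readings

# The same daytime peak, plus discounted evening use by a second class of customers.
with_off_peak = [5] * 9 + [90] * 8 + [60] * 7

print(f"business only: {load_factor(business_only):.2f}")   # ~0.37
print(f"with off-peak: {load_factor(with_off_peak):.2f}")   # ~0.55
```

The peak – and hence the capital plant required – is unchanged in both cases; only the idle troughs are filled, which is exactly the logic Insull applied to streetcars and the time-sharing firms would apply to home computer hobbyists.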

CompuServe

In 1969, the newly-formed Golden United Life Insurance company of Columbus, Ohio created a subsidiary called the Compu-Serv Network. The founder of Golden United wanted it to be a cutting-edge, high-tech company with computerized records, and so he had hired a young computer science grad named John Goltz to lead the effort. Goltz, however, was gulled by a DEC salesman into buying a PDP-10, an expensive machine with far more computer power than Golden United currently needed. The idea behind Compu-Serv was to turn that error into an opportunity, by selling the excess computer power to paying customers who would dial into the Compu-Serv PDP-10 via a remote terminal. In the late 1960s this time-sharing model for selling computer service was spreading rapidly, and Golden United wanted to get its own cut of the action. In the 1970s the time-sharing subsidiary spun off to operate independently, re-branded itself as CompuServe, and built its own packet-switching network in order to be able to offer affordable, nationwide access to its computer centers in Columbus.

A national market not only gave the company access to more potential customers, it also extended the demand curve for computer time by spreading it across four time zones. Nonetheless, there was still a large gulf of time between the end of business hours in California and the start of business on the East Coast, not to mention the weekends. CompuServe CEO Jeff Wilkins saw an opportunity in the growing fleet of home computers, many of whose owners whiled away their evening and weekend hours on their electronic hobby. What if they were offered access to email, message boards, and games on CompuServe computers, at discounted rates for evening and weekend access ($5 an hour, versus $12 during the work day[^4])?

So Wilkins launched a trial of a service he called MicroNET (intentionally held at arm’s length from the main CompuServe brand), and after a slow start it gradually proved a resounding success. Because of CompuServe’s national data network, most users only had to dial a local number to reach MicroNET, and thus avoided long-distance telephone charges, despite the fact that the actual computers they were connecting to resided in Ohio. His experiment having proved itself, Wilkins dropped the MicroNET name and folded the service under the CompuServe brand. Soon the company began to offer services tailored to the needs of microcomputer users, such as games and other software available for sale on-line.

But by far the most popular services were the communications platforms. For long-lived public content and discussions there were the forums, ranging across every topic from literature to medicine, from woodworking to pop music. Forums were generally left to their own devices by CompuServe, administered and moderated by ordinary users who took on the role of “sysops” for each forum. The other main communications platform was the “CB Simulator,” coded up over a weekend by Sandy Trevor, a CompuServe executive. Named after citizens band (CB) radio, a popular hobby at the time, it allowed users to have text-based chats in real time in dedicated channels, a model similar to the ‘talk’ programs offered on many time-sharing systems. Many dedicated users would hang out for hours on CB Simulator, shooting the breeze, making friends, or even finding lovers.

The Source

Hot on the heels of MicroNET – launching just eight days later in July of 1979 – came another on-line service for microcomputers that arrived at essentially the same place as Jeff Wilkins, despite starting from a very different angle. William (Bill) Von Meister, a son of German immigrants, whose father had helped establish zeppelin service between Germany and the U.S., was a serial entrepreneur. He no sooner got some new enterprise off the ground than he lost interest, or was forced out by disgruntled financial backers. He could not have been more different from the steady Wilkins. As of the mid-1970s, his greatest successes to date were in electronic communications – Telepost, a service which sent messages across the country electronically to the switching center nearest its recipient, and then covered the last mile via next-day mail; and TDX, which used computers to optimize the routing of telephone calls, reducing the cost of long-distance telephone service within large businesses.

Having, predictably, lost interest in TDX, Von Meister turned his enthusiasm in the late 1970s to Infocast, which he planned to launch in McLean, Virginia. In effect, it was an extension of the Telepost concept, except instead of using mail for the last mile of delivery, he would use the FM radio sideband (basically the same mechanism that’s used to transmit station identification, artist, and song title to the screens of modern radios) to deliver digital data to computer terminals. In particular, he planned to target highly distributed businesses with lots of locations that needed regular information updates from their central office, such as banks, insurance companies, and grocery stores.

Bill Von Meister

But what Von Meister really wanted to build was a national network to deliver data into homes, to terminals by the millions, not thousands. Convincing a business to spend $1000 on a special FM receiver and terminal was one thing; asking the same of consumers was quite another matter. So Von Meister went casting about for another means to deliver news, weather, and other information into homes; and he found it, in the hundreds of thousands of microcomputers that were sprouting like mushrooms in American offices and dens, in homes ready-equipped with telephone connections. He partnered with Jack Taub, a deep-pocketed and well-connected businessman who loved the concept and wanted to invest. Taub and Von Meister initially called the new service CompuCom, a mix of truncation and compounding typical for a computer company of the day, but later settled on a much more abstract and visionary name – The Source.

The main problem they faced was a lack of any technical infrastructure with which to deliver this vision. To get it, they partnered with two companies that, collectively, had the same resources as CompuServe – time-shared computers and a national data communications network, both of which sat mostly idle on evenings and weekends. Dialcom, headquartered across the Potomac in Silver Spring, Maryland, provided the computing muscle. Like CompuServe, it had begun in 1970 as a time-sharing service[^5], though by the end of the decade it offered many other digital services. Telenet, the packet-switched network spun off by Bolt, Beranek and Newman earlier in the decade, provided the communications infrastructure. By paying discounted rates to Dialcom and Telenet for off-peak service, Taub and Von Meister were able to offer access to The Source for $2.75 an hour on nights and weekends, after an initial $100 membership fee[^6].

Other than the pricing structure, the biggest difference between The Source and CompuServe was how they expected people to use their systems. The early services that CompuServe offered, such as email, the forums, CB, and the software exchange, generally assumed that users would form their own communities and build their own superstructures atop a basic hardware and software foundation, much like corporate users of time-sharing systems. Taub and Von Meister, however, had no cultural background in time-sharing. Their business plan centered around providing large amounts of information for the upscale, professional consumer: a New York Times database, United Press International news wires, stock information from Dow Jones, airline pricing, local restaurant guides, wine lists. Perhaps the single most telling detail was that Source users were welcomed by a menu of service options on log-in, CompuServe users by a command line.

In keeping with the personality differences between Wilkins and Von Meister, the launch of The Source was as grandiose as MicroNET’s was subtle, including a guest appearance by Isaac Asimov to announce the arrival of science fiction become science fact. Likewise in keeping with Von Meister’s personality and his past, his tenure at The Source would not be lengthy. The company immediately ran into financial difficulties due to his massive overspending. Taub and his brother had a large enough ownership share to oust Von Meister, and they did just that in October of 1979, just a few months after the launch party.

The Decline of Time-Sharing

The last company to enter the microcomputing market due to the logic of load factor was General Electric Information Services (GEIS), a division of the electrical engineering giant. Founded in the mid-1960s, when GE was still trying to compete in the computer manufacturing business, GEIS was conceived as a way to outflank IBM’s dominant position in computer sales. Why buy from them, GE pitched, when you can rent from us? The effort made little dent in IBM’s market share, but it made enough money to receive continued investment into the 1980s, by which point GEIS owned a worldwide data network and two major computing centers, one of them in Cleveland, Ohio, and the other in Europe.

In 1984, someone at GEIS noticed the growth of The Source and CompuServe (the latter had, by that time, over 100,000 users), and saw a way to put their computing centers to work in off-peak hours. To build their own consumer offering they recruited a CompuServe veteran, Bill Louden. Louden, disgruntled with managers from the corporate sales side who had begun muscling in on the increasingly lucrative consumer business, had jumped ship with a group of fellow defectors to try to build their own online service in Atlanta, called Georgia OnLine. They tried to turn their lack of access to a national data network into a virtue, offering services tailored to the local market, such as an events guide and classified ads. But the company went bust, and Louden was therefore very receptive to the offer from GEIS.

Louden called the new service GEnie, a backronym for General Electric Network for Information Exchange. It offered all of the services that The Source and CompuServe had by now made table stakes in the market – a chat application (CB simulator), bulletin boards, news, weather, and sports information.

GEnie was the last personal computing service born out of the time-sharing industry and the logic of the load factor. By the mid-1980s, the entire economic balance of power had begun to shift. As small computers proliferated in the millions, offering digital services to the mass market became a more and more enticing business in its own right, rather than simply a way to leverage existing capital. In the early days, The Source and CompuServe were tiny, with only a few thousand subscribers each in 1980. A decade later, millions of subscribers paid monthly for on-line services in the U.S. – with CompuServe at the forefront of the market, having absorbed its erstwhile rival, The Source. The same process also made time-sharing less attractive to businesses – why pay all the telecommunications costs and overhead of accessing a remote computer owned by someone else, when it was becoming so easy to equip your own office with powerful machines? Not until fiber optics drove the unit cost of communications into the ground would this logic reverse direction again.

Time-sharing companies were not the only route to the consumer market, however. Rather than starting with mainframe computers and looking for places to put them to work, others started from the appliance that millions already had in their homes, and looked for ways to connect it to a computer.




High-Pressure, Part I: The Western Steamboat

The next act of the steamboat lay in the west, on the waters of the Mississippi basin. The settler population of this vast region—Mark Twain wrote that “the area of its drainage-basin is as great as the combined areas of England, Wales, Scotland, Ireland, France, Spain, Portugal, Germany, Austria, Italy, and Turkey”—was already growing rapidly in the early 1800s, and inexpensive transport to and from its interior represented a tremendous economic opportunity.[1] Robert Livingston scored another of his political coups in 1811, when he secured monopoly rights for operating steamboats in the New Orleans Territory. (It did not hurt his cause that he himself had negotiated the Louisiana Purchase, nor that his brother Edward was New Orleans’ most prominent lawyer.) The Fulton-Livingston partnership built a workshop in Pittsburgh to build steamboats for the Mississippi trade. Pittsburgh’s central position at the confluence of Monangahela and Allegheny made it a key commercial hub in the trans-Appalachian interior and a major boat-building center. Manufactures made there could be distributed up and down the rivers far more easily than those coming over the mountains from the coast, and so factories for making cloth, hats, nails, and other goods began to sprout up there as well.[2] The confluence of river-based commerce, boat-building and workshop know-how made Pittsburgh the natural wellspring for western steamboating. Figure 1: The Fulton-Livingston New Orleans. Note the shape of the hull, which resembles that of a typical ocean-going boat. From Pittsburgh, The Fulton-Livingston boats could ride downstream to New Orleans without touching the ocean. The New Orleans, the first boat launched by the partners, went into regular service from New Orleans to Natchez (about 175 miles to the north) in 1812, but their designs—upscaled versions of their Hudson River boats—fared poorly in the shallow, turbulent waters of the Mississippi. They also suffered sheer bad luck: the New Orleans grounded fatally in 1814, the aptly-named Vesuvius burnt to the waterline in 1816 and had to be rebuilt. The conquest of the Mississippi by steam power would fall to other men, and to a new technology: high-pressure steam. Strong Steam A typical Boulton & Watt condensing engine was designed to operate with steam below the pressure of the atmosphere (about fifteen pounds per square inch (psi)). But the possibility of creating much higher pressures by heating steam well above the boiling point was known for well over a century. The use of so-called “strong steam” dated back at least to Denis Papin’s steam digester from the 1670s. It even had been used to do work, in pumping engines based on Thomas Savery’s design from the early 1700s, which used steam pressure to push water up a pipe. But engine-builders did not use it widely in piston engines until well into the nineteenth century. Part of the reason was the suppressive influence of the great James Watt. Watt knew that expanding high-pressure steam could drive a piston, and laid out plans for high-pressure engines as early as 1769, in a letter to a friend: I intend in many cases to employ the expansive force of steam to press on the piston, or whatever is used instead of one, in the same manner as the weight of the atmosphere is now employed in common fire-engines. 
In some cases I intend to use both the condenser and this force of steam, so that the powers of these engines will as much exceed those pressed only by the air, as the expansive power of the steam is greater than the weight of the atmosphere. In other cases, when plenty of cold water cannot be had, I intend to work the engines by the force of steam only, and to discharge it into the air by proper outlets after it has done its office.[3] But he continued to rely on the vacuum created by his condenser, and never built an engine worked “by the force of steam only.” He went out of his way to ensure that no one else did either, deprecating the use of strong steam at every opportunity. There was one obvious reason why: high-pressure steam was dangerous. The problem was not the working machinery of the engine but the boiler, which was apt to explode, spewing shrapnel and superheated steam that could kill anyone nearby. Papin had added a safety valve to his digester for exactly this reason. Savery steam pumps were also notorious for their explosive tendencies. Some have imputed a baser motive for Watt’s intransigence: a desire to protect his own business from high-pressure competition. In truth, though, high-pressure boilers did remain dangerous, and would kill many people throughout the nineteenth century. Unfortunately, the best material for building a strong boiler was the most difficult from which to actually construct one. By the beginning of the nineteenth century copper, lead, wrought iron, and cast iron had all been tried as boiler materials, in various shapes and combinations. Copper and lead were soft, cast iron was hard, but brittle. Wrought iron clearly stood out as the toughest and most resilient option, but it could only be made in ingots or bars, which the prospective boilermaker would then have to flatten and form into small plates, many of which would have to be joined to make a complete boiler. Advances in two fields in the decades around 1800 resolved the difficulties of wrought iron. The first was metallurgical. In the late eighteenth century, Henry Cort invented the “puddling” process of melting and stirring iron to oxidize out the carbon, producing larger quantities of wrought iron that could be rolled out into plates of up to about five feet long and a foot wide.[4] These larger plates still had to be riveted together, a tedious and error-prone process, that produced leaky joints. Everything from rope fibers to oatmeal was tried as a caulking material. To make reliable, steam-tight joints required advances in machine tooling. This was a cutting-edge field at the time (pun intended). For example, for most of history craftsmen cut or filed screws by hand. The resulting lack of consistency meant that many of the uses of screws that we take for granted were unknown: one could not cut 100 nuts and 100 bolts, for example, and then expect to thread any pair of them together. Only in the last quarter of the eighteenth centuries did inventors craft sufficiently precise screw-cutting lathes to make it possible to repeatedly produce screws with the same length and pitch. Careful use of tooling similarly made it possible to bore holes of consistent sizes in wrought iron plates, and then manufacture consistently-sized rivets to fit into them, without the need to hand-fit rivets to holes.[5] One could name a few outstanding early contributors to the improvement of machine tooling in the first decades of the nineteenth century Arthur Woolf in Cornwall, or John Hall at the U.S. 
Harper’s Ferry Armory. But the steady development of improvements in boilers and other steam engine parts also involved the collective action of thousands of handcraft workers. Accustomed to building liquor stills, clocks, or scientific instruments, they gradually developed the techniques and rules of thumb needed for precision metalworking for large machines.[6] These changes did not impress Watt, and he stood by his anti-high-pressure position until his death in 1819. Two men would lead the way in rebelling against his strictures. The first appeared in the United States, far from Watt’s zone of influence, and paved the way for the conquest of the Western waters. Oliver Evans Oliver Evans was born in Delaware in 1755. He first honed his mechanical skills as an apprentice wheelwright. Around 1783, he began constructing a flour mill with his brothers on Red Clay Creek in northern Delaware. Hezekiah Niles, a boy of six, lived nearby. Niles would become the editor of the most famous magazine in America, from which post he later had occasion to recount that “[m]y earliest recollections pointed him out to me as a person, in the language of the day, that ‘would never be worth any thing, because he was always spending his time on some contrivance or another…’”[7] Two great “contrivances” dominated Evans’ adult life. The challenges of the mill work at Red Clay Creek led to his first great idea:  an automated flour mill. He eliminated most of the human labor from the mill by linking together the grain-processing steps with a series of water-powered machines (the most famous and delightfully named being the “hopper boy”). Though fascinating in its own right, for the purposes of our story the automated mill only matters in so far as it generated the wealth which allowed him to invest in his second great idea: an engine driven by high-pressure steam. Figure 2: Evans’ automated flour mill. In 1795, Evans published an account of his automatic mill entitled The Young Mill-Wright and Miller’s Guide. Something of his personality can be gleaned from the title of his 1805 sequel on the steam engine: The Abortion of the Young Steam Engineer’s Guide. A bill to extend the patent on his automatic flour mill failed to pass Congress in 1805, and so he published his Abortion as a dramatic swoon, a loud declaration that, in response this rebuff, he would be taking his ball and going home: His [i.e., Evans’] plans have thus proved abortive, all his fair prospects are blasted, and he must suppress a strong propensity for making new and useful inventions and improvements; although, as he believes, they might soon have been worth the labour of one hundred thousand men.[8] Of course, despite these dour mutterings, he failed entirely to suppress his “strong propensity,” in fact he was in the very midst of launching new steam engine ventures at this time. Like so many other early steam inventors, Evans’ interest in steam began with a dream of a self-propelled carriage. 
The first tangible evidence that we have of his interest in steam power comes from patents he filed in 1787 which included mention of a “steam-carriage, so constructed to move by the power of steam and the pressure of the atmosphere, for the purpose of conveying burdens without the aid of animal force.” The mention of “the pressure of the atmosphere” is interesting—he may have still been thinking of a low-pressure Watt-style engine at this point.[9] By 1802, however, Evans had a true high-pressure engine of about five horsepower operating at his workshop at Ninth and Market in Philadelphia. He had established himself in that city in 1792, the better to promote his milling inventions and millwright services. He attracted crowds to his shop with his demonstration of the engine at work: driving a screw mill to pulverize plaster, or cutting slabs of marble with a saw. Bands of iron held reenforcing wooden slats against the outside of the boiler, like the rim of a cartwheel or the hoops of a barrel. This curious hallmark testified to Evans’ background as a millwright and wheelwright [10] The boiler, of course, had to be as strong as possible to contain the superheated steam, and Evans’ later designs made improvements in this area. Rather than the “wagon” boiler favored by Watt (shaped like a Conestoga wagon or a stereotypical construction worker’s lunchbox), he used a cylinder. A spherical boiler being infeasible to make or use, this shape distributed the force of the steam pressure as evenly as practicable over the surface. In fact, Evans’ boiler consisted of two cylinders in an elongated donut shape, because rather than placing the furnace below the boiler, he placed it inside, to maximize the surface area of water exposed to the hot air. By the time of the Steam Engineer’s Guide, he no longer used copper braced with wood, he now recommended the “best” (i.e. wrought) iron “rolled in large sheets and strongly riveted together. …As cast iron is liable to crack with the heat, it is not to be trusted immediately in contact with the fire.”[11] Figure 3: Evan’s 1812 design, which he called the Columbian Engine to honor the young United States on the outbreak of the War of 1812. Note the flue carrying heat through the center of the boiler, the riveted wrought iron plates of the boiler, and the dainty proportions of the cylinder, in comparison to that of a Newcomen or Watt engine. Pictured in the corner is the Orukter Amphibolos. Evans was convinced of the superiority of his high-pressure design because of a rule of thumb that he had gleaned from the article “Steam” in the American edition of the Encylopedia Britannica: “…whatever the present temperature, an increase of 30 degrees doubles the elasticity and the bulk of water vapor.”[12] From this Evans concluded that heating steam to twice the boiling point (from 210 degrees to 420), would increase its elastic force by 128 times (since a 210 degree increase in temperature would make seven doublings). This massive increase in power would require only twice the fuel (to double the heat of the steam). None of this was correct, but it would not be the first or last time that faulty science would produce useful technology.[13] Nonetheless, the high-pressure engine did have very real advantages. Because the power generated by an engine was proportional to the area of the piston times the pressure exerted on that piston, for any given horsepower, a high-pressure engine could be made much smaller than its low-pressure equivalent. 
A high-pressure engine also did not require a condenser: it could vent the spent steam directly into the atmosphere. These factors made Evans’ engines smaller, lighter, and simpler and less expensive to build. A non-condensing high-pressure engine of twenty-four horsepower weighed half a ton and had a cylinder nine-inches across. A traditional Boulton & Watt style engine of the same power had a cylinder three times as wide and weighed four times as much overall.[14]   Such advantages in size and weight would count doubly for an engine used in a vehicle, i.e. an engine that had to haul itself around. In 1804 Evans sold an engine that was intended to drive a New Orleans steamboat, but it ended up in a sawmill instead. This event could serve as a metaphor for his relationship to steam transportation. He declared in his Steam Engineer’s Guide that: The navigation of the river Mississippi, by steam engines, on the principles here laid down, has for many years been a favourite object with the author and among the fondest wishes of his heart. He has used many endeavours to produce a conviction of its practicability, and never had a doubt of the sufficiency of the power.[15]   But steam navigation never got much more than his fondest wishes. Unlike a Fitch or a Rumsey, the desire to make a steamboat did not dominate his dreams and waking hours alike. By 1805, he was a well-established man of middle years. If he had ever possessed the Tookish spirit required for riverboat adventures, he had since lost it. He had already given up on the idea of a steam carriage, after failing to sell the Lancaster Turnpike Company on the idea in 1801. His most grandiosely named project, the Orukter Amphibolos, may briefly have run on wheels en route to serve as a steam dredge in the Philadelphia harbor. If it functioned at all, though, it was by no means a practical vehicle, and it had no sequel. Evans’ attention had shifted to industrial power, where the clearest financial opportunity lay—an opportunity that could be seized without leaving Philadelphia. Despite Evans’ calculations (erroneous, as we have said), a non-condensing high-pressure engine was somewhat less fuel-efficient than an equivalent Watt engine, not more. But because of its size and simplicity, it could be built at half the cost, and transported more cheaply, too. In time, therefore, the Evans-style engine became very popular as a mill or factory engine in the capital- and transportation-poor (but fuel-rich) trans-Appalachian United States.[16] In 1806, Evans began construction on his “Mars Works” in Philadelphia, to serve market for engines and other equipment. Evans engines sprouted up at sawmills, flour mills, paper factories, and other industrial enterprises across the West. Then, in 1811, he organized the Pittsburgh Steam Engine Company, operated by his twenty-three-year-old son George, to reduce transportation costs for engines to be erected west of the Alleghenies.[17] It was around that nexus of Pittsburgh that Evans’ inventions would find the people with the passion to put them to work, at last, on the rivers. The Rise of the Western Steamboat The mature Mississippi paddle steamer differed from its Eastern antecedents in two main respects. First, in its overall shape and layout: a roughly rectangular hull with a shallow draft, layer cake decks, and machinery above the water, not under it. This design was better adapted to an environment where snags and shallows presented a much greater hazard than waves and high winds. 
Second, in the use of a high-pressure engine, or engines, with a cylinder mounted horizontally along the deck.

Many historical accounts attribute both of these essential developments to a keelboatman named Henry Miller Shreve. Economic historian Louis Hunter effectively demolished this legend in the 1940s, but more recent writers (for example Shreve’s 1984 biographer, Edith McCall) have continued to perpetuate it. In fact, no one can say with certainty where most of these features came from because no one bothered to document their introduction. As Hunter wrote:

From the appearance of the first crude steam vessels on the western waters to the emergence of the fully evolved river steamboat a generation later, we know astonishingly little of the actual course of technological events and we can follow what took place only in its broad outlines. The development of the western steamboat proceeded largely outside the framework of the patent system and in a haze of anonymity.[18]

Some documents came to light in the 1990s, however, that have burned away some of the “haze” with respect to the introduction of high-pressure engines.[19] The papers of Daniel French reveal that the key events happened in a now-obscure place called Brownsville (originally known as Redstone), about forty miles up the Monongahela from that vital center of western commerce, Pittsburgh. Brownsville was the point where anyone heading west on the main trail over the Alleghenies—which later became part of the National Road—would first reach navigable waters in the Mississippi basin.

Henry Shreve grew up not far from this spot. Born in 1785 to a father who had served as a Colonel in the Revolutionary War, he was raised on a farm near Brownsville on land leased from Washington: one of the general’s many western land-development schemes.[20] Henry fell in love with the river life, and by his early twenties had established himself with his own keelboat operating out of Pittsburgh. He made his early fortune off the fur trade boom in St. Louis, which took off after Lewis and Clark returned with reports of widespread beaver activity on the Missouri River.[21]

In the fall of 1812, a newcomer named Daniel French arrived in Shreve’s neighborhood—a newcomer who already had experience building steam watercraft, powered by engines based on the designs of Oliver Evans. French was born in Connecticut in 1770, and started planning to build steamboats in his early 20s, perhaps inspired by the work of Samuel Morey, who operated upstream of him on the Connecticut River. But, discouraged from his plans by the local authorities, French turned his inventive energies elsewhere for a time. He met and worked with Evans in Washington, D.C., to lobby Congress to extend the length of patent grants, but did not return to steamboats until Fulton’s 1807 triumph re-energized him. At this point he adopted Evans’ high-pressure engine idea, but added his own innovation, an oscillating cylinder that pivoted on trunnions as the engine worked. This allowed the piston shaft to be attached to the stern wheel with a simple (and light) crank, without any flywheel or gearing. The small size of the high-pressure cylinder made it feasible to put the cylinder in motion. In 1810, a steam ferry he designed, for a route from Jersey City to Manhattan, successfully crossed and recrossed the North (Hudson) River at about six miles per hour.
Nonetheless, Fulton, who still held a New York state monopoly, got the contract from the ferry operators.[22] French moved to Philadelphia and tried again, constructing the steam ferry Rebecca to carry passengers across the Delaware. She evidently did not produce great profits, because a frustrated French moved west again in the fall of 1812, to establish a steam-engine-building business at Brownsville.[23] His experience with building high-pressure steamboats—simple, relatively low-cost, and powerful—had arrived at the place that would benefit most from those advantages, a place, moreover, where the Fulton-Livingston interests held no legal monopoly.

News about the lucrative profits of the New Orleans on the Natchez run had begun to trickle back up the rivers. This was sufficient to convince the Brownsville notables—Shreve among them—to put up $11,000 to form the Monongahela and Ohio Steam Boat Company in 1813, with French as their engineer. French had their first boat, Enterprise, ready by the spring of 1814. Her exact characteristics are not documented, but based on the fragmentary evidence, she seems in effect to have been a motorized keelboat: 60-80’ long, about 30 tons, and equipped with a twenty-horsepower engine. The power train matched that of French’s 1810 steam ferry, trunnions and all.[24]

The Enterprise spent the summer trading along the Ohio between Pittsburgh and Louisville. Then, in December, she headed south with a load of supplies to aid in the defense of New Orleans. For this important voyage into waters mostly unknown to the Brownsville circle, they called on the experienced keelboatman, Henry Shreve. Andrew Jackson had declared martial law, and kept Shreve and the Enterprise on military duty in New Orleans. With Jackson’s aid, Shreve dodged the legal snares laid for him by the Fulton-Livingston group to protect their New Orleans monopoly. Then in May, after the armistice, he brought the Enterprise on a 2,000-mile ascent back to Brownsville, the first steamboat ever to make such a journey.

Shreve became an instant celebrity. He had contributed to a stunning defeat for the British at New Orleans and carried out an unprecedented voyage. Moreover, he had confounded the monopolists: their attempt to assert exclusive rights over the commons of the river was deeply unpopular west of the Appalachians. Shreve capitalized on his new-found fame to raise money for his own steamboat company in Wheeling, Virginia. The Ohio at Wheeling ran much deeper than the Monongahela at Brownsville, and Shreve would put this depth to use: he had ambitions to put a French engine into a far larger boat than the Enterprise.

Spurring French to scale up his design was probably Shreve’s largest contribution to the evolution of the western steamboat. French dared not try to repeat his oscillating cylinder trick on the larger cylinder that would drive Shreve’s 100-horsepower, 400-ton two-decker. Instead, he fixed the cylinder horizontally to the hull, and then attached the piston rod to a connecting rod, or “pitman,” that drove the crankshaft of the stern paddle wheel. He thus transferred the oscillating motion from the piston to the pitman, while keeping the overall design simple and relatively low cost.[25] Shreve called his steamer Washington, after his father’s (and his own) hero. Her maiden voyage in 1817, however, was far from heroic.
Evans would have assured French that the high-pressure engine carried little risk: as he wrote in the Steam Engineer’s Guide, “we know how to construct [boilers] with a proportionate strength, to enable us to work with perfect safety.”[26] Yet on her first trip down the Ohio, with twenty-one passengers aboard, the Washington’s boiler exploded, killing seven passengers and three crew. The blast threw Shreve himself into the river, but he did not suffer serious harm.[27] Ironically, the only steamboat built by the Evans family, the Constitution (née Oliver Evans), suffered a similar fate in the same year, exploding and killing eleven on board.

Despite Evans’ confidence in their safety, boiler accidents continued to bedevil steamboats for decades. Though the total number killed was not enormous—about 1500 dead across all Western rivers up to 1848—each event provided an exceptionally grisly spectacle. Consider this lurid account of the explosion of the Constitution:

One man had been completely submerged in the boiling liquid which inundated the cabin, and in his removal to the deck, the skin had separated from the entire surface of his body. The unfortunate wretch was literally boiled alive, yet although his flesh parted from his bones, and his agonies were most intense, he survived and retained all his consciousness for several hours. Another passenger was found lying aft of the wheel with an arm and a leg blown off, and as no surgical aid could be rendered him, death from loss of blood soon ended his sufferings. Miss C. Butler, of Massachusetts, was so badly scalded, that, after lingering in unspeakable agony for three hours, death came to her relief.[28]

In response to continued public outcry for an end to such horrors, Congress eventually stepped in, passing acts to improve steamboat safety in 1838 and 1852.

Meanwhile, Shreve was not deterred by the setback. The Washington itself did not suffer grievous damage, so he corrected a fault in the safety valves and tried again. Passengers were understandably reluctant for an encore performance, but after the Washington made national news in 1817 with a freight passage up the river from New Orleans in just twenty-five days, the public quickly forgot and forgave. A few days later, a judge in New Orleans refused to consider a suit by the Fulton-Livingston interests against Shreve, effectively nullifying their monopoly.[29] Now all comers knew that steamboats could ply the Mississippi successfully, and without risk of any legal action. The age of the western steamboat opened in earnest.

By 1820, sixty-nine steamboats could be found on western rivers, and 187 a decade after that.[30] Builders took a variety of approaches to powering these boats: low-pressure engines, engines with vertical cylinders, engines with rocking beams or fly wheels to drive the paddles.
Not until the 1830s did a dominant pattern take hold, but when it did, it was that of the Evans/French/Shreve lineage, as found on the Washington: a high-pressure engine with a horizontal cylinder driving the wheel through an oscillating connecting rod.[31]

Figure 4: A Tennessee river steamboat from the 1860s. The distinctive features include a flat-bottomed hull with very little freeboard, a superstructure to hold passengers and crew, and twin smokestacks. The western steamboat had achieved this basic form by the 1830s and maintained it into the twentieth century.

The Legacy of the Western Steamboat

The Western steamboat was a product of environmental factors that favored the adoption of a shallow-drafted boat with a relatively inefficient but simple and powerful engine: fast, shallow rivers; abundant wood for fuel along the shores of those rivers; and the geographic configuration of the United States after the Louisiana Purchase, with a high ridge of mountains separating the coast from a massive navigable inland watershed. But, Escher-like, the steamboat then looped back around to reshape the environment from which it had emerged. Just as steam-powered factories had, steam transport flattened out the cycles of nature, bulldozing the hills and valleys of time and space. Before the Washington’s journey, the shallow grade that distinguished upstream from downstream dominated the life of any traveler or trader on the Mississippi. Now goods and people could move easily upriver, in defiance of the dictates of gravity.[32] By the 1840s, steamboats were navigating well inland on other rivers of the West as well: up the Tombigbee, for example, over 200 miles inland to Columbus, Mississippi.[33]

What steamboats alone could not do to turn the western waters into turnpike roads, Shreve and others would impose on them through brute force. Steamboats frequently sank or took major damage from snags or “sawyers”: partially submerged tree limbs or trunks that obstructed the waterways. In some places, vast masses of driftwood choked the entire river. Beyond Natchitoches, the Red River was obstructed for miles by an astonishing tangle of such logs known as the Great Raft.[34]

Figure 5: A portrait of Shreve of unknown date, likely the 1840s. The scene outside the window reveals one of his snagboats; displaying the subject’s invention in this way was a frequently used device in nineteenth-century portraits of inventors.

Not only commerce was at stake in clearing the waterways of such obstructions; steamboats would be vital to any future war in the West. As early as 1814, Andrew Jackson had put Shreve’s Enterprise to good use, ferrying supplies and troops around the Mississippi delta region.[35] With the encouragement of the Monroe administration, therefore, Congress stepped in with a bill in 1824 to fund the Army’s Corps of Engineers to improve the western rivers.
Shreve was named superintendent of this effort, and secured federal funds to build snagboats such as the Heliopolis: twin-hulled behemoths designed to drive a snag between their hulls and then winch it up onto the middle deck and saw it down to size. Heliopolis and its sister ships successfully cleared large stretches of the Ohio and Mississippi.[36] In 1833, Shreve embarked on the last great venture of his life: an assault on the Great Raft itself. It took six years and a flotilla of rafts, keelboats and steamboats to complete the job, including a new snagboat, Eradicator, built specially for the task.[37]

The clearing of waterways, technical advancements in steamboat design, and other improvements (such as the establishment of fuel depots, so that time was not wasted stopping to gather wood) combined to drive travel times along the rivers down rapidly. In 1819, the James Ross completed the New Orleans to Louisville passage in sixteen-and-a-half days. In 1824 the President covered the same distance in ten-and-a-half days, and in 1833 the Tuscarora clocked a run of seven days, six hours. These ever-decreasing record times translated directly into ever-decreasing shipping rates. Early steamboats charged upstream rates equivalent to those levied by their keelboat competitors: about five dollars per hundred pounds carried from New Orleans to Louisville. By the early 1830s this had dropped to an average of about sixty cents per 100 pounds, and by the 1840s as low as fifteen cents.[38]

By decreasing the cost of river trade, the steamboat cemented the economic preeminence of New Orleans. Cotton, sugar, and other agricultural goods (much of it produced by slave labor) flowed downriver to the port, then out to the wider world; manufactured goods and luxuries like coffee arrived from the ocean trade and were carried upriver; and human traffic, bought and sold at the massive New Orleans slave market, flowed in both directions.[39] In 1820 a steamboat arrived in New Orleans about every other day. By 1840 the city averaged over four arrivals a day; by 1850, nearly eight.[40] The population of the city burgeoned to over 100,000 by 1840, making it the third-largest in the country. Chicago, its big-shouldered days still ahead of it, remained a frontier outpost by comparison, with only 5,000 residents.

Figure 6: A Currier & Ives lithograph of the New Orleans levee. This represents a scene from the late nineteenth century, well past the prime of New Orleans’ economic dominance, but still shows a port bustling with steamboats.

But both New Orleans and the steamboat soon lost their dominance over the western economy. As Mark Twain wrote:

Mississippi steamboating was born about 1812; at the end of thirty years, it had grown to mighty proportions; and in less than thirty more, it was dead! A strangely short life for so majestic a creature.[41]

Several forces connived in the murder of the Mississippi steamboat, but a close cousin lurked among the conspirators: another form of transportation enabled by the harnessing of high-pressure steam. The story of the locomotive takes us back to Britain, and the dawn of the nineteenth century.

Internet Ascendant, Part 1: Exponential Growth

In 1990, John Quarterman, a networking consultant and UNIX expert, published a comprehensive survey of the state of computer networks. In a brief section on the potential future for computing, he predicted the appearance of a single global network for “electronic mail, conferencing, file transfer, and remote login, just as there is now one worldwide telephone network and one worldwide postal system.” But he did not assign any special significance to the Internet in this process. Instead, he assumed that the worldwide net would “almost certainly be run by government PTTs”, except in the United States, “where it will be run by the regional Bell Operating Companies and the long-distance carriers.”

It will be the purpose of this post to explain how, in a sudden eruption of exponential growth, the Internet so rudely upset these perfectly natural assumptions.

Passing the Torch

The first crucial event in the creation of the modern Internet came in the early 1980s, when the Defense Communications Agency (DCA) decided to split ARPANET in two. The DCA had taken control of the network in 1975. By that time, it was clear that it made little sense for the ARPA Information Processing Techniques Office (IPTO), a blue-sky research organization, to be involved in running a network that was being used for participants’ daily communications, not for research about communication. ARPA tried and failed to hand off the network to private control by AT&T. The DCA, responsible for the military’s communication systems, seemed the next best choice.

For the first several years of this new arrangement, ARPANET prospered under a regime of benign neglect. However, by the early 1980s, the Department of Defense’s aging data communications infrastructure desperately needed an upgrade. The intended replacement, AUTODIN II, which DCA had contracted with Western Union to construct, was foundering. So DCA’s leaders appointed Colonel Heidi Heiden to come up with an alternative. He proposed to use the packet-switching technology that DCA already had in hand, in the form of ARPANET, as the basis for the new defense data network.

But there was an obvious problem with sending military data over ARPANET – it was rife with long-haired academics, including some who were actively hostile to any kind of computer security or secrecy, such as Richard Stallman and his fellow hackers at the MIT Artificial Intelligence Lab. Heiden’s solution was to bifurcate the network. He would leave the academic researchers funded by ARPA on ARPANET, while splitting the computers used at national defense sites off onto a newly formed network called MILNET. This act of mitosis had two important consequences. First, by decoupling the militarized and non-militarized parts of the network, it was the initial step toward transferring the Internet to civilian, and eventually private, control. Second, it provided the proving ground for the seminal technology of the Internet, the TCP/IP protocol, which had first been conceived half a decade before. DCA required all the ARPANET nodes to switch over to TCP/IP from the legacy protocol by the start of 1983. Few networks used TCP/IP at that point in time, but now it would link the two networks of the proto-Internet, allowing message traffic to flow between research sites and defense sites when necessary. To further ensure the long-term viability of TCP/IP for military data networks, Heiden also established a $20 million fund to pay computer manufacturers to write TCP/IP software for their systems (1).
This first step in the gradual transfer of the Internet from the military to private control provides as good an opportunity as any to bid farewell to ARPA and the IPTO. Its funding and influence, under the leadership of J.C.R. Licklider, Ivan Sutherland, and Robert Taylor, had produced, directly or indirectly, almost all of the early developments in interactive computing and networking. The establishment of the TCP/IP standard in the mid-1970s, however, proved to be the last time it played a central role in the history of computing (2).

The Vietnam War provided the decisive catalyst for this loss of influence. Most research scientists had embraced the Cold War defense-sponsored research regime as part of a righteous cause to defend democracy. But many who came of age in the 1950s and 1960s lost faith in the military and its aims due to the quagmire in Vietnam. That included Taylor himself, who quit IPTO in 1969, taking his ideas and his connections to Xerox PARC. Likewise, the Democrat-controlled Congress, concerned about the corrupting influence of military money on basic scientific research, passed amendments requiring defense money to be directed to military applications. ARPA reflected this change in funding culture in 1972 by renaming itself DARPA, the Defense Advanced Research Projects Agency.

And so the torch passed to the civilian National Science Foundation (NSF). By 1980, the NSF accounted for about half of federal computer science research spending in the U.S., some $20 million (3). Much of that funding would soon be directed toward a new national computing network, the NSFNET.

NSFNET

In the early 1980s, Larry Smarr, a physicist at the University of Illinois, visited the Max Planck Institute in Munich, which hosted a Cray supercomputer that it made readily available to European researchers. Frustrated at the lack of equivalent resources for scientists in the U.S., he proposed that the NSF fund a series of supercomputing centers across the country (4). The organization responded to Smarr and other researchers with similar complaints by creating the Office of Advanced Scientific Computing in 1984, which went on to fund a total of five such centers, with a total five-year budget of $42 million. They stretched from Cornell in the northeast of the country to San Diego in the southwest. In between, Smarr’s own university (Illinois) received its own center, the National Center for Supercomputing Applications (NCSA).

But these centers alone would only do so much to improve access to computer power in the U.S. Using the computers would still be difficult for users not local to any of the five sites, likely requiring a semester or summer fellowship to fund a long-term visit. And so NSF decided to also build a computer network. History was repeating itself – making it possible to share powerful computing resources with the research community was exactly what Taylor had in mind when he pushed for the creation of ARPANET back in the late 1960s. The NSF would provide a backbone that would span the continent by linking the core supercomputer sites, then regional nets would connect to those sites to bring access to other universities and academic labs. Here NSF could take advantage of the support for the Internet protocols that Heiden had seeded, by delegating the responsibility of creating those regional networks to local academic communities.
Initially, the NSF delegated the setup and operation of the network to the NCSA at the University of Illinois, the source of the original proposal for a national supercomputer program. The NCSA, in turn, leased the same type of 56 kilobit-per-second lines that ARPANET had used since 1969, and began operating the network in 1986. But traffic quickly flooded those connections (5). Again mirroring the history of ARPANET, it quickly became obvious that the primary function of the net would be communications among those with network access, not the sharing of computer hardware among scientists. One can certainly excuse the founders of ARPANET for not knowing that this would happen, but how could the same pattern repeat itself almost two decades later? One possibility is that it’s much easier to justify a seven-figure grant to support the use of eight figures’ worth of computing power than to justify dedicating the same sums to the apparently frivolous purpose of letting people send email to one another. This is not to say that there was willful deception on the part of the NSF, but that just as the anthropic principle posits that the physical constants of the universe are what they are because otherwise we couldn’t exist to observe them, so no publicly-funded computer network could have existed for me to write about without a somewhat spurious justification.

Now convinced that the network itself was at least as valuable as the supercomputers that had justified its existence, NSF called on outside help to upgrade the backbone with 1.5 megabit-per-second T1 lines (6). Merit Network, Inc., won the contract, in conjunction with MCI and IBM, securing $58 million in NSF funding over an initial five-year grant to build and operate the network. MCI provided the communications infrastructure, IBM the computing hardware and software for the routers. Merit, a non-profit that ran a computer network linking the University of Michigan campuses (7), brought experience operating an academic computer network, and gave the whole partnership a collegiate veneer that made it more palatable to NSF and the academics who used NSFNET. Nonetheless, the transfer of operations from NCSA to Merit was a clear first step towards privatization.

Traffic flowed through Merit’s backbone from almost a dozen regional networks, from the New York State Education and Research Network (NYSERNet), interconnected at Cornell in Ithaca, to the California Education and Research Federation Network (CERFNet – no relation to Vint), which interconnected at San Diego. Each of these regional networks also internetted with countless local campus networks, as Unix machines appeared by the hundreds in college labs and faculty offices. This federated network of networks became the seed crystal of the modern Internet. ARPANET had connected only well-funded computer researchers at elite academic sites, but by 1990 almost anyone in post-secondary education in the U.S. – faculty or student – could get online. There, via packets bouncing from node to node – across their local Ethernet, up into the regional net, then leaping vast distances at light speed via the NSFNET backbone – they could exchange email or pontificate on Usenet with their counterparts across the country. With far more academic sites now reachable via NSFNET than ARPANET, the DCA decommissioned that now-outmoded network in 1990, fully removing the Department of Defense from involvement in civilian networking.
Takeoff

Throughout this entire period, the number of computers on NSFNET and its affiliated networks – which we may now call the Internet (8) – was roughly doubling each year: 28,000 in December 1987, 56,000 in October 1988, 159,000 in October 1989, and so on. It would continue to do so well into the mid-1990s, at which point the rate slowed only slightly (9). The number of networks on the Internet grew at a similar rate – from 170 in July of 1988 to 3,500 in the fall of 1991. The academic community being an international one, many of those networks were overseas, starting with connections to France and Canada in 1988. By 1995, the Internet was accessible from nearly 100 countries, from Algeria to Vietnam (10). Though it’s much easier to count the number of machines and networks than the number of actual users, reasonable estimates put that latter figure at 10-20 million by the end of 1994 (11).

Any historical explanation for this tremendous growth is challenging to defend in the absence of detailed data about who was using the Internet for what, at what time. A handful of anecdotes can hardly suffice to account for the 350,000 computers added to the Internet between January 1991 and January 1992, or the 600,000 in the year after that, or the 1.1 million in the year after that. Yet I will dare to venture onto this epistemically shaky ground, and assert that three overlapping waves of users account for the explosion of the Internet, each with their own reasons for joining, but all drawn by the inexorable logic of Metcalfe’s Law, which indicates that the value (and thus the attractive force) of a network increases with the square of its number of participants.

First came the academic users. The NSF had intentionally spread computing to as many universities as possible. Now every academic wanted to be on board, because that’s where the other academics were. To be unreachable by Internet email, to be unable to see and participate in the latest discussions on Usenet, was to risk missing an important conference announcement, a chance to find a mentor, cutting-edge pre-publication research, and more. Under this pressure to be part of the online academic conversation, universities quickly joined the regional networks that could connect them to the NSFNET backbone. NEARNET, for example, which covered the six states of the New England region, grew to over 200 members by the early 1990s. At the same time, access began to trickle down from faculty and graduate students to the much larger undergraduate population. By 1993, roughly 70% of the freshman class at Harvard had edu email accounts. By that time the Internet had also become physically ubiquitous at Harvard and its peer institutions, which went to considerable expense to wire Ethernet into not just every academic building, but even the undergrad dormitories (12). It was surely not long before the first student stumbled into their room after a night of excess, slumped into a chair, and laboriously pecked out an electronic message that they would regret in the morning, whether a confession of love or a vindictive harangue.

In the next wave, the business users arrived, starting around 1990. As of that year, 1,151 .com domains had been registered. The earliest commercial participants came from the research departments of high-tech companies (Bell Labs, Xerox, IBM, and so on). They, in effect, used the network in an academic capacity. Their employers’ business communications went over other networks.
By 1994, however, over 60,000 .com domain names existed, and the business of making money on the Internet had begun in earnest (13). As the 1980s waned, computers were becoming a part of everyday life at work and home in the U.S., and the importance of a digital presence to any substantial business became obvious. Email offered easy and extremely fast communication with co-workers, clients, and vendors. Mailing lists and Usenet provided both new ways of keeping up to date with a professional community and new forms of very cheap advertising to a generally affluent set of users. A wide variety of free databases could be accessed via the Internet – legal, medical, financial, and political. New graduates arriving in the workforce from fully-wired campuses also became proselytes for the Internet at their employers. It offered access to a much larger set of users than any single commercial service (Metcalfe’s Law again), and once you paid a monthly fee for access to the net, almost everything else was free, unlike the marginal hourly and per-message fees charged by CompuServe and its equivalents. Early entrants to the Internet marketplace included mail-order software companies like The Corner Store of Litchfield, Connecticut, which advertised in Usenet discussion groups, and The Online Bookstore, an electronic books seller founded over a decade before the Kindle by a former editor at Little, Brown (14).

Finally came the third wave of growth, the arrival of ordinary consumers, who began to access the Internet in large numbers in the mid-1990s. By this point Metcalfe’s Law was operating in overdrive. Increasingly, to be online meant to be on the Internet. Unable to afford T1 lines to their homes, consumers almost always accessed the Internet over a dial-up modem. We have already seen part of that story, with the gradual transformation of commercial BBSes into commercial Internet Service Providers (ISPs). This change benefited both the users (whose digital swimming pool suddenly grew into an ocean) and the BBSes themselves, which could run a much simpler business as intermediaries between the phone system and a T1 on-ramp to the Internet, without maintaining their own services.

Larger online services followed a similar pattern. By 1993, all of the major national-scale services in the U.S. – Prodigy, CompuServe, GEnie and upstart America Online (AOL) – offered their 3.5 million combined subscribers the ability to send email to Internet addresses. Only laggard Delphi (with fewer than 100,000 subscribers), however, offered full Internet access (15). Over the next few years, though, the value of access to the Internet – which continued to grow exponentially – rapidly outstripped that of accessing the services’ native forums, games, shopping and other content. 1996 was the tipping point – by October of that year, 73% of those online reported having used the World Wide Web, compared to just 21% a year earlier (16). The new term “portal” was coined to describe the vestigial residue of content provided by AOL, Prodigy, and others, to which people subscribed mainly to get access to the Internet.

The Secret Sauce

We have seen, then, something of how the Internet grew so explosively, but not quite enough to explain why. Why, in particular, did it become so dominant in the face of so much prior art, so many other services that were striving for growth during the era of fragmentation that preceded it? Government subsidy helped, of course.
The funding of the backbone aside, when NSF chose to invest seriously in networking as an independent concern from its supercomputing program, it went all in. The principal leaders of the NSFNET program, Steve Wolff and Jane Caviness, decided that they were building not just a supercomputer network, but a new information infrastructure for American colleges and universities. To this end, they set up the Connections program, which offset part of the cost for universities to get onto the regional nets, on the condition that they provide widespread access to the network on their campuses. This accelerated the spread of the Internet both directly and indirectly. Indirectly, since many of those regional nets then spun off for-profit enterprises using the same subsidized infrastructure to sell Internet access to businesses.

But Minitel had subsidies, too. The most distinctive characteristic of the Internet, however, was its layered, decentralized architecture, and attendant flexibility. IP allowed networks of a totally different physical character to share the same addressing system, and TCP ensured that packets were delivered to their destination. And that was all. Keeping the core operations of the network simple allowed virtually any application to be built atop it. Most importantly, any user could contribute new functionality, as long as they could get others to run their software. For example, file transfer (FTP) was among the most common uses of the early Internet, but it was very hard to find servers that offered files of interest for download except by word-of-mouth. So enterprising users built a variety of protocols and tools to catalog and index the net’s resources, such as Archie (which indexed FTP servers), Gopher, and Veronica. The OSI stack also had this flexibility, in theory, and the official imprimatur of international organizations and telecommunications giants as the anointed internetworking standard. But possession is nine-tenths of the law, and TCP/IP held the field, with the decisive advantage of running code on thousands, and then millions, of machines.

The devolution of control over the application layer to the edges of the network had another important implication. It meant that large organizations, used to controlling their own bailiwick, could be comfortable there. Businesses could set up their own mail servers and send and receive email without all the content of those emails sitting on someone else’s computer. They could establish their own domain names, and set up their own websites, accessible to everyone on the net, but still entirely within their own control.

The World Wide Web – ah – that was the most striking example, of course, of the effects of layering and decentralized control. For decades, online systems, from the time-sharing services of the 1960s through to the likes of CompuServe and Minitel, had revolved around a handful of core communications services – email, forums, and real-time chat. But the Web was something new under the sun. The early years of the Web, when it consisted entirely of bespoke, handcrafted pages, were nothing like its current incarnation. Yet bouncing around from link to link was already strangely addictive – and it provided a phenomenally cheap advertising and customer support medium for businesses. None of the architects of the Internet had planned for the Web. It was the brainchild of Tim Berners-Lee, a British engineer at the European Organization for Nuclear Research (CERN), who created it in 1990 to help disseminate information among the researchers at the lab.
Yet it could easily rest atop TCP/IP, and re-use the domain-name system, created for other purposes, for its now-ubiquitous URLs. Anyone with access to the Internet could put up a site, and by the mid-1990s it seemed everyone had – city governments, local newspapers, small businesses, and hobbyists of every stripe.

Privatization

In this telling of the story of the Internet’s growth, I have elided some important events, and perhaps left you with some pressing questions. Notably, how did businesses and consumers get access to an Internet centered on NSFNET in the first place – to a network funded by the U.S. government, and ostensibly intended to serve the academic research community? To answer this, the next installment will revisit some important events which I have quietly passed over, events which gradually but inexorably transformed a public, academic Internet into a private, commercial one.

Further Reading

Janet Abbate, Inventing the Internet (1999)
Karen D. Fraser, “NSFNET: A Partnership for High-Speed Networking, Final Report” (1996)
John S. Quarterman, The Matrix (1990)
Peter H. Salus, Casting the Net (1995)

Footnotes

Note: The latest version of the WordPress editor appears to have broken markdown-based footnotes, so these are manually added, without links. My apologies for the inconvenience.

1. Abbate, Inventing the Internet, 143.
2. The next time DARPA would initiate a pivotal computing project was with the Grand Challenges for autonomous vehicles of 2004-2005. The most famous project in between, the billion-dollar AI-based Strategic Computing Initiative of the 1980s, produced a few useful applications for the military, but no core advances applicable to the civilian world.
3. “1980 National Science Foundation Authorization, Hearings Before the Subcommittee on Science, Researce [sic] and Technology of the Committee on Science and Technology,” 1979.
4. Smarr, “The Supercomputer Famine in American Universities” (1982).
5. A snapshot of what this first iteration of NSFNET was like can be found in David L. Mills, “The NSFNET Backbone Network” (1987).
6. The T1 connection standard, established by AT&T in the 1960s, was designed to carry twenty-four telephone calls, each digitally encoded at 64 kilobits-per-second.
7. MERIT originally stood for Michigan Educational Research Information Triad. The state of Michigan pitched in $5 million of its own to help its homegrown T1 network get off the ground.
8. Of course, the name and concept of the Internet predate the NSFNET. The Internet Protocol dates to 1974, and there were networks connected by IP prior to NSFNET. ARPANET and MILNET we have already mentioned. But I have not been able to find any reference to “the Internet” – a single, all-encompassing, world-spanning network of networks – prior to the advent of the three-tiered NSFNET.
9. See this data. Given this trend, how could Quarterman fail to see that the Internet was destined to dominate the world? If the recent epidemic has taught us anything, it is that exponential growth is extremely hard for the human mind to grasp, as it accords with nothing in our ordinary experience.
10. These figures come from Karen D. Fraser, “NSFNET: A Partnership for High-Speed Networking, Final Report” (1996).
11. See Salus, Casting the Net, 220-221.
12. Mai-Linh Ton, “Harvard, Connected: The Houses Got Internet,” The Harvard Crimson, May 22, 2017.
13. IAPS, “The Internet in 1990: Domain Registration, E-mail and Networks;” RFC 1462, “What is the Internet;” Resnick and Taylor, The Internet Business Guide, 220.
14. Resnick and Taylor, The Internet Business Guide, xxxi-xxxiv. Pages 300-302 lay out the pros and cons of the Internet and commercial online services for small businesses.
15. Statistics from Rosalind Resnick, Exploring the World of Online Services (1993).
16. Pew Research Center, “Online Use,” December 16, 1996.

Steam Revolution: The Turbine

Incandescent electric light did not immediately snuff out all of its rivals: the gas industry fought back with its own incandescent mantle (which used the heat of the gas to induce a glow in another material) and the arc lighting manufacturers with a glass-enclosed arc bulb.[1] Nonetheless, incandescent lighting grew at an astonishing pace: the U.S. alone had an estimated 250,000 such lights in use by 1885, three million by 1890 and 18 million by the turn of the century.[2] Edison’s electric light company expanded rapidly across the U.S. and into Europe, and its success encouraged the creation of many competitors. An organizational division gradually emerged between manufacturing companies that built equipment and supply companies that used it to generate and deliver power to customers. A few large competitors came to dominate the former industry: Westinghouse Electric and General Electric (formed from the merger of Edison’s company with Thomson-Houston) in the U.S., and the Allgemeine Elektricitäts-Gesellschaft (AEG) and Siemens in Germany. In a sign of its gradual relative decline, Britain produced only a few smaller firms, such as Charles Parsons’ C. A. Parsons and Company—of whom more later.

In accordance with Edison’s early imaginings, manufacturers and suppliers expanded beyond lighting to general-purpose electrical power, especially electric motors and electric traction (trains, subways, and street cars). These new fields opened up new markets for users: electric motors, for example, enabled small-scale manufacturers who lacked the capital for a steam engine or water wheel to consider mechanization, while releasing large-scale factories from the design constraints of mechanical power transmission. They also provided electrical supply companies with a daytime user base to balance the nighttime lighting load.

The demands of this growing electric power industry pushed steam engine design to its limits. Dynamos typically rotated hundreds of times a minute, several times the speed of a typical steam engine drive shaft. Engineers overcame this with belt systems, but these gave up energy to friction. Faster engines that could drive a dynamo directly required new high-speed valve control machinery, new cooling and lubrication systems to withstand the additional friction, and higher steam pressures more typical of marine engines than factories. That, in turn, required new boiler designs like the Babcock and Wilcox, which could operate safely at pressures well over 100 psi.[3]

A high-speed steam engine (made by the British firm Willans) directly driving a dynamo (the silver cylinder at left). From W. Norris and Ben. H. Morgan, High Speed Steam Engines, 2nd edition (London: P.S. King & Son, 1902), 13.

But the requirement that ultimately did in the steam engine was not for speed, but for size. As the electric supply companies evolved into large-scale utilities, providing power and light to whole urban centers and then beyond, they demanded more and more output from their power houses. Even Edison’s Pearl Street station, a tiny installation when looking back from the perspective of the turn of the century, required multiple engines to supply it. By 1903, the Westminster Electric Supply Corporation, which supplied only a part of London’s power, required forty-nine Willans engines in three stations to provide about 9 megawatts of power (an average of about 250 horsepower per engine). But demand continued to grow, and engines grew in response.
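As a quick check on that average (a back-of-the-envelope conversion, taking one horsepower as roughly 746 watts):

$$ \frac{9\,000\,000\ \text{W}}{746\ \text{W/hp}} \approx 12\,000\ \text{hp}, \qquad \frac{12\,000\ \text{hp}}{49\ \text{engines}} \approx 245\ \text{hp per engine} $$

The same figure puts the next paragraph in perspective: a single one of the giants described below delivered about as much power as the entire forty-nine-engine Westminster fleet.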
Perhaps the largest steam engines ever built were the 12,000 horsepower giants designed by Edwin Reynolds and installed in 1901 for the Manhattan Elevated Railway Company and in 1904 for the Interborough Rapid Transit (IRT) subway company. Each of these engines actually consisted of two compound engines grafted together, each with its own high- and low-pressure cylinder, set at right angles to give eight separate impulses per rotation to the spinning alternator (an alternating-current dynamo). The combined unit, engine and alternator, weighed 720 tons. But the elevated railway required eight of these monsters, and the IRT expected to need eleven to meet its power needs. The IRT’s power house, with a Renaissance Revival façade designed by famed architect Stanford White, filled a city block near the Hudson River (where it still stands today).[4]

The inside of the IRT power house, with five engines installed. Each engine consists of two towers, with a disc-shaped dynamo between them. From Scientific American, October 29th, 1904.

How much farther the reciprocating steam engine might have been coaxed to grow is hard to say with certainty, because even as the IRT powerhouse was going up in Manhattan, it was being overtaken by a new power technology based on whirling rotors instead of cycling pistons: the steam turbine. This great advancement in steam power borrowed from developments that had been brewing for decades in its most long-standing rival, water power.

Niagara

The signature electrical project of the turn of the twentieth century was the Niagara Falls Power Company. The immense scale of its works, its ambitions to distribute power over dozens of miles, its variety of prospective customers, and its adoption of alternating current: all signaled that the era of local, Pearl Street-style direct-current electric light plants was drawing to a close. The tremendous power latent in Niagara’s roaring cataract as it dropped from the level of Lake Erie to that of Lake Ontario was obvious to any observer—engineers estimated its potential horsepower in the millions—the problem was how to capture it, and where to direct it. By the late nineteenth century, several mills had moved to draw off some of its power locally. But for Niagara to power thousands of factories in that fashion, each would have had to dig its own canals, tunnels and wheel pits to draw off the small fraction of the waterfall that it required. New York State law, moreover, forbade development in the immediate vicinity of the falls to protect its scenic beauty. The solution ultimately decided on was to supply power to users from a small number of large-scale power plants, and the largest nearby pool of potential users lay in Buffalo, about twenty miles away.[5]

The Niagara project originated in the 1886 designs of New York State engineer Thomas Evershed for a canal and tunnel lined with hundreds of wheel pits to supply power to an equal number of local factories. But the plan took a different direction in 1889 after securing the backing of a group of New York financiers, headed once again by J.P. Morgan. The Morgan group consulted a wide variety of experts in North America and Europe before settling on an electric power system as the best alternative, despite the unproven nature of long-distance electric power transmission.
This proved a good bet: by 1893, Westinghouse had proved in California that it could deliver high-voltage alternating current over dozens of miles, convincing the Niagara company to adopt the same model.[6]

Cover of the July 22, 1899 issue of Scientific American with multiple views of the first Niagara Falls Power Company power house and its five-thousand-horsepower turbine-driven generators.

By 1904, the company had completed canals, vertical shafts for the fall of water, two powerhouses with a total capacity of 110,000 horsepower, and a mile-long discharge tunnel. They supplied power to local industrial plants, the city of Buffalo, and a wide swath of New York State and Ontario.[7] The most important feature of the power plant for our story, however, was the Westinghouse generators driven by water turbines, each with a capacity of 5,000 horsepower. As Terry Reynolds, a historian of the waterwheel, put it, this was “more than ten times [the capacity] of the most powerful vertical wheel ever built.”[8] Water turbines had made possible the exploitation of water power on a previously inconceivable scale; appropriately so, for they originated from a hunger on the European continent for a power that could match British steam.

Water Turbines

The exact point at which a water wheel becomes a turbine is somewhat arbitrary; a turbine is simply a kind of water wheel that has reached a degree of efficiency and power that earlier designs could not approach. But the distinction most often drawn is in terms of relative motion: the water in a traditional wheel pushes the vane along with the same speed and direction as its own flow (like a person pushing a box along the floor). A turbine, on the other hand, creates “motion of the water relative to the buckets or floats of the wheel” in order to extract additional energy: that is to say, it uses the kinetic energy of the water as well as its weight or pressure. That can occur through either impulse (pressing water against the turning vanes) or reaction (shooting water out from them to cause them to turn), but very often includes a combination of both.[9]

The exact origins of the horizontal water wheel are unknown, but such wheels had been used in Europe since at least the late Middle Ages. They offered by far the simplest way to drive a millstone, since the stone could be attached directly to the wheel without any gearing, and they remained in wide use in poorer regions of the continent well into the modern period. For centuries, the manufacturers and engineers of Western Europe focused their attention on the more powerful and efficient vertical water wheel, and this type constitutes most of our written record of water technology. Going back to the Renaissance, however, descriptions and drawings can be found of horizontal wheels with curved vanes intended to capture more of the flow of water, and it was the application of rigorous engineering to this general idea that led to the modern turbine. The turbine was in this sense the revenge of the horizontal water wheel, transforming the most low-tech type of water wheel into the most sophisticated. All of the early development of the water turbine occurred in France, which could draw on a deep well of hydraulic theory but could not so easily access coal and iron to make steam as could its British neighbor.
Bernard Forest de Belidor, an eighteenth-century French engineer, recorded in his 1737 treatise on hydraulic engineering the existence of some especially ingenious horizontal wheels, used to grind flour at the Bazacle on the Garonne. They had curved blades fitted inside a surrounding barrel and angled like the blades of a windmill, such that “the water that pushes it works it with the force of its weight composed with the circular motion given to it by the barrel…”[10] Nothing much came of this observation for another century, but Belidor had identified what we could call a proto-turbine, where water not only pushed on the vanes but also glided down through them like the breeze on the arms of a windmill, capturing more of its energy.

The horizontal mill wheels observed on the Garonne by Belidor. From Belidor, Architecture hydraulique vol. 1, part 2, Plan 5.

In the meantime, theorists came to an important insight. Jean-Charles de Borda, another French engineer (there will be a lot of them in this part of the story), was only a small child in a spa town just north of the Pyrenees when Belidor was writing about water wheels. He studied mathematics and wrote mathematical treatises, became an engineer for the Army and then the Navy, undertook several scientific voyages, fought in the American Revolutionary War, and headed the commission that established the standard length of the meter. In the midst of all this he found some time in 1767 to write up a study on hydraulics for the French Academy of Sciences, in which he articulated the principle that, to extract the most power from a water wheel, the water should enter the machine without shock and leave it without velocity. Lazare Carnot, father of Sadi, restated this principle some fifteen years later, in a treatise that reached a wider audience than de Borda’s paper.[11] Though it is obviously impossible for the water to literally leave the wheel without velocity (for after all, without velocity it would never leave), it was through striving for this imaginary ideal that engineers developed the modern, highly efficient water turbine.

First came Jean-Victor Poncelet (from now on, if I mention someone, just assume they are French), another military engineer, who had accompanied Napoleon’s Grande Armée into Russia in 1812, where he ended up a prisoner of war for two years. After returning home to Metz he became the professor of mechanics at the local military engineering academy. While there he turned his mind to vertical water wheels, and a long-standing tradeoff in their design: undershot wheels, in which the water passed under the wheel, were cheaper to construct but not very efficient, while overshot wheels, where the water came to the top of the wheel and fell on its vanes or buckets, had the opposite attributes. Poncelet combined the virtues of both by applying the principle of de Borda and Carnot. The traditional undershot waterwheel had a maximum theoretical efficiency of 50%, because the ideal wheel turned at half the speed of the water current, allowing the water to leave the vanes of the wheel with half of its initial velocity (a simple momentum argument, sketched below, shows where this limit comes from). The appearance of cheap sheet iron had made it possible to substitute metal vanes for wooden ones, and iron vanes could easily be bent in a curve.
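Here is that sketch, using the standard idealized model of a flat-vaned undershot wheel found in textbook treatments: water of density ρ flows at speed v with volumetric rate Q against vanes moving at speed u, and is slowed to the vane speed on impact (the symbols are my own shorthand, not Poncelet’s).

$$ P(u) = \underbrace{\rho Q\,(v-u)}_{\text{thrust on the vanes}}\; u, \qquad \frac{dP}{du}=0 \;\Rightarrow\; u=\frac{v}{2}, \qquad P_{\max}=\frac{\rho Q v^{2}}{4}=\frac{1}{2}\left(\frac{1}{2}\rho Q v^{2}\right) $$

At best, then, half of the stream’s kinetic energy is captured: the water leaves at v/2, carrying away a quarter of the energy, and the rest is dissipated in the impact itself. Poncelet’s curved vanes attacked exactly those two losses, easing the water onto the vane without shock and letting it drop away with as little velocity as possible, per Borda and Carnot.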
By curving the vanes of the wheel just so towards the incoming water, Poncelet found that it would run up the cupped vane, expending all of its velocity, and then fall out of the bottom of the wheel.[12] He published his idea in 1825 to immediate acclaim: “no other paper on water-wheels… had proved so interesting and commanded such attention.”[13]

The Poncelet water wheel.

Poncelet’s advance hinted at the possibility of a new water-powered industrial future for France. His wheel design soon became a common sight in a France eager to develop its industrial might, and richer in falling water than in reserves of coal. It inspired the Société d’Encouragement pour l’Industrie Nationale, an organization founded in 1801 to push France to be more industrially competitive with Britain, to offer a prize of 6,000 francs to anyone who “would apply on a large scale, in a satisfactory manner, in factories and manufacturing works, the water turbines or wheels with curved blades of Belidor.” The revenge of the horizontal wheel was at hand.[14]

Benoît Fourneyron, an engineer at a water-powered ironworks in the hilly country near the Swiss border, claimed the prize in 1833. Even before the announcement of the prize, he had, in fact, already undertaken a deep study of hydraulic theory, reading up on Borda and his successors. He had devised and tested an improved “Belidor-style” wheel, applying the curved metal vanes of Poncelet to a horizontal wheel situated in a barrel-shaped pit, which we can fairly call the first modern water turbine. He went on to install over a hundred of these turbines around Europe, but his signal achievement was the 1837 spinning mill amid the hills of the Black Forest in Baden, which took in a head of water falling over 350 feet and generated sixty horsepower at 80% efficiency. The spinning rotor of the turbine responsible for this power was a mere foot across and weighed only forty pounds. A traditional wheel could neither take on such a head of water nor derive so much power, so efficiently, from such a compact machine.[15]

The Fourneyron turbine. The inflowing water from the reservoir A drives the rotor before emptying from its radial exterior into the basin D. From Eugène Armengaud, Traité théorique et pratique des moteurs hydrauliques et à vapeur, nouvelle édition (Paris: Armengaud, 1858), 279.

Steam Turbines

The water turbine was thus a far smaller and more efficient machine than its ancestor, the traditional water wheel. Its basic form had existed since at least the time of Belidor, but to achieve an efficient, high-speed design like Fourneyron’s required a body of engineers deeply educated in mathematical physics and a surrounding material culture capable of realizing those mathematical ideas in precisely machined metal. It also required a social context in which there existed demand for more power than traditional sources could ever provide: in this case, a France racing to catch up with rapidly industrializing Britain. The same relation held between the steam turbine and the reciprocating steam engine: the former could be much more compact and efficient, but put much higher demands on the precision of its design and construction. It was no great leap to imagine that steam could drive a turbine in the same way that water did: through the reaction against or impulse from moving steam.
One could even look to some centuries-old antecedents for inspiration: the steam-jet reaction propulsion of Heron of Alexandria’s whirling “engine” (mentioned much earlier in this history), or a woodcut in Giovanni Branca’s seventeenth-century Le Machine, which showed the impulse of a steam jet driving a horizontal paddlewheel. But it is one thing to make a demonstration or draw a picture, and another to make a useful power source.

A steam turbine presented a far harder problem than a water turbine, because steam was so much less dense than liquid water. Simply transplanting steam into a water turbine design would be like blowing on a pinwheel: it would spin, but generate little power.[16] The difficulty was clear even in the eighteenth century: when confronted in 1784 with reports of a potential rival steam engine driven by the reaction created by a jet of steam, James Watt calculated that, given the low relative density of steam, the jet would have to shoot from the ends of the rotor at 1,300 feet per second, and thus “without god makes it possible for things to move 1000 feet [per second] it can not do much harm.” As historian of steam Henry Dickinson epitomized Watt’s argument, “[t]he analysis of the problem is masterly and the conclusion irrefutable.”[17] Even when later advances in metalworking made the speeds required appear more feasible, one could get nowhere with traditional “cut and try” techniques with ordinary physical tools; the problem demanded careful analysis with the precision tools offered by mathematics and physics.[18]

Dozens of inventors took a crack at the problem, nonetheless, including another famed steam engine designer, Richard Trevithick. None found success. Though Fourneyron had built an effective water turbine in the 1830s, the first practical steam turbines did not appear until the 1880s: a time when metallurgy and machine tools had achieved new heights (with mass-produced steels of various grades and qualities available) and a time when even the steam engine was beginning to struggle to sate modern society’s demand for power. They first appeared in two places more or less at once: Sweden and Britain.

Gustaf de Laval burst from his middle-class background in the Swedish provinces into the engineering school at Uppsala with few friends but many grandiose dreams: he was the protagonist in his own heroic tale of Swedish national greatness, the engineering genius who would propel Sweden into the first rank of great nations. He lived simultaneously in grand style and constant penury, borrowing against his visions of an ever more prosperous tomorrow to live beyond his means of today. In the 1870s, while working a day job at a glassworks, he developed two inventions based on centrifugal force generated by a rapidly spinning wheel. The first, a bottle-making machine, flopped, but the second, a cream separator, became the basis for a successful business that let him leave his day job behind.[19] Then, in 1882 he patented a turbine powered by a jet of steam directed at a spinning wheel. De Laval claimed that his inspiration came from seeing a nozzle used for sandblasting at the glassworks come loose and whip around, unleashing its powerful jet into the air; it is also not hard to see some continuity in his interest in high-speed rotation.
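The scale of the problem Watt identified, and that de Laval now confronted, can be put in rough numbers with the same rule of thumb we met at the water wheel: an impulse wheel captures the most energy when its blades move at about half the speed of the jet driving them. The one-foot wheel diameter below is an assumed figure, chosen only to make the arithmetic concrete, not de Laval’s actual dimensions.

\[
u \approx \frac{v}{2} = \frac{1{,}300\ \text{ft/s}}{2} = 650\ \text{ft/s}, \qquad
N = \frac{u}{\pi D} \approx \frac{650\ \text{ft/s}}{\pi \times 1\ \text{ft}} \approx 200\ \text{rev/s} \approx 12{,}000\ \text{rpm}.
\]

Run the wheel much slower than this and the steam blows past with most of its energy unspent; run it that fast and, as we are about to see, the bearings and gearing begin to suffer.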
De Laval used his whirling turbines to power his whirling cream separators, and then acquired an electric light company, giving himself another internal customer for turbine power.[20] Though superficially similar to Branca’s old illustration, de Laval’s machine was far more sophisticated. As Watt had calculated a century earlier, the low density of steam demanded high rotational speeds (otherwise the steam would escape from the machine having given up very little energy to the wheel) and thus a very high-velocity jet: de Laval’s steel rotor spun at tens of thousands of rotations per minute in an enclosed housing. A few years later he invented an hourglass-shaped nozzle to propel the steam jet to supersonic speeds, a shape that is still used in rocket engines for the same purpose today. Despite the more advanced metallurgy of the late nineteenth century, however, de Laval still ran up against its limits: he could not run his turbine at the most efficient possible speed without burning out his bearings and reduction gear, and so his turbines didn’t fully capture their potential efficiency advantage over a reciprocating engine.[21]

Cutaway view of a de Laval turbine, from William Ripper, Heat Engines (London: Longmans, Green, 1909), 234.

Meanwhile, the British engineer Charles Parsons came up with a rather different approach to extracting energy from the steam, one which didn’t require such rapid rotation. Whereas de Laval strove up from the middle class, Parsons came from the heights of the aristocracy. Son of the third Earl of Rosse, he grew up in a castle in Ireland, with grounds that included a lake and a sixty-foot-long telescope constructed to his father’s specifications. He studied at home under Robert Ball, who later became the Astronomer Royal of Ireland, then went on to graduate from Cambridge University in 1877 as eleventh wrangler—the eleventh best in his class on the mathematics exams.[22]

Despite his noble birth, Parsons appeared determined to find his own way in the world. He apprenticed himself at Elswick Works, a manufacturer of heavy construction and mining equipment and military ordnance in Newcastle upon Tyne. He spent a couple of years with a partner in Leeds trying to develop rocket-powered torpedoes before taking up a junior partnership at another heavy engineering concern, Clarke Chapman in Gateshead (back on the River Tyne).[23] His new bosses directed Parsons away from torpedoes toward the rapidly growing field of electric lighting. He turned to the turbine concept in search of a prime mover that could match the high rotational speeds of a dynamo.

Parsons came up with a different solution to the density problem than de Laval’s. Rather than try to extract as much power as possible from the steam jet with one extremely fast rotor, he would send the steam through a series of rotors arranged one after another along a horizontal shaft. They would then not have to spin so quickly (though Parsons’ first prototype still ran at 18,000 rotations per minute), and each could extract a bit of energy from the steam as it flowed through the turbine, dropping in pressure. This design extended the two or three stages of pressure reduction in a multi-cylinder steam engine into a continuous flow across a dozen or more rotors. Parsons’ approach created some new challenges (keeping the long, rapidly spinning shaft from bowing too far in one direction or the other, for example) but ultimately most future steam turbines would copy this elongated form.[24]

Parsons’ original prototype turbine and dynamo, with the top removed.
Steam entered at the center and exited from both ends, which eliminated the need to deal with “end thrust,” a force pushing on one end of the turbine. From Dickinson, A Short History of the Steam Engine, plate vii.

The Rise of Turbines

Parsons soon founded his own firm to exploit the turbine. Because it had far less inherent friction than the piston of a traditional engine, and because none of its parts had to touch both hot and cold steam, a turbine had the potential to be much more efficient, but it didn’t start out that way. So his early customers were those who cared mainly about the smaller size of turbines: shipbuilders looking to put in electric lighting without adding too much weight or using too much space in the hull. In other applications reciprocating engines still won out.[25]

Further refinements, however, allowed turbines to start to supplant reciprocating engines in electrical systems more generally: more efficient blade designs, the addition of a regulator to ensure that steam entered the turbine only at full pressure, the superheating of steam at one end and the condensing of it at the other to maximize the fall in temperature across the entire engine. Turbo-generators—electrical dynamos driven by turbines—began to find buyers in the 1890s. By 1896, Parsons could boast that a two-hundred-horsepower turbine his firm constructed for a Scottish electric power station ran at 98% of its ideal efficiency, and Westinghouse had begun to develop turbines under license in the United States.[26]

Cutaway view of a fully developed Parsons-style turbine. Steam enters at left (A) and passes through the rotors to the right. From Ripper, Heat Engines, 241.

At the same time, Parsons was pushing for the construction of ships with turbine powerplants, starting with the prototype Turbinia, which drove nine propellers with three turbines and achieved a top speed of nearly forty miles per hour. Suitably impressed, the British Admiralty ordered turbine-powered destroyers (starting with Viper in 1897), but the real turning point came in 1906 with the completion of the first turbine-driven battleship (Dreadnought) and transatlantic steamers (Lusitania and Mauretania), all supplied with Parsons powerplants.[27]

HMS Dreadnought was remarkable not only for her armament and armor, but also for her speed of 21 knots (24 miles per hour), made possible by Parsons turbines.

The very first steam turbines had demonstrated their advantage over traditional engines in size; a further decade-and-a-half of development allowed them to realize their potential advantages in efficiency; and now these massive vessels made clear their third advantage: the ability to scale to enormous power outputs. As we saw, the monster steam engines at the subway power house in New York could generate 12,000 horsepower, but the turbines aboard Lusitania churned out half again as much, and that was far from the limit of what was possible. In 1915, the Interborough Rapid Transit Company, facing ever-growing demand for power with the addition of a third (express) track to its elevated lines, installed three 40,000 horsepower turbines for electrical generation, rendering Reynolds’ monster engines of a decade earlier obsolete. By the 1920s, 40,000 horsepower turbines were being built in the U.S., burning half as much coal per watt of power generated as the most efficient reciprocating engines.[28]

Parsons lived to see the triumph of his creation.
He spent his last years cruising the world, and preferred to spend the time between stops talking shop with the crew and engineers rather than lounging with other wealthy passengers. He died in 1931, at age 76, in the Caribbean, aboard the (turbine-powered, of course) Duchess of Richmond.[29]

Meanwhile, power usage shifted towards electricity, made widely available not by traditional steam engines but by the growth of steam and water turbines and the development of long-distance power transmission. Niagara was just a foretaste of the large-scale water power projects made feasible by the newly found capacity to transmit that power wherever it was needed: the Hoover Dam and Tennessee Valley Authority in the U.S., the Rhine power dams in Europe, and later projects intended to spur the modernization of poorer countries, from the Aswan Dam on the Nile to the Gezhouba Dam on the Yangtze. In regions with easy access to coal, however, steam turbines provided the majority of all electric power until far into the twentieth century.

Cheap electricity transformed industry after industry. By 1920, manufacturing consumed half of the electricity produced in the U.S., mainly through dedicated electric motors at each tool, eliminating the need for the construction and maintenance of a large, heavy steam engine and for bulky and friction-heavy shafts and belts to transmit power through the factory. The capital barriers to starting a new manufacturing plant thus dropped substantially along with the recurring cost of paying for power, and the way was opened to completely rethink how manufacturing plants were built and operated. Factories became cleaner, safer, and more pleasant to work in, and the ability to organize machines according to the most efficient work process rather than the mechanical constraints of power delivery produced huge dividends in productivity.[30]

A typical pre-electricity factory power distribution system, based on line shafts and belts (in this case driving power looms). All the machines in the factory have to be organized around the driveshafts. [Z22, CC BY-SA 3.0]

The 1910 Ford Highland Park plant represents a hybrid stage on the way to full electrification of every machine; the plant still had overhead line shafts (here for milling engine blocks), but each area was driven by a local electric motor, allowing for a much more flexible arrangement of machinery.

By that time, the heyday of the piston-driven steam engine was over. For large-scale installations, it could no longer compete with turbines (whether driven by water or by steam). At the same time, feisty new competitors, diesel and gasoline engines, were gnawing away at its share of the lower-horsepower market. The warning shot fired by the air engine had finally caught up to steam. It could not outrun thermodynamics, nor the incredibly energy-dense new fuel source that had come bubbling up out of the ground: rock oil, or petroleum.

ARPANET, Part 3: The Subnet

With ARPANET, Robert Taylor and Larry Roberts intended to connect many different research institutions, each hosting its own computer, for whose hardware and software it was wholly responsible. The hardware and software of the network itself, however, lay in a nebulous middle realm, belonging to no particular site. Over the course of the years 1967-1968, Roberts, head of the networking project for ARPA’s Information Processing Techniques Office (IPTO), had to determine who should build and operate the network, and where the boundary of responsibility should lie between the network and the host institutions.

The Skeptics

The problem of how to structure the network was at least as much political as technical. The principal investigators at the ARPA research sites did not, as a body, relish the idea of ARPANET. Some evinced complete indifference to ever joining the network; few were enthusiastic. Each site would have to put in a large amount of effort in order to let others share its very expensive, very rare computer. Such sharing had manifest disadvantages (loss of a precious resource), while its potential advantages remained uncertain and obscure. The same skepticism about resource sharing had torpedoed the UCLA networking project several years earlier. However, in this case, ARPA had substantially more leverage, since it had directly paid for all those precious computing resources, and continued to hold the purse strings of the associated research programs. Though no direct threats were ever made, no “or else” issued, the situation was clear enough – one way or another ARPA would build its network, to connect what were, in practice, still its machines.

Matters came to a head at a meeting of the principal investigators in Ann Arbor, Michigan, in the spring of 1967. Roberts laid out his plan for a network to connect the various host computers at each site. Each of the investigators, he said, would fit their local host with custom networking software, which it would use to dial up other hosts over the telephone network (this was before Roberts had learned about packet-switching). Dissent and angst ensued. Among the least receptive were the major sites that already had large IPTO-funded projects, MIT chief among them. Flush with funding for the Project MAC time-sharing system and artificial intelligence lab, MIT’s researchers saw little advantage to sharing their hard-earned resources with rinky-dink bit players out west. Regardless of stature, moreover, the sites shared certain other reservations. Each had its own unique hardware and software, and it was difficult to see how they could even establish a simple connection with one another, much less engage in real collaboration. Just writing and running the networking software for their local machine would also eat up a significant amount of time and computer power.

It was ironic yet surprisingly fitting that the solution adopted by Roberts to these social and technical problems came from Wes Clark, a man who regarded both time-sharing and networking with distaste. Clark, the quixotic champion of personal computers for each individual, had no interest in sharing computer resources with anyone, and kept his own campus, Washington University in St. Louis, well away from ARPANET for years to come. So it is perhaps not surprising that he came up with a network design that would not add any significant new drain on each site’s computing resources, nor require those sites to spend a lot of effort on custom software.
Clark proposed setting up a minicomputer at each site which would handle all the actual networking functions. Each host would have to understand only how to connect to its local helpmate (later dubbed an Interface Message Processor, or IMP), which would then route the message onward so that it reached the corresponding IMP at the destination. In effect, he proposed that ARPA give an additional free computer to each site, which would absorb most of the resource costs of the network. At a time when computers were still scarce and very dear, the proposal was an audacious one. Yet with the recent advent of minicomputers that cost just tens of thousands of dollars rather than hundreds of thousands, it fell just this side of feasible.1

While alleviating some of the concerns of the principal investigators about a network tax on their computer power, the IMP approach also happened to solve another political problem for ARPA. Unlike any other ARPA project to date, the network was not confined to a single research institution where it could be overseen by a single investigator. Nor was ARPA itself equipped to directly build and manage a large-scale technical project. It would have to hire a third party to do the job. The presence of the IMPs would provide a clear delineation of responsibility between the externally-managed network and the locally-managed host computers. The contractor would control the IMPs and everything between them, while the host sites would each remain fully (and solely) responsible for the hardware and software on their own computer.

The IMP

Next, Roberts had to choose that contractor. The old-fashioned Licklider approach of soliciting a proposal directly from a favored researcher wouldn’t do in this case. The project would have to be put up for public bid like any other government contract. It took until July of 1968 for Roberts to prepare the final details of the request for bids. About a half year had elapsed since the final major technical piece of the puzzle fell into place, with the revelation of packet-switching at the Gatlinburg conference. Two of the largest computer manufacturers, Control Data Corporation (CDC) and International Business Machines (IBM), immediately bowed out, since they had no suitable low-cost minicomputer to serve as the IMP.

Honeywell DDP-516

Among the major remaining contenders, most chose Honeywell’s new DDP-516 computer, though some plumped instead for the Digital PDP-8. The Honeywell was especially attractive because it featured an input/output interface explicitly designed to interact with real-time systems, for applications like controlling industrial machinery. Communications, of course, required similar real-time precision – if an incoming message were missed because the computer was busy doing other work, there was no second chance to capture it. By the end of the year, after strongly considering Raytheon, Roberts offered the job to the growing Cambridge firm of Bolt, Beranek and Newman.

The family tree of interactive computing was, at this date, still extraordinarily ingrown, and in choosing BBN Roberts might reasonably have been accused of a kind of nepotism. J.C.R. Licklider had brought interactive computing to BBN before leaving to serve as the first director of IPTO, seed his intergalactic network, and mentor men like Roberts. Without Lick’s influence, ARPA and BBN would have been neither interested in nor capable of handling the ARPANET project.
Moreover, the core of the team assembled by BBN to build the IMP came directly or indirectly from Lincoln Labs: Frank Heart (the team’s leader), Dave Walden, Will Crowther, and Severo Ornstein. Lincoln, of course, is where Roberts himself did his graduate work, and where a chance collision with Wes Clark had first sparked Lick’s excitement about interactive computing. But cozy as the arrangement may have seemed, in truth the BBN team was as finely tuned for real-time performance as the Honeywell 516. At Lincoln, they had worked on computers that interfaced with radar systems, another application where data would not wait for the computer to be ready. Heart, for example, had worked on the Whirlwind computer as a student as far back as 1950, joined the SAGE project, and spent a total of fifteen years at Lincoln Lab. Ornstein had worked on the SAGE cross-telling protocol, for handing off radar track records from one computer to another, and later on Wes Clark’s LINC, a computer designed to support scientists directly in the laboratory, with live data. Crowther, now best known as the author of Colossal Cave Adventure, spent ten years building real-time systems at Lincoln, including the Lincoln Experimental Terminal, a mobile satellite communications station with a small computer to point the antenna and process the incoming signals.2

The IMP team at BBN. Frank Heart is the older man at center. Ornstein is on the far right, next to Crowther.

The IMPs were responsible for understanding and managing the routing and delivery of messages from host to host. The hosts could deliver up to about 8,000 bits at a time to their local IMP, along with a destination address. The IMP then sliced this into smaller packets which were routed independently to the destination IMP, across 50 kilobit-per-second lines leased from AT&T. The receiving IMP reassembled the pieces and delivered the complete message to its host. Each IMP kept a table that tracked which of its neighbors offered the fastest route to each possible destination. This was updated dynamically based on information received from those neighbors, including whether they appeared to be unavailable (in which case the delay in that direction was effectively infinite). To meet the speed and throughput requirements specified by Roberts for all of this processing, Heart’s team crafted little poems in code. The entire operating program for the IMP required only about 12,000 bytes; the portion that maintained the routing tables only 300.3

The team also took several precautions to address the fact that it would be infeasible to have maintenance staff on site with every IMP. First, they equipped each computer with remote monitoring and control facilities. In addition to an automatic restart function that would kick in after power failure, the IMPs were programmed to be able to restart their neighbors by sending them a fresh instance of their operating software. To help with debugging and analysis, an IMP could be instructed to start taking snapshots of its state at regular intervals. The IMPs would also honor a special ‘trace’ bit on each packet, which triggered additional, more detailed logs. With these capabilities, many kinds of problems could be addressed from the BBN office, which acted as a central command center from which the status of the whole network could be overseen. Second, they requisitioned from Honeywell the military-grade version of the 516 computer, equipped with a thick casing to protect it from vibration and other environmental hazards.
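To make the adaptive routing described above a little more concrete, here is a minimal sketch of the kind of neighbor-driven table update each IMP performed. It is an illustration only: the function and variable names are invented for the example, the delay figures are arbitrary, and the real program was hand-tuned assembly for the Honeywell 516, not Python.

```python
# Illustrative sketch of a distance-vector style routing update (not BBN's actual code).
INFINITY = float("inf")  # a neighbor that appears to be down contributes infinite delay

def update_routes(delay_to_neighbor, neighbor_tables):
    """Recompute this IMP's routing table.

    delay_to_neighbor: {neighbor: measured delay on the direct line to that IMP}
    neighbor_tables:   {neighbor: {destination: that neighbor's own delay estimate}}
    Returns {destination: (best_next_hop, estimated_total_delay)}.
    """
    table = {}
    for neighbor, link_delay in delay_to_neighbor.items():
        for destination, remaining_delay in neighbor_tables.get(neighbor, {}).items():
            total = link_delay + remaining_delay
            if destination not in table or total < table[destination][1]:
                table[destination] = (neighbor, total)
    return table

# Example: the IMP at UCLA, directly connected to SRI and UCSB.
delay_to_neighbor = {"SRI": 1.0, "UCSB": 1.0}
neighbor_tables = {
    "SRI":  {"SRI": 0.0, "UCSB": 1.0, "UTAH": 1.0},
    "UCSB": {"UCSB": 0.0, "SRI": 1.0, "UTAH": INFINITY},  # UCSB's path toward Utah looks dead
}
print(update_routes(delay_to_neighbor, neighbor_tables))
# {'SRI': ('SRI', 1.0), 'UCSB': ('UCSB', 1.0), 'UTAH': ('SRI', 2.0)}
```

Each IMP repeated a computation of this general shape whenever fresh delay reports arrived from its neighbors, which is how a dead line quickly propagated through the network as an effectively infinite delay.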
BBN intended this rugged casing primarily as a “keep out” sign for curious graduate students, but nothing delineated the boundary between the hosts and the BBN-operated subnet as visibly as this armored shell. The first of these hardened cabinets, about the size of a refrigerator, arrived on site at the University of California, Los Angeles (UCLA) on August 30, 1969, just eight months after BBN received the contract.

The Hosts

Roberts decided to start the network with four hosts – in addition to UCLA, there would be an IMP just up the coast at the University of California, Santa Barbara (UCSB), another at Stanford Research Institute (SRI) in northern California, and the last at the University of Utah. All were scrappy West Coast institutions looking to establish themselves in academic computing. The close family ties also continued, as two of the involved principal investigators, Len Kleinrock at UCLA and Ivan Sutherland at the University of Utah, were also Roberts’ old office mates from Lincoln Lab.

Roberts also assigned two of the sites special functions within the network. Doug Engelbart of SRI had volunteered as far back as the 1967 principals meeting to set up a Network Information Center. Leveraging SRI’s sophisticated on-line information retrieval system, he would compile the telephone directory, so to speak, for ARPANET: collating information about all the resources available at the various host sites and making it available to everyone on the network. On the basis of Kleinrock’s expertise in analyzing network traffic, meanwhile, Roberts designated UCLA as the Network Measurement Center (NMC). For Kleinrock and UCLA, ARPANET was to serve not only as a practical tool but also as an observational experiment, from which data could be extracted and generalized to learn lessons that could be applied to improve the design of the network and its successors.

But more important to the development of ARPANET than either of these formal institutional designations was a more informal and diffuse community of graduate students called the Network Working Group (NWG). The subnet of IMPs allowed any host on the network to reliably deliver a message to any other; the task taken on by the Network Working Group was to devise a common language or set of languages that those hosts could use to communicate. They called these the “host protocols.” The word protocol, a borrowing from diplomatic language, was first applied to networks by Roberts and Tom Marill in 1965, to describe both the data format and the algorithmic steps that determine how two computers communicate with one another.

The NWG, under the loose, de facto leadership of Steve Crocker of UCLA, began meeting regularly in the spring of 1969, about six months in advance of the delivery of the first IMP. Crocker was born and raised in the Los Angeles area, and attended Van Nuys High School, where he was a contemporary of two of his later NWG collaborators, Vint Cerf and Jon Postel4. In order to record the outcome of some of the group’s early discussions, Crocker developed one of the keystones of the ARPANET (and future Internet) culture, the “Request for comments” (RFC). His RFC 1, published April 7, 1969 and distributed to the future ARPANET sites by postal mail, synthesized the NWG’s early discussions about how to design the host protocol software. In RFC 3, Crocker went on to define the (very loose) process for all future RFCs:

Notes are encouraged to be timely rather than polished.
Philosophical positions without examples or other specifics, specific suggestions or implementation techniques without introductory or background explication, and explicit questions without any attempted answers are all acceptable. The minimum length for a NWG note is one sentence. …we hope to promote the exchange and discussion of considerably less than authoritative ideas.

Like a “Request for quotation” (RFQ), the standard way of requesting bids for a government contract, an RFC invited responses, but unlike the RFQ, the RFC also invited dialogue. Within the distributed NWG community anyone could submit an RFC, and they could use the opportunity to elaborate on, question, or criticize a previous entry. Of course, as in any community, some opinions counted more than others, and in the early days the opinion of Crocker and his core group of collaborators counted for a great deal. In fact, by July 1971, Crocker had left UCLA (while still a graduate student) to take up a position as a Program Manager at IPTO. With crucial ARPA research grants in his hands, he wielded undoubted influence, intentionally or not.

Jon Postel, Steve Crocker, and Vint Cerf – schoolmates and NWG collaborators – in later years.

The NWG’s initial plan called for two protocols. Remote login (or Telnet) would allow one computer to act like a terminal attached to the operating system of another, extending the interactive reach of any ARPANET time-sharing system across thousands of miles to any user on the network. The file transfer protocol (FTP) would allow one computer to transfer a file, such as a useful program or data set, to or from the storage system of another. At Roberts’ urging, however, the NWG added a third basic protocol beneath those two, for establishing a basic link between two hosts. This common piece was known as the Network Control Program (NCP). The network now had three conceptual layers of abstraction – the packet subnet controlled by the IMPs at the bottom, the host-to-host connection provided by NCP in the middle, and application protocols (FTP and Telnet) at the top.

The Failure?

It took until August of 1971 for NCP to be fully defined and implemented across the network, which by then comprised fifteen sites. Telnet implementations followed shortly thereafter, with the first stable definition of FTP arriving a year behind, in the summer of 1972. If we consider the state of ARPANET in this time period, some three years after it was first brought on-line, it would have to be considered a failure when measured against the resource-sharing dream envisioned by Licklider and carried into practical action by his protégé, Robert Taylor.

To begin with, it was hard to even find out what resources existed on the network that one might borrow. The Network Information Center used a model of voluntary contribution – each site was expected to provide up-to-date information about its own data and programs. Although it would have collectively benefited the community for everyone to do so, each individual site had little incentive to advertise its resources and make them accessible, much less provide up-to-date documentation or consultation. Thus the NIC largely failed to serve as an effective network directory. Probably its most important function in those early years was to provide electronic hosting for the growing corpus of RFCs.

Even if Alice at UCLA knew about a useful resource at MIT, however, an even more serious obstacle intervened. Telnet would get Alice to the log-in screen at MIT but no further.
For Alice to actually access any program on the MIT host, she would have to make an off-line agreement with MIT to get an account on their computer, usually requiring her to fill out paperwork at both institutions and arrange for funding to pay MIT for the computer resources used. Finally, incompatibilities between hardware and system software at each site meant that there was often little value to file transfer, since you couldn’t execute programs from remote sites on your own computer.

Ironically, the most notable early successes in resource sharing were not in the domain of interactive time-sharing that ARPANET was built to support, but in large-scale, old-school, non-interactive data-processing. UCLA added their underutilized IBM 360/91 batch-processing machine to the network and provided consultation by telephone to support remote users, and thus managed to significantly supplement the income of the computer center. The ARPA-funded ILLIAC IV supercomputer at the University of Illinois and the Datacomputer at the Computer Corporation of America in Cambridge also found some remote clients on ARPANET.5

None of these applications, however, came close to fully utilizing the network. In the fall of 1971, with fifteen host computers online, the network in total carried about 45 million bits of traffic per site per day, an average of roughly 520 bits per second per site, on a network of AT&T leased lines with a capacity of 50,000 bits per second each.6 Moreover, much of this was test traffic generated by the Network Measurement Center at UCLA. The enthusiasm of a few early adopters aside (such as Steve Carr, who made daily use of the PDP-10 at the University of Utah from Palo Alto7), not much was happening on ARPANET.8 But ARPANET was soon saved from any possible accusations of stagnation by yet a third application protocol, a little something called email.

Further Reading

Janet Abbate, Inventing the Internet (1999)

Katie Hafner and Matthew Lyon, Where Wizards Stay Up Late: The Origins of the Internet (1996)

Internet Ascendant, Part 2: Going Private and Going Public

In the summer of 1986, Senator Al Gore, Jr., of Tennessee introduced an amendment to the bill authorizing the budget of the National Science Foundation (NSF). He called for the federal government to study the possibilities for “communications networks for supercomputers at universities and Federal research facilities.” To explain the purpose of this legislation, Gore called on a striking analogy:

One promising technology is the development of fiber optic systems for voice and data transmission. Eventually we will see a system of fiber optic systems being installed nationwide. America’s highways transport people and materials across the country. Federal freeways connect with state highways which connect in turn with county roads and city streets. To transport data and ideas, we will need a telecommunications highway connecting users coast to coast, state to state, city to city. The study required in this amendment will identify the problems and opportunities the nation will face in establishing that highway.1

In the following years, Gore and his allies would call for the creation of an “information superhighway”, or, more formally, a national information infrastructure (NII). As he intended, Gore’s analogy to the federal highway system summons to mind a central exchange that would bind together various local and regional networks, letting all American citizens communicate with one another. However, the analogy also misleads – Gore did not propose the creation of a federally-funded and maintained data network. He envisioned that the information superhighway, unlike its concrete and asphalt namesake, would come into being through the action of market forces, within a regulatory framework that would ensure competition, guarantee open, equal access to any service provider (what would later be known as “net neutrality”), and provide subsidies or other mechanisms to ensure universal service to the least fortunate members of society, preventing the emergence of a gap between the information rich and information poor.2

Over the following decade, Congress slowly developed a policy response to the growing importance of computer networks to the American research community, to education, and eventually to society as a whole. Congress’ slow march towards an NII policy, however, could not keep up with the rapidly growing NSFNET, overseen by the neighboring bureaucracy of the executive branch. Despite its reputation for sclerosis, bureaucracy was created exactly because of its capacity, unlike a legislature, to respond to events immediately, without deliberation. And so it happened that, between 1988 and 1993, the NSF crafted the policies that would determine how the Internet became private, and thus went public. It had to deal every year with novel demands and expectations from NSFNET’s users and peer networks. In response, it made decisions on the fly, decisions which rapidly outpaced Congressional plans for guiding the development of an information superhighway. These decisions rested largely in the hands of a single man – Stephen Wolff.

Acceptable Use

Wolff earned a Ph.D. in electrical engineering at Princeton in 1961 (where he would have been a rough contemporary of Bob Kahn), and began what might have been a comfortable academic career, with a post-doctoral stint at Imperial College, followed by several years teaching at Johns Hopkins. But then he shifted gears, and took a position at the Ballistic Research Laboratory in Aberdeen, Maryland.
He stayed there for most of the 1970s and early 1980s, researching communications and computing systems for the U.S. Army. He introduced Unix into the lab’s offices, and managed Aberdeen’s connection to the ARPANET.3 In 1986, the NSF recruited him to manage the NSF’s supercomputing backbone – he was a natural fit, given his experience connecting Army supercomputers to ARPANET. He became the principal architect of NSFNET’s evolution from that point until his departure in 1994, when he entered the private sector as a manager for Cisco Systems.

The original intended function of the net that Wolff was hired to manage had been to connect researchers across the U.S. to NSF-funded supercomputing centers. As we saw last time, however, once Wolff and the other network managers saw how much demand the initial backbone had engendered, they quickly developed a new vision of NSFNET, as a communications grid for the entire American research and post-secondary education community. However, Wolff did not want the government to be in the business of supplying network services on a permanent basis. In his view, the NSF’s role was to prime the pump, creating the initial demand needed to get a commercial networking services sector off the ground. Once that happened, Wolff felt it would be improper for a government entity to be in competition with viable for-profit businesses. So he intended to get NSF out of the way by privatizing the network, handing over control of the backbone to unsubsidized private entities and letting the market take over.

This was very much in the spirit of the times. Across the Western world, and across most of the political spectrum, government leaders of the 1980s touted privatization and deregulation as the best means to unleash economic growth and innovation after the relative stagnation of the 1970s. As one example among many, around the same time that NSFNET was getting off the ground, the FCC knocked down several decades-old constraints on corporations involved in broadcasting. In 1985, it removed the restriction on owning print and broadcast media in the same locality, and two years later it nullified the fairness doctrine, which had required broadcasters to present multiple views on public-policy debates.

From his post at NSF, Wolff had several levers at hand for accomplishing his goals. The first lay in the interpretation and enforcement of the network’s acceptable use policy (AUP). In accordance with NSF’s mission, the initial policy for the NSFNET backbone, in effect until June 1990, required all uses of the network to be in support of “scientific research and other scholarly activities.” This was quite restrictive indeed, and would seem to eliminate any possibility of commercial use of the network. But Wolff chose to interpret the policy liberally. Regular mailing-list postings about new product releases from a corporation that sold data-processing software – were those not in support of scientific research? What about the decision to allow MCI’s email system to connect to the backbone, at the urging of Vint Cerf, who had left government employ to oversee the development of MCI Mail? Wolff rationalized this – and other later interconnections to commercial email systems such as CompuServe’s – as in support of research by making it possible for researchers to communicate digitally with a wider range of people whom they might need to contact in the pursuit of their work. A stretch, perhaps.
But Wolff saw that allowing some commercial traffic on the same infrastructure that was used for public NSF traffic would encourage the private investment needed to support academic and educational use on a permanent basis. Wolff’s strategy of opening the door of NSFNET as far as possible to commercial entities got an assist from Congress in 1992, when Congressman Rick Boucher, who helped oversee NSF as chair of the Science Subcommittee, sponsored an amendment to the NSF charter which authorized any additional uses of NSFNET that would “tend to increase the overall capabilities of the networks to support such research and education activities.” This was an ex post facto validation of Wolff’s approach to commercial traffic, allowing virtually any activity as long as it produced profits that encouraged more private investment into NSFNET and its peer networks.

Dual-Use Networks

Wolff also fostered the commercial development of networking by supporting the regional networks’ reuse of their networking hardware for commercial traffic. As you may recall, the NSF backbone linked together a variety of not-for-profit regional nets, from NYSERNet in New York to Sesquinet in Texas to BARRNet in northern California. NSF did not directly fund the regional networks, but it did subsidize them indirectly, via the money it provided to labs and universities to offset the costs of their connection to their neighborhood regional net. Several of the regional nets then used this same subsidized infrastructure to spin off a for-profit commercial enterprise, selling network access to the public over the very same wires used for the research and education purposes sponsored by NSF. Wolff encouraged them to do so, seeing this as yet another way to accelerate the transition of the nation’s research and education infrastructure to private control.

This, too, accorded neatly with the political spirit of the 1980s, which encouraged private enterprise to profit from public largesse, in the expectation that the public would benefit indirectly through economic growth. One can see parallels with the dual-use regional networks in the 1980 Bayh-Dole Act, which defaulted ownership of patents derived from government-funded research to the organization performing the work, not to the government that paid for it.

The most prominent example of dual-use in action was PSINet, a for-profit company initially founded as Performance Systems International in 1988. William Schrader and Martin Schoffstall – respectively the co-founder of NYSERNet and one of its vice presidents – created the company. Schoffstall, a former BBN engineer and co-author of the Simple Network Management Protocol (SNMP) for managing the devices on an IP network, was the key technical leader. Schrader, an ambitious Cornell biology major and MBA who had helped his alma mater set up its supercomputing center and get it connected to NSFNET, provided the business drive. He firmly believed that NYSERNet should be selling service to businesses, not just educational institutions. When the rest of the board disagreed, he quit to found his own company, first contracting with NYSERNet for service, and later raising enough money to acquire its assets.
PSINet thus became one of the earliest commercial internet service providers, while continuing to provide non-profit service to colleges and universities seeking access to the NSFNET backbone.4 Wolff’s final source of leverage for encouraging a commercial Internet lay in his role as manager of the contracts with the Merit-IBM-MCI consortium that operated the backbone. The initial impetus for change in this dimension came not from Wolff, however, but from the backbone operators themselves.

A For-Profit Backbone

MCI and its peers in the telecommunications industry had a strong incentive to find or create more demand for computer data communications. They had spent the 1980s upgrading their long-line networks from coaxial cable and microwave – already much higher capacity than the old copper lines – to fiber optic cables. These cables, which transmitted laser light through glass, had tremendous capacity, limited mainly by the technology in the transmitters and receivers on either end, rather than the cable itself. And that capacity was far from saturated. By the early 1990s, many companies had deployed OC-48 transmission equipment with 2.5 Gbps of capacity, an almost unimaginable figure a decade earlier. An explosion in data traffic would therefore bring in new revenue at very little marginal cost – almost pure profit.5

The desire to gain expertise in the coming market in data communications helps explain why MCI was willing to sign on to the NSFNET bid proposed by Merit, which massively undercut the competing bids (at $14 million for five years, versus the $40 million and $25 million proposed by their competitors6), and surely implied a short-term financial loss for MCI and IBM. But by 1989, they hoped to start turning a profit from their investment. The existing backbone was approaching the saturation point, with 500 million packets a month, a 500% year-over-year increase.7 So, when NSF asked Merit to upgrade the backbone from 1.5 Mbps T1 lines to 45 Mbps T3, they took the opportunity to propose to Wolff a new contractual arrangement. T3 was a new frontier in networking – no prior experience or equipment existed for digital networks of this bandwidth, and so the companies argued that more private investment would be needed, requiring a restructuring that would allow IBM and Merit to share the new infrastructure with for-profit commercial traffic – a dual-use backbone. To achieve this, the consortium would form a new non-profit corporation, Advanced Network & Services, Inc. (ANS), which would supply T3 networking services to NSF. A subsidiary called ANS CO+RE Systems would sell the same services at a profit to any clients willing to pay.

Wolff agreed to this, seeing it as just another step in the transition of the network towards commercial control. Moreover, he feared that continuing to block commercial exploitation of the backbone would lead to a bifurcation of the network, with suppliers like ANS doing an end-run around NSFNET to create their own, separate, commercial Internet. Up to that point, Wolff’s plan for gradually getting NSF out of the way had no specific target date or planned milestones. A workshop on the topic held at Harvard in March 1990, in which Wolff and many other early Internet leaders participated, considered a variety of options without laying out any concrete plans.8 It was ANS’ stratagem that triggered the cascade of events that led directly to the full privatization and commercialization of NSFNET. It began with a backlash.
Despite Wolff’s good intentions, IBM and MCI’s ANS maneuver created a great deal of disgruntlement in the networking community. It became a problem exactly because of the for-profit networks attached to the backbone that Wolff had promoted. So far they had gotten along reasonably well with one another, because they all operated as peers on the same terms. But with ANS, a for-profit company held a de facto monopoly on the backbone at the center of the Internet.9 Moreover, despite Wolff’s efforts to interpret the AUP loosely, ANS chose to interpret it strictly, and refused to interconnect the non-profit portion of the backbone (which carried NSF traffic) with for-profit networks like PSINet, since that would require a direct mixing of commercial and non-commercial traffic. When this created an uproar, they backpedaled, and came up with a new policy, allowing interconnection for a fee based on traffic volume.

PSINet would have none of this. In the summer of 1991, they banded together with two other for-profit Internet service providers – UUNET, which had begun by selling commercial access to Usenet before adding Internet service; and the California Education and Research Federation Network, or CERFNet, operated by General Atomics – to form their own exchange, bypassing the ANS backbone. The Commercial Internet Exchange (CIX) consisted at first of just a single routing center in Washington, D.C., which could transfer traffic among the three networks. They agreed to peer at no charge, regardless of the relative traffic volume, with each network paying the same fee to CIX to operate the router. New routers in Chicago and Silicon Valley soon followed, and other networks looking to avoid ANS’ fees also joined.

Divestiture

Rick Boucher, the Congressman whom we met above as a supporter of NSF commercialization, nonetheless requested an investigation of the propriety of Wolff’s actions in the ANS affair by the Office of the Inspector General. It found NSF’s actions precipitous, but not malicious or corrupt. Nevertheless, Wolff saw that the time had come to divest control of the backbone. With ANS CO+RE and CIX, privatization and commercialization had begun in earnest, but in a way that risked splitting the unitary Internet into multiple disconnected fragments, as CIX and ANS refused to connect with one another. NSF therefore drafted a plan for a new, privatized network architecture in the summer of 1992, released it for public comment, and finalized it in May of 1993. NSFNET would shut down in the spring of 1995, and its assets would revert to IBM and MCI. The regional networks could continue to operate, with financial support from the NSF gradually phasing out over a four-year period, but would have to contract with a private ISP for internet access.

But in a world of many competitive internet access providers, what would replace the backbone? What mechanism would link these opposed private interests into a cohesive whole? Wolff’s answer was inspired by the exchanges already built by cooperatives like CIX – NSF would contract out the creation of four Network Access Points (NAPs), routing sites where various vendors could exchange traffic. Having four separate contracts would avoid repeating the ANS controversy, by preventing a monopoly on the points of exchange. One NAP would reside at the pre-existing, and cheekily named, Metropolitan Area Ethernet East (MAE-East) in Vienna, Virginia, operated by Metropolitan Fiber Systems (MFS).
MAE-West, operated by Pacific Bell, was established in San Jose, California; Sprint operated another NAP in Pennsauken, New Jersey, and Ameritech one in Chicago. The transition went smoothly10, and NSF decommissioned the backbone right on schedule, on April 30, 1995.11

The Break-up

Though Gore and others often invoked the “information superhighway” as a metaphor for digital networks, there was never serious consideration in Congress of using the federal highway system as a direct policy model. The federal government paid for the building and maintenance of interstate highways in order to provide a robust transportation network for the entire country. But in an era when both major parties took deregulation and privatization for granted as good policy, a state-backed system of networks and information services on the French model of Transpac and Minitel was not up for consideration.12 Instead, the most attractive policy model for Congress as it planned for the future of telecommunication was the long-distance market created by the break-up of the Bell System between 1982 and 1984.

In 1974, the Justice Department filed suit against AT&T, its first major suit against the organization since the 1950s, alleging that it had engaged in anti-competitive behavior in violation of the Sherman Antitrust Act. Specifically, they accused the company of using its market power to exclude various innovative new businesses from the market – mobile radio operators, data networks, satellite carriers, makers of specialized terminal equipment, and more. The suit thus clearly drew much of its impetus from the disputes ongoing since the early 1960s (described in an earlier installment) between AT&T and the likes of MCI and Carterfone. When it became clear that the Justice Department meant business, and intended to break the power of AT&T, the company at first sought redress from Congress. John de Butts, chairman and CEO since 1972, attempted to push a “Bell bill” – formally the Consumer Communications Reform Act – through Congress. It would have enshrined into law AT&T’s argument that the benefits of a single, universal telephone network far outweighed any risk of abusive monopoly, risks which in any case the FCC could already effectively check. But the proposal received stiff opposition in the House Subcommittee on Communications, and never reached a vote on the floor of either Congressional chamber.

In a change of tactics, in 1979 the board replaced the combative de Butts – who had once declared openly to an audience of state telecommunications regulators the heresy that he opposed competition and espoused monopoly – with the more conciliatory Charles Brown. But it was too late by then to stop the momentum of the antitrust case, and it became increasingly clear to the company’s leadership that they would not prevail. In January 1982, therefore, Brown agreed to a consent decree that would have the presiding judge in the case, Harold Greene, oversee the break-up of the Bell System into its constituent parts. The various Bell companies that brought copper to the customer’s premises, which generally operated by state (New Jersey Bell, Indiana Bell, and so forth), were carved up into seven blocks called Regional Bell Operating Companies (RBOCs). Working clockwise around the country, they were NYNEX in the northeast, Bell Atlantic, Bell South, Southwestern Bell, Pacific Telesis, US West, and Ameritech.
All of them remained regulated entities with an effective monopoly over local traffic in their region, but were forbidden from entering other telecom markets. AT&T itself retained the “long lines” division for long-distance traffic. Unlike local phone service, however, the settlement opened this market to free competition from any entrant willing and able to pay the interconnection fees to transfer calls into and out of the RBOCs’ networks. A residential customer in Indiana would always have Ameritech as their local telephone company, but could sign up for long-distance service with anyone. However, splitting apart the local and long-distance markets meant forgoing the subsidies that AT&T had long routed to rural telephone subscribers, under-charging them by over-charging wealthy long-distance users. A sudden spike in rural telephone prices across the nation was not politically tenable, so the deal preserved these transfers via a new organization, the non-profit National Exchange Carrier Association, which collected fees from the long-distance companies and distributed them to the RBOCs.

The new structure worked. Two major competitors entered the market in the 1980s, MCI and Sprint, and cut deeply into AT&T’s market share. Long-distance prices fell rapidly. Though it is arguable how much of this was due to competition per se, as opposed to the advent of ultra-high-bandwidth fiber optic networks, the arrangement was generally seen as a great success for deregulation and a clear argument for the power of market forces to modernize formerly hidebound industries. This market structure, created ad hoc by court fiat but evidently highly successful, provided the template from which Congress drew in the mid-1990s to finally resolve the question of what telecom policy for the Internet era would look like.

Second Time Isn’t The Charm

Prior to the main event, there was one brief preliminary. The High Performance Computing Act of 1991 was important tactically, but not strategically. It advanced no new major policy initiatives. Its primary significance lay in providing additional funding and Congressional backing for what Wolff and the NSF already were doing and intended to keep doing – providing networking services for the research community, subsidizing academic institutions’ connections to NSFNET, and continuing to upgrade the backbone infrastructure.

Then came the accession of the 104th Congress in January 1995. Republicans took control of both the Senate and the House for the first time in forty years, and they came with an agenda to fight crime, cut taxes, shrink and reform government, and uphold moral righteousness. Gore and his allies had long touted universal access as a key component of the National Information Infrastructure, but with this shift in power the prospects for a strong universal service component to telecommunications reform diminished from minimal to none. Instead, the main legislative course would consist of regulatory changes to foster competition in telecommunications and Internet access, with a serving of bowdlerization on the side.

The market conditions looked promising. Circa 1992, the major players in the telecommunications industry were numerous. In the traditional telephone industry there were the seven RBOCs, GTE, and three large long-distance companies – AT&T, MCI, and Sprint – along with many smaller ones.
The new up-and-comers included Internet service providers such as UUNET and PSINET, as well as the IBM/MCI backbone spin-off, ANS, and other companies trying to build out their own local fiber networks, such as Metropolitan Fiber Systems (MFS). BBN, the contractor behind ARPANET, had begun to build its own small Internet empire, snapping up some of the regional networks that orbited around NSFNET – Nearnet in New England, BARRNet in the Bay Area, and SURANet in the southeast of the U.S.

To preserve and expand this competitive landscape would be the primary goal of the 1996 Telecommunications Act, the only major rewrite of communications policy since the Communications Act of 1934. It intended to reshape telecommunications law for the digital age. The regulatory regime established by the original act siloed industries by their physical transmission medium – telephony, broadcast radio and television, cable TV – each in its own box, with its own rules, and generally forbidden to meddle in the others' business. As we have seen, sometimes regulators even created silos within silos, segregating the long-distance and local telephone markets. This made less and less sense as media of all types were reduced to fungible digital bits, which could be commingled on the same optical fiber, satellite transmission, or Ethernet cable.

The intent of the 1996 Act, shared by Democrats and Republicans alike, was to tear down these barriers, these "Berlin Walls of regulation", as Gore's own summary of the act put it.13 A complete itemization of the regulatory changes in this doorstopper of a bill is not possible here, but a few examples provide a taste of its character. Among other things, it allowed the RBOCs to compete in long-distance telephone markets, lifted restrictions forbidding the same entity from owning both broadcasting and cable services, and axed the rules that prevented concentration of radio station ownership.

The risk, though, of simply removing all regulation, opening the floodgates and letting any entity participate in any market, was to recreate AT&T on an even larger scale: a monopolistic megacorp that would dominate all forms of communication and stifle all competitors. Most worrisome of all was control over the so-called last mile – from the local switching office to the customer's home or office. Building an inter-urban network connecting the major cities of the U.S. was expensive but not prohibitive; several companies had done so in recent decades, from Sprint to UUNET. To replicate all the copper or cable to every home in even one urban area was another matter. Local competition in landline communications had scarcely existed since the early wildcat days of the telephone, when tangled skeins of iron wire criss-crossed urban streets. In the case of the Internet, the concern centered especially on high-speed, direct-to-the-premises data services, later known as broadband. For years, competition had flourished among dial-up Internet access providers, because all the end user required to reach the provider's computer was access to a dial tone. But this would not be the case by default for newer services that did not use the dial telephone network.

The legislative solution to this conundrum was to create the concept of the "CLEC" – competitive local exchange carrier.
The RBOCs, now referred to as "ILECs" (incumbent local exchange carriers), would be allowed full, unrestricted access to the long-distance market only once they had unbundled their networks by allowing the CLECs, which would provide their own telecommunications services to homes and businesses, to interconnect with and lease the incumbents' infrastructure. This would enable competitive ISPs and other new service providers to continue to get access to the local loop even when dial-up service became obsolete – creating, in effect, a dial tone for broadband. The CLECs, in this model, filled the same role as the long-distance providers in the post-break-up telephone market. Able to freely interconnect at reasonable fees to the existing local phone networks, they would inject competition into a market previously dominated by the problem of natural monopoly.

Besides the creation of the CLECs, the other major part of the bill that affected the Internet addressed the Republicans' moral agenda rather than their economic one. Title V, known as the Communications Decency Act, forbade the transmission of indecent or offensive material – depicting or describing "sexual or excretory activities or organs" – on any part of the Internet accessible to minors. This, in effect, was an extension of the obscenity and indecency rules that governed broadcasting into the world of interactive computing services.

How, then, did this sweeping act fare in achieving its goals? In most dimensions it proved a failure. Easiest to dispose of is the Communications Decency Act, which the Supreme Court struck down quickly (in 1997) as a violation of the First Amendment. Several parts of Title V did survive review, however, including Section 230, the most important piece of the entire bill for the Internet's future. It allows websites that host user-created content to exist without the fear of constant lawsuits, and protects the continued existence of everything from giants like Facebook and Twitter to tiny hobby bulletin boards.

The fate of the efforts to promote competition within the local loop took longer to play out, but proved no more successful than the controls on obscenity. What about the CLECs, given access to the incumbent cable and telephone infrastructure so that they could compete on price and service offerings? The law required FCC rulemaking to hash out the details of exactly what kind of unbundling had to be offered. The incumbents fought hard in the courts against any such ruling that would open up their lines to competition, repeatedly winning injunctions against the FCC, while threatening that the introduction of competitors would halt their imminent plans for bringing fiber to the home.

Then, with the arrival of the Bush Administration and new chairman Michael Powell in 2001, the FCC became actively hostile to the original goals of the Telecommunications Act. Powell believed that the need for alternative broadband access would be satisfied by intermodal competition among cable, telephone, power-line, cellular, and wireless networks. No more FCC rules in favor of CLECs would be forthcoming. For a brief time around the year 2000, it was possible to subscribe to third-party high-speed internet access using the infrastructure of your local telephone or cable provider. After that, the most central of the Telecom Act's pro-competitive measures became, in effect, a dead letter.
The much-ballyhooed fiber-to-the-home only began to reach a significant number of homes after 2010, and then only with reluctance on the part of the incumbents.14 As author Fred Goldstein put it, the incumbents had "gained a fig leaf of competition without accepting serious market share losses."15

During most of the twentieth century, networked industries in the U.S. had sprouted in a burst of entrepreneurial energy and then been fitted into the matrix of a regulatory framework as they grew large and important enough to affect the public interest. Broadcasting and cable television had followed this pattern. So had trucking and the airlines. But with the CLECs all but dead by the early 2000s, the Communications Decency Act struck down, and other attempts to control the Internet such as the Clipper chip16 stymied, the Internet would follow an opposite course. Having come to life under the guiding hand of the state, it would now be allowed to develop in an almost entirely laissez-faire fashion. The NAP framework established by the NSF at the hand-off of the backbone would be the last major government intervention in the structure of the Internet. This was true at both the transport layer (the networks, such as Verizon and AT&T, that transported raw data) and the applications layer (software services from portals like Yahoo! to search engines like Google to online stores like Amazon).

In our last chapter, we will look at the consequences of this fact, briefly sketching the evolution of the Internet in the U.S. from the mid-1990s onward.

1. Quoted in Richard Wiggins, "Al Gore and the Creation of the Internet," 2000.
2. "Remarks by Vice President Al Gore at National Press Club," December 21, 1993.
3. Biographical details on Wolff's life prior to NSF are scarce – I have recorded all of them that I could find here. Notably I have not been able to find even his date and place of birth.
4. Schrader and PSINet rode high on the Internet bubble in the late 1990s, acquiring other businesses aggressively, and, most extravagantly, purchasing the naming rights to the football stadium of the NFL's newest expansion team, the Baltimore Ravens. Schrader tempted fate with a 1997 article entitled "Why the Internet Crash Will Never Happen." Unfortunately for him, it did happen, bringing about his ouster from the company in 2001 and PSINet's bankruptcy the following year.
5. To get a sense of how fast the cost of bandwidth was declining: in the mid-1980s, leasing a T1 line from New York to L.A. would cost $60,000 per month. Twenty years later, an OC-3 circuit with 100 times the capacity cost only $5,000, more than a thousand-fold reduction in price per capacity. See Fred R. Goldstein, The Great Telecom Meltdown, 95-96. Goldstein states that the 1.55 Mbps T1/DS1 line has 1/84th the capacity of OC-3, rather than 1/100th, a discrepancy I can't account for. But this has little effect on the overall math.
6. Office of Inspector General, "Review of NSFNET," March 23, 1993.
7. Fraser, "NSFNET: A Partnership for High-Speed Networking, Final Report," 27.
8. Brian Kahin, "RFC 1192: Commercialization of the Internet Summary Report," November 1990.
9. John Markoff, "Data Network Raises Monopoly Fear," New York Times, December 19, 1991.
10. Though many other technical details had to be sorted out; see Susan R. Harris and Elise Gerich, "Retiring the NSFNET Backbone Service: Chronicling the End of an Era," ConneXions, April 1996.
11. The most problematic part of privatization proved to have nothing to do with the hardware infrastructure of the network, but instead with handing over control of the domain name system (DNS). For most of its history, its management had depended on the judgment of a single man – Jon Postel. But businesses investing millions in a commercial internet would not stand for such an ad hoc system. So the government handed control of the domain name system to a contractor, Network Solutions. The NSF had no real mechanism for regulatory oversight of DNS (though they might have done better by splitting the control of different top-level domains (TLDs) among different contractors), and Congress failed to step in to create any kind of regulatory regime. Control changed once again in 1998 to the non-profit ICANN (Internet Corporation for Assigned Names and Numbers), but the management of DNS still remains a thorny problem.
12. The only quasi-exception to this focus on fostering competition was a proposal by Senator Daniel Inouye to reserve 20% of Internet traffic for public use: Steve Behrens, "Inouye Bill Would Reserve Capacity on Infohighway," Current, June 20, 1994. Unsurprisingly, it went nowhere.
13. Al Gore, "A Short Summary of the Telecommunications Reform Act of 1996."
14. Jon Brodkin, "AT&T kills DSL, leaves tens of millions of homes without fiber Internet," Ars Technica, October 5, 2020.
15. Goldstein, The Great Telecom Meltdown, 145.
16. The Clipper chip was a proposed hardware backdoor that would give the government the ability to bypass any U.S.-created encryption software.

Further Reading

Janet Abbate, Inventing the Internet (1999)
Karen D. Fraser, "NSFNET: A Partnership for High-Speed Networking, Final Report" (1996)
Shane Greenstein, How the Internet Became Commercial (2015)
Yasha Levine, Surveillance Valley: The Secret Military History of the Internet (2018)
Rajiv Shah and Jay P. Kesan, "The Privatization of the Internet's Backbone Network," Journal of Broadcasting & Electronic Media (2007)

Steamships, Part I: Crossing the Atlantic

For much of this story, our attention has focused on events within the isle of Great Britain, and with good reason: primed by the virtuous cycle of coal, iron, and steam, the depth and breadth of Britain’s exploitation of steam power far exceeded that found anywhere else, for roughly 150 years after the groaning, hissing birth cry of steam power with the first Newcomen engine. American riverboat traffic stands out as the isolated exception. But Great Britain, island though it was, did not stand aloof from the world. It engaged in trade and the exchange of ideas, of course, but it also had a large and (despite occasional setbacks) growing empire, including large possessions in Canada, South Africa, Australia, and India. The sinews of that empire necessarily stretched across the oceans of the world, in the form of a dominant navy, a vast merchant fleet, and the ships of the East India Company, which blurred the lines of military and commercial power: half state and half corporation. Having repeatedly bested all its would-be naval rivals—Spain, the Netherlands, and France—Britain had achieved an indisputable dominance of the sea. Testing the Waters The potential advantages of fusing steam power with naval power were clear: sailing ships were slaves to the whims of the atmosphere. A calm left them helpless, a strong storm drove them on helplessly, and adverse winds could trap them in port for days on end. The fickleness of the wind made travel times unpredictable and could steal the opportunity for a victorious battle from even the strongest fleet. In 1814, Sir Walter Scott took a cruise around Scotland, and the vicissitudes of travel by sail are apparent on page after page of his memoirs:  4th September 1814… Very little wind, and that against us; and the navigation both shoally and intricate. Called a council of war; and after considering the difficulty of getting up to Derry, and the chance of being windbound when we do get there, we resolve to renounce our intended visit to that town… 6th September 1814… When we return on board, the wind being unfavourable for the mouth of Clyde, we resolve to weigh anchor and go into Lamlash Bay. 7th September, 1814 – We had amply room to repent last night’s resolution, for the wind, with its usual caprice, changed so soon as we had weighed anchor, blew very hard, and almost directly against us, so that we were beating up against it by short tacks, which made a most disagreeable night…[1] As it had done for power on land, as it had done for river travel, so steam could promise to do for sea travel: bring regularity and predictability, smoothing over the rough chaos of nature. The catch lay in the supply of fuel. A sailing ship, of course, needed only the “fuel” it gathered from the air as it went along. A riverboat could easily resupply its fuel along the banks as it travelled. A steamship crossing the Atlantic would have to bring along its whole supply. Plan of the Savannah. It is evident that she was designed as a sailing ship, with the steam engine and paddles as an afterthought. Early attempts at steam-powered sea vessels bypassed this problem by carrying sails, with the steam engine providing supplementary power. The American merchant ship Savannah crossed the Atlantic to Liverpool in this fashion in 1819. But the advantages of on-demand steam power did not justify the cost of hauling an idle engine and its fuel across the ocean. 
Its owners quickly converted the Savannah back to a pure sailing ship.[2] MacGregor Laird had a better-thought-out plan in 1832, when he dispatched the two steamships built at his family's docks, Quorra and Alburkah, along with a sailing ship, for an expedition up the River Niger to bring commerce and Christianity to central Africa. Laird's ships carried sails for the open ocean and supplied themselves regularly with wooden fuel when coasting near the shore. The steam engines achieved their true purpose once the little task force reached the river, allowing the ships to navigate easily upstream.[3]

Brunel

Laird's dream of transforming Africa ended in tatters, and in the death of most of his crew. But Laird himself survived, and he and his homeland would both have a role to play in the development of true ocean-going steamships. Laird, like the great Watt himself, was born in Greenock, on the Firth of Clyde, and Britain's first working commercial steamboats originated on the Clyde, carrying passengers among Glasgow, Greenock, Helensburgh, and other towns. Scott took passage on such a ferry from Greenock to Glasgow in the midst of his Scottish journey, and the contrast is stark in his memoirs between his passages at sea and the steam transit on the Clyde that proceeded "with a smoothness of motion which probably resembles flying."[4] The shipbuilders of the Clyde, with iron and coal close at hand, would make such smooth, predictable steam journeys ever more common in the waters of and around Britain. By 1822, they had already built forty-eight steam ferries of the sort on which Scott had ridden; in the following decade ship owners extended service out into the Irish Sea and English Channel with larger vessels, like David Napier's 240-ton, 70-horsepower Superb and 250-ton, 100-horsepower Majestic.[5]

Indeed, the most direct path to long-distance steam travel lay in larger hulls. Because of the buoyancy of water, steamships did not suffer rocket-equation-style negative returns on fuel consumption with increasing size. As the hull grew, its capacity to carry coal increased in proportion to its volume, while the drag the engines had to overcome (and thus the size of engine required) increased only in proportion to the surface area. Mark Beaufoy, a scholar of many pursuits but with a deep interest in naval matters, had shown this decisively in a series of experiments with actual hulls in water, published posthumously by his son in 1834.[6]

In the late 1830s, two competing teams of British financiers, engineers, and naval architects emerged, racing to be the first to take advantage of this fact by creating a large enough steamship to make transatlantic steam travel technically and commercially viable. In a lucky break for your historian, the more successful team was led by the more vibrant figure, Isambard Kingdom Brunel: even his name oozes character. (His rival's name, Junius Smith, begins strong but ends pedestrian.) Brunel's unusual last name came from his French father, Marc Brunel; his even more unusual middle name came from his English mother, Sophia Kingdom; and his most unusual first name descends from some Frankish warrior of old.[7] The elder Brunel came from a prosperous Norman farming family. A second son, he was to be educated for the priesthood, but rebelled against that vocation and instead joined the navy in 1786.
Forced to flee France in 1793 due to his activities in support of the royalist cause, he worked for a time as a civil engineer in New York before moving to England in 1799 to develop a mechanized process for churning out pulley blocks for the British navy with one of the great rising engineers of the day, Henry Maudslay.[8]

The most famous image of Brunel, in front of the chains of his (and the world's) largest steamship design in 1857.

Young Isambard was born in 1806, began working for his father in 1822, and got the railroad bug after riding the Liverpool and Manchester line in 1831. The Great Western Railway (GWR) company named Brunel as chief engineer in 1833, when he was just twenty-seven years old. The GWR originated with a group of Bristol merchants who saw the growth of Liverpool and feared that without a railway link to central Britain they would lose their status as the major entrepôt for British trade with the United States. It spanned the longest route of any railway to date, almost 120 miles from London to Bristol, and under Brunel's guidance the builders of the GWR leveled, bridged, and tunneled that route at unparalleled cost. Brunel insisted on widely spaced rails (seven feet apart) to allow a smooth ride at high speed, and indeed GWR locomotives achieved speeds of sixty miles per hour, with average speeds of over forty miles per hour over long distances, including stops. Though the broad-gauge rails Brunel stubbornly fought for are long gone, the iron-ribbed vaults of the train sheds he designed for each terminus—Paddington Station in London and Temple Meads in Bristol—still stand and serve railroad traffic today.[9]

An engraving of Temple Meads, Bristol terminus of the Great Western Railway.

According to legend, Brunel's quest to build a transatlantic steamer began with an off-hand quip at a meeting of the Great Western directors in October 1835.[10] When someone grumbled over the length of the railway line, Brunel said something to the effect of: "Why not make it longer, and have a steamboat to go from Bristol to New York?" Though perhaps intended as a joke, Brunel's remark spoke to the innermost dreams of the Bristol merchants, to be the indispensable link between England and America. One of them, Thomas Guppy, decided to take the idea seriously, and convinced Brunel to do the same. Brunel, never lacking in self-confidence, did not doubt that his heretofore landbound engineering skills would translate to a watery milieu, but just in case he pulled Christopher Claxton (a naval officer) and William Patterson (a shipbuilder) in on the scheme. Together they formed a Great Western Steam Ship Company.[11]

The Race to New York

Received opinion still held that a direct crossing by steam from England to New York, of over 3,000 miles, would be impossible without refueling.
Dionysius Lardner took to the hustings of the scientific world to pronounce that opinion. Dionysius Lardner, Brunel’s nemesis. One of the great enthusiasts and promoters of the railroad, Lardner was nonetheless a long-standing opponent of Brunel’s: in 1834 he had opposed Brunel’s route for the Great Western railway on the grounds that the gradient of Box Hill tunnel would cause trains to reach speeds of 120 miles-per-hour and thus suffocate the passengers.[12] He gave a talk to the British Association for the Advancement of Science in August 1836 deriding the idea of a Great Western Steamship, asserting that “[i]n proportion as the capacity of the vessel is increased, in the same ratio or nearly so must the mechanical power of the engines be enlarged, and the consumption of fuel augmented,” and that therefore a direct trip across the Atlantic would require a far more efficient engine than had ever yet been devised.[13] The Dublin-born Lardner much preferred his own scheme to drive a rail line across Ireland and connect the continents by the shortest possible water route: 2,000 miles from Shannon to Newfoundland. Brunel, however, firmly believed that a large ship would solve the fuel problem. As he wrote in a preliminary report to the company in 1836, certainly drawing on Beaufoy’s work: “…the tonnage increases as the cubes of their dimensions, while the resistance increases about as their squares; so that a vessel of double the tonnage of another, capable of containing an engine of twice the power, does not really meet with double the resistance.”[14] He, Patterson and Claxton agreed to target a 1400 ton, 400 horsepower ship. They would name her, of course, Great Western. In the post-Watt era, Britain boasted two great engine-building firms: Robert Napier’s in Glasgow in the North, and Maudslay’s in London in the south. After the death of Henry Maudslay, Marc Brunel’s former collaborator, in 1831, the business’ ownership passed to his sons. But they lacked their father’s brilliance; the key to  the firm’s future lay with the partner he had also bequeathed  to them, Joshua Field. Brunel and his father both had ties to Maudslay, and so they tapped Field to design the engine for their great ship. Field chose a “side-lever” engine design, so-called because a horizontal beam on the side of the engine rocking on a central pivot delivered power from the piston to the paddle wheels. This was the standard architecture for large marine engines, because it allowed the engine to be mounted deep in the hull, avoiding deck obstructions and keeping the ship’s center of gravity low. Field, however, added several novel features of his own devising. The most important of them was the spray condenser, which recycled some of the engine’s steam for re-use as fresh water for the boiler. This ameliorated the second-most pressing problem for long-distance steamships: the build-up of scale in the engine from saltwater.[15] The 236-foot-long, 35-foot-wide hull sported iron bracings to increase its strength (a contribution of Brunel), and cabins for 128 passengers. The extravagant, high-ceiling grand saloon provided a last, luxurious Brunel touch. By far the largest steamship yet built, Great Western would have towered over most other ships in the London docks where she was built.[16] The competing group around Junius Smith had not been idle. 
Smith, an American-born merchant who ran his business out of London, had dreamed of a steam-powered Atlantic crossing ever since 1832, when he idled through a fifty-four-day sail from England to New York – almost twice the usual duration. He formed the British and American Steam Navigation Company, and counted among his backers Macgregor Laird, the Scottish shipbuilder of the Niger River expedition. Their 1800-ton British Queen would boast a 500-horsepower engine, built by the Maudslay company's Scottish rival, Robert Napier.[17] But Smith's group fell behind the Brunel consortium (this despite the fact that Brunel still led the engineering on the not-yet-completed Great Western Railway); the Great Western would launch first. In a desperate stunt to be able to boast of making the first Atlantic crossing, British and American launched the channel steamer Sirius on April 4, 1838, from Cork on the west coast of Ireland, laden with fuel and bound for New York. Great Western left Bristol just four days later, with fifty-seven crew (fifteen of them just for stoking coal) to serve a mere seven passengers, each paying the princely sum of 35 guineas for passage.[18]

A lithograph of the Great Western (The Steamer Great Western, H.R. Robinson).

Despite three short stops to deal with engine problems and a near-mutiny by disgruntled coal stokers working in miserable conditions, Great Western nearly overtook Sirius, arriving in New York just twelve hours behind her. In total the crossing took less than sixteen days—about half the travel time of a fast sailing packet—with coal to spare in the bunkers. The ledger was not all positive: the clank of the engine, the pall of smoke and the ever-present coating of soot and coal dust drained the ocean of some of its romance; as historian Stephen Fox put it, "[t]he sea atmosphere, usually clean and bracing, felt cooked and greasy." But sixty-six passengers ponied up for the return trip: "Already… ocean travelers had begun to accept the modernist bargain of steam dangers and discomforts in exchange for consistent, unprecedented speed."[19]

In that first year, Great Western puffed alone through Atlantic waters. It made four more round trips in 1838, eking out a small profit. The British Queen launched at last in July 1839, and British and American launched an even larger ship, SS President, the following year. Among the British Queen's first passengers on its maiden voyage to New York was Samuel Cunard, a name that would resonate in ocean travel for a century to come, and an object lesson in the difference between technical and business success. In 1840 his Cunard Line began providing transatlantic service in four Britannia-class paddleships.
Imitation Great Westerns (on a slightly smaller scale), they stood out not for their size or technical novelty but for their regularity and uniformity of service. But the most important factor in Cunard’s success was outmaneuvering the Great Western Steam Company in securing a contract with the Admiralty for mail service to Halifax. This provided a steady and reliable revenue stream—starting at 60,000 pounds a year—regardless of economic downturns. Moreover, once the Navy had come to depend on Cunard for speedy mail service it had little choice but to keep upping the payments to keep his finances afloat.[20] Thanks to the savvy of Cunard, steam travel from Britain to America, a fantasy in 1836 (at least according to the likes of Dionysius Lardner), had become steady business four years later. Brunel, however, had no patience for the mere making of money. He wanted to build monuments; creations to stand the test of time, things never seen or done before. So, when, soon after the launching of the Great Western, he began to design his next great steam ship, he decided he would build it with a hull of solid iron.

The Era of Fragmentation, Part 4: The Anarchists

Between roughly 1975 and 1995, access to computers accelerated much more quickly than access to computer networks. First in the United States, and then in other wealthy countries, computers became commonplace in the homes of the affluent, and nearly ubiquitous in institutions of higher education. But if users of those computers wanted to connect their machines together – to exchange email, download software, or find a community where they could discuss their favorite hobby, they had few options. Home users could connect to services like CompuServe. But, until the introduction of flat monthly fees in the late 1980s, they charged by the hour at rates relatively few could afford. Some university students and faculty could connect to a packet-switched computer network, but many more could not. By 1981, only about 280 computers had access to ARPANET. CSNET and BITNET would eventually connect hundreds more, but they only got started in the early 1980s. At that time the U.S. counted more than 3,000 institutions of higher education, virtually all of which would have had multiple computers, ranging from large mainframes to small workstations. Both communities, home hobbyists and those academics who were excluded from the big networks, turned to the same technological solution to connect to one another. They hacked the plain-old telephone system, the Bell network, into a kind of telegraph, carrying digital messages instead of voices, and relaying messages from computer to computer across the country and the world. These were among the earliest peer-to-peer computer networks. Unlike CompuServe and other such centralized systems, onto which home computers latched to drink down information like so many nursing calves, information spread through these networks like ripples on a pond, starting from anywhere and ending up everywhere. Yet they still became rife with disputes over politics and power. In the late 1990s, as the Internet erupted into popular view, many claimed that it would flatten social and economic relations. By enabling anyone to connect with anyone, the middle men and bureaucrats who had dominated our lives would find themselves cut out of the action. A new era of direct democracy and open markets would dawn, where everyone had an equal voice and equal access. Such prophets might have hesitated had they reflected on what happened on Usenet and Fidonet in the 1980s. Be its technical substructure ever so flat, every computer network is embedded within a community of human users. And human societies, no matter how one kneads and stretches, always seem to keep their lumps. Usenet In the summer of 1979, Tom Truscott was living the dream life for a young computer nerd. A grad student in computer science at Duke University with an interest in computer chess, he landed an internship at Bell Labs’ New Jersey headquarters, where he got to rub elbows with the creators of Unix, the latest craze to sweep the world of academic computing. The origins of Unix, like those of the Internet itself, lay in the shadow of American telecommunications policy. Ken Thompson and Dennis Ritchie of Bell Labs decided in the late 1960s to build a leaner, much pared-down version of the massive MIT Multics system to which they had contributed as software developers. The new operating system quickly proved a hit within the labs, popular for its combination of low overhead (allowing it to run on even inexpensive machines) and high flexibility. However, AT&T could do little to profit from their success. 
A 1956 agreement with the Justice Department required AT&T to license non-telephone technologies to all comers at a reasonable rate, and to stay out of all business sectors other than supplying common carrier communications. So AT&T began to license Unix to universities for use in academic settings on very generous terms. These early licensees, who were granted access to the source code, began building and selling their own Unix variants, most notably the Berkeley Software Distribution (BSD) Unix created at the University of California's flagship campus. The new operating system quickly swept academia. Unlike other popular operating systems, such as the DEC TENEX / TOPS-20, it could run on hardware from a variety of vendors, many of them offering very low-cost machines. And Berkeley distributed the software for only a nominal fee, in addition to the modest licensing fee from AT&T.1

Truscott felt that he sat at the root of all things, therefore, when he got to spend the summer as Ken Thompson's intern, playing a few morning rounds of volleyball before starting work at midday, sharing a pizza dinner with his idols, and working late into the night slinging code on Unix and the C programming language. He did not want to give up the connection to that world when his internship ended, and so as soon as he returned to Duke in the fall, he figured out how to connect the computer science department's Unix-equipped PDP 11/70 back to the mothership in Murray Hill, using a program written by one of his erstwhile colleagues, Mike Lesk. It was called uucp – Unix to Unix copy – and it was one of a suite of "uu" programs new to the just-released Unix Version 7, which allowed one Unix system to connect to another over a modem. Specifically, uucp allowed one to copy files back and forth between the two connected computers, which allowed Truscott to exchange email with Thompson and Ritchie.

Undated photo of Tom Truscott

It was Truscott's fellow grad student, Jim Ellis, who had installed the new Version 7 on the Duke computer, but even as the new upgrade gave with one hand, it took away with the other. The news program distributed by the Unix users' group, USENIX, which would broadcast news items to all users of a given Unix computer system, no longer worked on the new operating system. Truscott and Ellis decided they would replace it with their own Version 7-compatible news program, with more advanced features, and return their improved software back to the community for a little bit of prestige.

At this same time, Truscott was also using uucp to connect with a Unix machine at the University of North Carolina, ten miles to the southwest in Chapel Hill, and talking to a grad student there named Steve Bellovin.2 Bellovin had also started building his own news program, which notably included the concept of topic-based newsgroups, to which one could subscribe, rather than only having a single broadcast channel for all news. Bellovin, Truscott and Ellis decided to combine their efforts and build a networked news system with newsgroups that would use uucp to share news between sites. They intended it to provide Unix-related news for USENIX members, so they called their system Usenet. Duke would serve as the central clearinghouse at first, using its auto-dialer and uucp to connect to each other site on the network at regular intervals, in order to pick up its local news updates and deposit updates from its peers.
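The mechanics of that arrangement are simple enough to sketch in a few lines of code. The toy Python below is purely illustrative – the site names, article format, and function names are invented for this example, standing in for what uucp file transfers and the news software did between them – but it captures the essential exchange: each session leaves both machines holding whatever articles the other had that they wanted and lacked.

```python
from dataclasses import dataclass, field

@dataclass
class Article:
    article_id: str     # globally unique, e.g. "1@unc"
    newsgroup: str      # e.g. "net.general"
    body: str

@dataclass
class Site:
    name: str
    subscriptions: set                              # newsgroups this site carries
    articles: dict = field(default_factory=dict)    # article_id -> Article

    def post(self, article: Article) -> None:
        self.articles[article.article_id] = article

    def wants(self, article: Article) -> bool:
        return article.newsgroup in self.subscriptions

def exchange(hub: Site, peer: Site) -> None:
    """One nightly session: the hub dials a peer, then each side copies over
    any article the other is missing and actually subscribes to."""
    for a in list(hub.articles.values()):
        if a.article_id not in peer.articles and peer.wants(a):
            peer.post(a)
    for a in list(peer.articles.values()):
        if a.article_id not in hub.articles and hub.wants(a):
            hub.post(a)

# A hypothetical three-site network with Duke as the clearinghouse.
duke = Site("duke", {"net.general", "net.v7bugs"})
unc = Site("unc", {"net.general", "net.v7bugs"})
leaf = Site("smallsite", {"net.general"})           # carries only general news

unc.post(Article("1@unc", "net.v7bugs", "tty driver hangs under load"))
duke.post(Article("1@duke", "net.general", "welcome to Usenet"))

for peer in (unc, leaf):                            # the hub polls each peer in turn
    exchange(duke, peer)

print(sorted(leaf.articles))   # ['1@duke']           -- never asked for net.v7bugs
print(sorted(unc.articles))    # ['1@duke', '1@unc']
```

Because every article carries a unique identifier, a site can merge updates from any number of peers without ever duplicating one, which is what allowed news to spread outward from any posting site and eventually reach the whole network.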
Bellovin wrote the initial code, but it used shell scripts that operated very slowly, so Stephen Daniel, another Duke grad student, rewrote the program in C. Daniel's version became known as A News. Ellis promoted the program at the January 1980 Usenix conference in Boulder, Colorado, and gave away all eighty copies of the software that he had brought with him. By the next Usenix conference that summer, the organizers had added A News to the general software package that they distributed to all attendees.

The creators described the system, cheekily, as a "poor man's ARPANET." Though one may not be accustomed to thinking of Duke as underprivileged, it did not have the clout in the world of computer science necessary at the time to get a connection to that premier American computer network. But access to Usenet required no one's permission, only a Unix system, a modem, and the ability to pay the phone bills for regular news transfers, requirements that virtually any institution of higher education could meet by the early 1980s.

Private companies also joined up with Usenet, and helped to facilitate the spread of the network. Digital Equipment Corporation (DEC) agreed to act as an intermediary between Duke and UC Berkeley, footing the long-distance telephone bills for inter-coastal data transfer. This allowed Berkeley to become a second, west-coast hub for Usenet, connecting up UC San Francisco, UC San Diego, and others, including Sytek, an early LAN business. The connection to Berkeley, an ARPANET site, also enabled cross-talk between ARPANET and Usenet (after a second re-write by Mark Horton and Matt Glickman to create B News). ARPANET sites began picking up Usenet content and vice versa, though ARPA rules technically forbade interconnection with other networks. The network grew rapidly, from fifteen sites carrying ten posts a day in 1980, to 600 sites and 120 posts in 1983, and 5000 sites and 1000 posts in 1987.3

Its creators had originally conceived Usenet as a way to connect the Unix user community and discuss Unix developments, and to that end they created two groups, net.general and net.v7bugs (the latter for discussing problems with the latest version of Unix). However, they left the system entirely open for expansion. Anyone was free to create a new group under "net", and users very quickly added non-technical topics such as net.jokes. Just as one was free to send whatever one chose, recipients could also ignore whatever groups they chose: a system could join Usenet and request data only for net.v7bugs, for example, ignoring the rest of the content.

Quite unlike the carefully planned ARPANET, Usenet self-organized and grew in an anarchic way, overseen by no central authority. Yet out of this superficially democratic medium a hierarchical order quickly emerged, with a certain subset of highly-connected, high-traffic sites recognized as the "backbone" of the system. This process developed fairly naturally. Because each transfer of data from one site to the next incurred a communications delay, each new site joining the network had a strong incentive to link itself to an already highly-connected node, to minimize the number of hops required for its messages to span the network. The backbone sites were a mix of educational and corporate sites, usually led by one headstrong individual willing to take on the thankless tasks involved in administering all the activity crossing their computer: Gary Murakami at Bell Labs' Indian Hills lab in Illinois, for example, or Gene Spafford at Georgia Tech.
The most visible exercise of the power held by this backbone administrators came in 1987, when they pushed through a re-organization of the newsgroup namespace into seven top-level buckets. comp, for example, for computer-related topics, and rec for recreational topics. Sub-topics continued to be organized hierarchically underneath the “big seven”, such as comp.lang.c for discussion of the C programming language, and rec.games.board for conversations about boardgaming. A group of anti-authoritarians, who saw this change as a coup by the “Backbone Cabal,” created their own splinter hierarchy rooted at alt, with its own parallel backbone. It included topics that were considered out-of-bounds for the big seven, such as sex and recreational drugs (e.g. alt.sex.pictures)4, as well as quirky groups that simply rubbed the backbone admins the wrong way (e.g. alt.gourmand; the admins preferred the anodyne rec.food.recipes). Despite these controversies, by the late 1980s, Usenet had become the place for the computer cognoscenti to find trans-national communities of like-minded individuals. In 1991 alone, Tim Berners-Lee announced the creation of the World Wide Web on alt.hypertext; Linus Torvalds solicited comp.os.minix for feedback on his new pet project, Linux; and Peter Adkison, due to a post on rec.games.design about his game company, connected with Richard Garfield, a collaboration that would lead to the creation of the card game Magic: The Gathering. FidoNet But even as the poor man’s ARPANET spread across the globe, microcomputer hobbyists,  with far fewer resources than even the smallest of colleges, were still largely cut off from the experience of electronic communication. Unix, a low-cost, bare-bones option by the standards of academic computing, was out of reach for hobbyists with 8-bit microprocessors, running an operating system called CP/M that barely did anything beyond managing the disk drive. But they soon began their own shoe-string experiments in low-cost peer-to-peer networking, starting with something called bulletin boards. Given the simplicity of the idea and the number of computer hobbyists in the wild at the time, it seems probable that the computer bulletin board was invented independently several times. But tradition gives precedence to the creation of Ward Christensen and Randy Suess of Chicago, launched during the great blizzard of 1978.  Christensen and Suess were both computer hobbyists in their early thirties, and members of their local computer club. For some time they had been considering creating a server where computer club members could upload news articles, using the modem file transfer software that Christensen had written for CP/M – the hobbyist equivalent of uucp. The blizzard, which kept them housebound for several days, gave them the impetus to actually get started on the project, with Christensen focusing on the software and Suess on the hardware. In particular, Suess devised a circuit that automatically rebooted the computer into the BBS software each time it detected an incoming caller, a necessary hack to ensure the system was in a good state to receive the call, given the flaky state of hobby hardware and software at the time. They called their invention CBBS, for Computerized Bulletin Board System, but most later system operators (or sysops) would drop the C and call their service a BBS.5 They published the details of what they had built in a popular hobby magazine, Byte, and a slew of imitators soon followed. 
Another new piece of technology, the Hayes Modem, fertilized this flourishing BBS scene. Dennis Hayes was another computer hobbyist, who wanted to use a modem with his new machine, but the existing commercial offerings fell into two categories: devices aimed at business customers that were too expensive for hobbyists, and acoustically-coupled modems. To connect a call on an acoustically-coupled modem you first had to dial or answer the phone manually, and then place the handset onto the modem so they could communicate. There was no way to automatically start a call or answer one. So, in 1977, Hayes designed, built, and sold his own 300 bit-per-second modem that would slot into the interior of a hobby computer. Suess and Christensen used one of these early-model Hayes modems in their CBBS. Hayes’ real breakthrough product, though, was the 1981 Smartmodem, which sat in its own external housing with its own built-in microprocessor and connected to the computer through its serial port. It sold for $299, well within reach of hobbyists who habitually spent a few thousand dollars on their home computer setups. The 300 baud Hayes Smartmodem One of those hobbyists, Tom Jennings, set in motion what became the Usenet of BBSes. A programmer for Phoenix Software in San Francisco, Jennings decided in late 1983 to write his own BBS software, not for CP/M, but for the latest and greatest microcomputer operating system, Microsoft DOS. He called it Fido, after a computer he had used at his work, so-named for its mongrel-like assortment of parts. John Madill, a salesman at ComputerLand in Baltimore, learned about Fido and called all the way across the country to ask Jennings for help in tweaking Fido to make it run on his DEC Rainbow 100 microcomputer. The two began a cross-country collaboration on the software, joined by another Rainbow enthusiast, Ben Baker of St. Louis. All three racked up substantial long-distance phone bills as they logged into one another’s machines for late-night BBS chats. With all of this cross-BBS chatter, an idea began to buzz forward from the back of Jennings’ mind, that he could create a network of BBSes that would exchange messages late at night, when long-distance rates were low. The idea was not new. Many hobbyists had imagined that BBSes could route messages in this way, all the way back to Christensen and Suess’ Byte article. But they generally had assumed that for the scheme to work, you would need very high BBS density and complex routing rules, to ensure that all the calls remained local, and thus toll-free, even when relaying messages from coast to coast. But Jennings did some back-of-the-envelope math and realized that, given increasing modem speeds (now up to 1200 bits per second for hobby modems) and falling long-distance costs, no such cleverness was necessary. Even with substantial message traffic, you could pass text between systems for a few bucks per night. Tom Jennings in 2002 (still from the BBS documentary) So he added a new program to live alongside Fido. Between one to two o’clock in the morning, Fido would shut down and FidoNet would start up. It would check Fido’s outgoing messages against a file called the node list. Each outgoing message had a node number, and each entry in the list represented a network node – a Fido BBS – and provided the phone number for that node number. 
If there were pending outgoing messages, FidoNet would dial up each of the corresponding BBSes on the node list and transfer the messages over to the FidoNet program waiting on the other side. Suddenly Madill, Jennings and Baker could collaborate easily and cheaply, though at the cost of higher latency – they wouldn’t receive any messages sent during the day until the late night transfer began. Formerly, hobbyists rarely connected with others outside their immediate area, where they could make toll-free calls to their local BBS. But if that BBS connected into FidoNet, users could suddenly exchange email with others all across the country. And so the scheme proved immensely popular, and the number of FidoNet nodes grew rapidly, to over 200 within a year. Jennings’ personal curation of the node list thus became less and less manageable. So during the first “FidoCon” in St. Louis, Jennings and Baker met in the living room of Ken Kaplan, another DEC Rainbow fan who would take an increasingly important role in the leadership of FidoNet. They came up with a new design that divided North America into nets, each consisting of many nodes. Within each net, one administrative node would take on the responsibility of  managing its local nodelist, accepting inbound traffic to its net, and forwarding those messages to the correct local node. Above the layer of nets were zones, which covered an entire continent. The system still maintained one global nodelist with the phone numbers of every FidoNet computer in the world, so any node could theoretically directly dial any other to deliver messages. This new architecture allowed the system to continue to grow, reaching almost 1,000 nodes by 1986 and just over 5,000 by 1989. Each of these nodes (itself a BBS) likely averaged 100 or so active users. The two most popular applications were the basic email service that Jennings had built into FidoNet and Echomail, created by Jeff Rush, a BBS sysop in Dallas. Functionally equivalent to Usenet newsgroups, Echomail allowed the thousands of users of FidoNet to carry out public discussions on a variety of topics. Echoes, the term for individual groups, had mononyms rather than the hierarchical names of Usenet, ranging from AD&D to MILHISTORY to ZYMURGY (home beer brewing). Jennings, philosophically speaking, inclined to anarchy, and wanted to build a neutral platform governed only by its technical standards6: I said to the users that they could do anything they wanted …I’ve maintained that attitude for eight years now, and I have never had problems running BBSs. It’s the fascist control freaks who have the troubles. I think if you make it clear that the callers are doing the policing–even to put it in those terms disgusts me–if the callers are determining the content, they can provide the feedback to the assholes. Just as with Usenet, however, the hierarchical structure of FidoNet made it possible for some sysops to exert more power than others, and rumors swirled of a powerful cabal (this time headquartered in St. Louis), seeking to take control of the system from the people. In particular, many feared that Kaplan or others around him would try to take the system commercial and start charging access to FidoNet. Of particular suspicion was the International FidoNet Association (IFNA), a non-profit that Kaplan had founded to help defray some of the costs of administering the system (especially the long-distance telephone charges). 
In 1989 those suspicions seemed to be realized when a group of IFNA leaders pushed through a referendum to make every FidoNet sysop a member of IFNA and turn it into the official governing body of the net, responsible for its rules and regulations. The measure failed, and IFNA was dissolved instead. Of course, the absence of any symbolic governing body did not eliminate the realities of power; the regional nodelist administrators instead enacted policy on an ad hoc basis. The Shadow of Internet From the late 1980s onward, FidoNet and Usenet gradually fell under the looming shadow of the Internet. By the second half of that same decade, they had been fully assimilated by it. Usenet became entangled within the webs of the Internet through the creation of NNTP – Network News Transfer Protocol – in early 1986. Conceived by a pair of University of California students (one in San Diego and the other in Berkeley), NNTP allowed TCP/IP network hosts on the Internet to create Usenet-compatible news servers. Within a few years, the majority of Usenet traffic flowed across such links, rather than uucp connections over the plain-old telephone network. The independent uucp network gradually fell into disuse, and Usenet became just another application atop TCP/IP transport. The immense flexibility of the Internet’s layered architecture made it easy to absorb a single-application network in this way.  Although by the early 1990s, several dozen gateways between FidoNet and Internet existed, allowing the two networks to exchange messages, FidoNet was not a single application, and so its traffic did not migrate onto the internet in the same way as Usenet. Instead, as people outside academia began looking for Internet access for the first time in the second half of the 1990s, BBSes gradually found themselves either absorbed into the Internet or reduced to irrelevance. Commercial BBSes generally fell into the first category. These mini-CompuServes offered BBS access for a monthly fee to thousands of users, and had multiple modems for accepting simultaneous incoming connections. As commercial access to the Internet became possible, these businesses connected their BBS to the nearest Internet network and began offering access to their customers as part of a subscription package. With more and more sites and services becoming available on the burgeoning World Wide Web, fewer and fewer users signed on to the BBS per se, and thus these commercial BBSes gradually became pure internet service providers, or ISPs. Most of the small-time hobbyist BBSes, on the other hand, became ghost towns, as users wanting to tap into the Internet flocked to their local ISPs, as well as to larger, nationally known outfits such as America Online. That’s all very well, but how did the Internet become so dominant in the first place? How did an obscure academic system, spreading gradually across elite universities for years while systems like Minitel, CompuServe and Usenet were bringing millions of users online, suddenly explode into the foreground, enveloping like kudzu all that had come before it? How did the Internet become the force that brought the era of fragmentation to an end? [Previous] [Next] Further Reading / Watching Ronda Hauben and Michael Hauben, Netizens: On the History and Impact of Usenet and the Internet, (online 1994, print 1997) Howard Rheingold, The Virtual Community (1993) Peter H. Salus, Casting the Net (1995) Jason Scott, BBS: The Documentary (2005)

ARPANET, Part 2: The Packet

By the end of 1966, Robert Taylor had set in motion a project to interlink the many computers funded by ARPA, a project inspired by the "intergalactic network" vision of J.C.R. Licklider. Taylor put the responsibility for executing that project into the capable hands of Larry Roberts. Over the following year, Roberts made several crucial decisions which would reverberate through the technical architecture and culture of ARPANET and its successors, in some cases for decades to come. The first of these in importance, though not in chronology, was to determine the mechanism by which messages would be routed from one computer to another.

The Problem

If computer A wants to send a message to computer B, how does the message find its way from the one to the other? In theory, one could allow any node in a communications network to communicate with any other node by linking every such pair with its own dedicated cable. To communicate with B, A would simply send a message over the outgoing cable that connects to B. Such a network is termed fully-connected. At any significant size, however, this approach quickly becomes impractical, since the number of connections necessary increases with the square of the number of nodes.1 Instead, some means is needed for routing a message, upon arrival at some intermediate node, on toward its final destination.

As of the early 1960s, two basic approaches to this problem were known. The first was store-and-forward message switching. This was the approach used by the telegraph system. When a message arrived at an intermediate location, it was temporarily stored there (typically in the form of paper tape) until it could be re-transmitted out to its destination, or to another switching center closer to that destination.

Then the telephone appeared, and a new approach was required. A multiple-minute delay for each utterance in a telephone call to be transcribed and routed to its destination would result in an experience rather like trying to converse with someone on Mars. Instead, the telephone system used circuit switching. The caller began each telephone call by sending a special message indicating whom they were trying to reach. At first this was done by speaking to a human operator, later by dialing a number which was processed by automatic switching equipment. The operator or equipment established a dedicated electric circuit between caller and callee. In the case of a long-distance call, this might take several hops through intermediate switching centers. Once this circuit was completed, the actual telephone call could begin, and that circuit was held open until one party or the other terminated the call by hanging up.

The data links that would be used in ARPANET to connect time-shared computers partook of qualities of both the telegraph and the telephone. On the one hand, data messages came in discrete bursts, like the telegraph, unlike the continuous conversation of a telephone. But these messages could come in a variety of sizes for a variety of purposes, from console commands only a few characters long to large data files being transferred from one computer to another. If the latter suffered some delays in arriving at their destination, no one would particularly mind. But remote interactivity required very fast response times, rather like a telephone call.

One important difference between computer data networks and both the telephone and the telegraph was the error-sensitivity of machine-processed data.
A single character changed or lost in transmission in a telegram, or a fragment of a word dropped in a telephone conversation, was unlikely to seriously impair human-to-human communication. But if noise on the line flipped a single bit from 0 to 1 in a command to a remote computer, it could entirely change the meaning of that command. Therefore every message would have to be checked for errors and re-transmitted if any were found. Such repetition would be very costly for large messages, which were all the more likely to be disrupted by errors, since they took longer to transmit.

A solution to these problems was arrived at independently on two different occasions in the 1960s, but it was the later instance that first came to the attention of Larry Roberts and ARPA.

The Encounter

In the fall of 1967, Roberts arrived in Gatlinburg, Tennessee, hard by the forested peaks of the Great Smoky Mountains, to deliver a paper on ARPA’s networking plans. Almost a year into his stint at the Information Processing Techniques Office (IPTO), many areas of the network design were still hazy, among them the solution to the routing problem. Other than a vague mention of blocks and block size, the only reference to it in Roberts’ paper is a brief and rather noncommittal passage at the very end: “It has proven necessary to hold a line which is being used intermittently to obtain the one-tenth to one second response time required for interactive work. This is very wasteful of the line and unless faster dial up times become available, message switching and concentration will be very important to network participants.”2

Evidently, Roberts had still not entirely decided whether to abandon the approach he had used in 1965 with Tom Marill, that is to say, connecting computers over the circuit-switched telephone network via an auto-dialer. Coincidentally, however, someone else was attending the same symposium with a much better thought-out idea for solving the problem of routing in data networks. Roger Scantlebury had crossed the Atlantic from the British National Physical Laboratory (NPL) to present his own paper. Scantlebury took Roberts aside after hearing his talk and told him all about something called packet-switching, a technique that his supervisor at the NPL, Donald Davies, had developed. Davies’ story and achievements are not generally well known in the U.S., although in the fall of 1967 Davies’ group at the NPL was at least a year ahead of ARPA in its thinking.

Davies, like many early pioneers of electronic computing, had trained as a physicist. He graduated from Imperial College London in 1943, when he was only 19 years old, and was immediately drafted into the “Tube Alloys” program – Britain’s code name for its nuclear weapons project. There he was responsible for supervising a group of human computers, who used mechanical and electric calculators to crank out numerical solutions to problems in nuclear fission.3 After the war, he learned from the mathematician John Womersley about a project Womersley was supervising at the NPL: building an electronic computer that would perform the same kinds of calculations at vastly greater speed. The computer, designed by Alan Turing, was called ACE, for “automatic computing engine.” Davies was sold, and got himself hired at the NPL as quickly as he could. After contributing to the detailed design and construction of the ACE machine, he remained heavily involved in computing as a research leader at the NPL.
In 1965 he happened to be in the United States for a professional meeting in that capacity, and he used the occasion to visit several major time-sharing sites to see what all the buzz was about. In the British computing community, time-sharing in the American sense – sharing a computer interactively among multiple users – was unknown. Instead, time-sharing meant splitting a computer’s workload across multiple batch-processing programs (to allow, for example, one program to proceed while another was blocked reading from a tape).4 Davies’ travels took him to Project MAC at MIT, the RAND Corporation’s JOSS Project in California, and the Dartmouth Time-Sharing System in New Hampshire. On the way home, one of his colleagues suggested they hold a seminar on time-sharing to inform the British computing community about the new techniques they had learned about in the U.S. Davies agreed, and played host to a number of major figures in American computing, among them Fernando Corbató (creator of the Compatible Time-Sharing System at MIT) and Larry Roberts himself.

During the seminar (or perhaps immediately after), Davies was struck with the notion that the time-sharing philosophy could be applied to the links between computers as well as to the computers themselves. Time-sharing computers gave each user a small time slice of the processor before switching to the next, giving each user the illusion of an interactive computer at their fingertips. Likewise, by slicing each message into standard-sized pieces, which Davies called “packets,” a single communications channel could be shared by multiple computers, or by multiple users of a single computer. Moreover, this would address all the aspects of data communication that were poorly served by telephone- or telegraph-style switching. A user engaged interactively at a terminal, sending short commands and receiving short responses, would not have their single-packet messages blocked behind a large file transfer, since that transfer would be broken into many packets. And any corruption in such a large message would affect only a single packet, which could easily be re-transmitted to complete the message.

Davies wrote up his ideas in an unpublished 1966 paper entitled “Proposal for a Digital Communication Network.” The most advanced telephone networks were then on the verge of computerizing their switching systems, and Davies proposed building packet-switching into that next-generation telephone network, thereby creating a single wide-band communications network that could serve a wide variety of uses, from ordinary telephone calls to remote computer access. By this time Davies had been promoted to Superintendent of the NPL, and he formed a data communications group under Scantlebury to flesh out his design and build a working demonstration.

Over the year leading up to the Gatlinburg conference, Scantlebury’s team had thus worked out the details of how to build a packet-switching network. The failure of a switching node could be dealt with by adaptive routing over multiple paths to the destination, and the failure of an individual packet by re-transmission. Simulation and analysis indicated an optimal packet size of around 1000 bytes – much smaller, and the loss of bandwidth to the header metadata required on each packet became too costly; much larger, and the response times for interactive users would be impaired too often by large messages.
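To make the packet idea concrete, here is a minimal sketch in Python – a toy model under stated assumptions, not the design in Davies’ proposal or the NPL papers. The Packet fields, the 32-byte payload size, the CRC check, and the sample messages are all illustrative choices; the point is only to show how standard-sized packets let a short interactive command share a channel with a long file transfer, and how a damaged packet can be re-sent on its own.

```python
# Toy illustration of packet-switching (illustrative names and sizes, not NPL's design):
# fixed-size packets with small headers, interleaving on a shared link, and
# per-packet error checking so only a damaged packet needs re-transmission.
import zlib
from dataclasses import dataclass

PAYLOAD_SIZE = 32  # bytes of data per packet; kept tiny for readability


@dataclass
class Packet:
    msg_id: int    # which message this packet belongs to
    seq: int       # position within that message
    total: int     # how many packets the message was split into
    payload: bytes
    checksum: int  # lets the receiver detect corruption in this packet alone


def packetize(msg_id: int, data: bytes) -> list[Packet]:
    """Slice a message into standard-sized packets, each with its own checksum."""
    chunks = [data[i:i + PAYLOAD_SIZE] for i in range(0, len(data), PAYLOAD_SIZE)]
    return [Packet(msg_id, seq, len(chunks), chunk, zlib.crc32(chunk))
            for seq, chunk in enumerate(chunks)]


def reassemble(packets: list[Packet]) -> bytes:
    """Put a message's packets back in order and rebuild the original bytes."""
    packets = sorted(packets, key=lambda p: p.seq)
    assert len(packets) == packets[0].total, "some packets are still missing"
    return b"".join(p.payload for p in packets)


# A long file transfer and a short interactive command share one channel.
file_transfer = packetize(1, b"A" * 200)    # becomes several packets
command = packetize(2, b"LIST /usr/alice")  # fits in a single packet

# Interleaving packets means the short command is not stuck behind the file.
link = []
while file_transfer or command:
    if command:
        link.append(command.pop(0))
    if file_transfer:
        link.append(file_transfer.pop(0))

# The receiver checks each packet; a corrupted one would be re-sent by itself,
# instead of the entire message being repeated from the beginning.
received = {}
for p in link:
    if zlib.crc32(p.payload) != p.checksum:
        continue  # in a real network: request re-transmission of just this packet
    received.setdefault(p.msg_id, []).append(p)

print(reassemble(received[2]))       # b'LIST /usr/alice'
print(len(reassemble(received[1])))  # 200
```

The fixed payload size in the sketch is where the tradeoff analyzed by Scantlebury’s team shows up: a larger payload spreads the cost of each header over more data, but it also makes interactive traffic wait longer behind every packet ahead of it on the line.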
The paper delivered by Scantlebury contained details such as a packet layout format and an analysis of the effect of packet size on network delay.

Meanwhile, Davies’ and Scantlebury’s literature search had turned up a series of detailed research papers by an American who had come up with roughly the same idea several years earlier. Paul Baran, an electrical engineer at the RAND Corporation, had not been thinking at all about the needs of time-sharing computer users, however. RAND was a Defense Department-sponsored think tank in Santa Monica, California, created in the aftermath of World War II to carry out long-range planning and analysis of strategic problems in advance of direct military needs.[^sdc] Baran’s goal was to ward off nuclear war by building a highly robust military communications net, one that could survive even a major nuclear attack. Such a network would make a Soviet preemptive strike less attractive, since it would be very hard to knock out America’s ability to respond by hitting a few key nerve centers. To that end, Baran proposed a system that would break messages into what he called message blocks, which could be independently routed across a highly redundant mesh of communications nodes, only to be reassembled at their final destination.

[^sdc]: System Development Corporation (SDC), the primary software contractor to the SAGE system and the site of one of the first networking experiments, as discussed in the last segment, had been spun off from RAND.

ARPA had access to Baran’s voluminous RAND reports, but disconnected as they were from the context of interactive computing, their relevance to ARPANET was not obvious. Roberts and Taylor seem never to have taken notice of them. Instead, in one chance encounter, Scantlebury had handed Roberts everything on a platter: a well-considered switching mechanism, its applicability to the problem of interactive computer networks, the RAND reference material, and even the name “packet.” The NPL’s work also convinced Roberts that higher speeds than he had contemplated would be needed to get good throughput, and so he upgraded his plans to 50-kilobit-per-second lines. For ARPANET, the fundamentals of the routing problem had been solved.5

The Networks That Weren’t

As we have seen, not one but two parties beat ARPA to the punch in figuring out packet-switching, a technique that has proved so effective that it is now the basis of effectively all communications. Why, then, was ARPANET the first significant network to actually make use of it?

The answer is fundamentally institutional. ARPA had no official mandate to build a communications network, but it did have a large number of pre-existing research sites with computers, a “loose” culture with relatively little oversight of small departments like the IPTO, and piles and piles of money. Taylor’s initial 1966 request for ARPANET came to $1 million, and Roberts continued to spend that much or more every year from 1969 onward to build and operate the network6. Yet for ARPA as a whole this amount of money was pocket change, so none of his superiors worried too much about what Roberts was doing with it, so long as it could be vaguely justified as related to national defense.

By contrast, Baran at RAND had no means or authority to actually build anything. His work was pure research and analysis, which might be applied by the military services if they desired to do so. In 1965, RAND did recommend his system to the Air Force, which agreed that Baran’s design was viable.
But the implementation fell within the purview of the Defense Communications Agency, which had no real understanding of digital communications. Baran convinced his superiors at RAND that it would be better to withdraw the proposal than to allow a botched implementation to sully the reputation of distributed digital communication.

Davies, as Superintendent of the NPL, had rather more executive authority than Baran, but a more limited budget than ARPA, and no pre-existing social and technical network of research computer sites. He was able to build a prototype local packet-switching “network” (it had only one node, but many terminals) at the NPL in the late 1960s, on a modest budget of £120,000 over three years.7 ARPANET spent roughly half that on annual operations and maintenance alone at each of its many network sites, excluding the initial investment in hardware and software.8 The organization that would have had the power to build a large-scale British packet-switching network was the Post Office, which operated the country’s telecommunications networks in addition to its traditional postal system. Davies managed to interest a few influential Post Office officials in his ideas for a unified, national digital network, but changing the momentum of such a large system was beyond his power.

Licklider, through a combination of luck and planning, had found the perfect hothouse for his intergalactic network to blossom in. That is not to say that everything except the packet-switching concept was a mere matter of money. Execution matters, too. Moreover, several other important design decisions defined the character of ARPANET. The next one we will consider is how responsibilities would be divided between the host computers sending and receiving a message and the network over which they sent it.

Further Reading

Janet Abbate, Inventing the Internet (1999)
Katie Hafner and Matthew Lyon, Where Wizards Stay Up Late (1996)
Leonard Kleinrock, “An Early History of the Internet,” IEEE Communications Magazine (August 2010)
Arthur Norberg and Julie O’Neill, Transforming Computer Technology: Information Processing for the Pentagon, 1962-1986 (1996)
M. Mitchell Waldrop, The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal (2001)
