Britain’s Steam Empire

The British empire of the nineteenth century dominated the world’s oceans and much of its landmass: Canada, southern and northeastern Africa, the Indian subcontinent, and Australia. At its world-straddling Victorian peak, this political and economic machine ran on the power of coal and steam; the same can be said of all the other major powers of the time, from also-ran empires such as France and the Netherlands, to the rising states of Germany and the United States.

Two technologies bound the far-flung British empire together: steamships and the telegraph. And the latter, which might seem to represent a new, independent technical paradigm based on electricity, depended on the former: only steamships, which could adjust course and speed at will regardless of prevailing winds, could effectively lay underwater cable.[1]

A 1901 map of the cable network of the Eastern Telegraph Company (which later became Cable & Wireless) shows the pervasive commercial and imperial power of Victorian London.

Not just an instrument of imperial power, the steamer also created new imperial appetites: the British empire and others would seize new territories just for the sake of provisioning their steamships and protecting the routes they plied.

Within this world system under British hegemony, access to coal became a central economic and strategic factor. As the economist Stanley Jevons wrote in his 1865 treatise The Coal Question:

Day by day it becomes more obvious that the Coal we happily possess in excellent quality and abundance is the Mainspring of Modern Material Civilization. …Coal, in truth, stands not beside but entirely above all other commodities. It is the material energy of the country — the universal aid — the factor in everything we do. With coal almost any feat is possible or easy; without it we are thrown back into the laborious poverty of early times.[2]

Steamboats and the Projection of Power

As the states of Atlantic Europe—Portugal and Spain, then later the Netherlands, England, and France—began to explore and conquer along the coasts of Africa and Asia in the sixteenth and seventeenth centuries, their cannon-armed ships proved one of their major advantages. Though the states of India and Indonesia had access to their own gunpowder weaponry, they did not have the ship-building technology to build stable firing platforms for large cannon broadsides. The mobile fortresses that the Europeans brought with them allowed them to dominate the sea lanes and coasts, wresting control of the Indian Ocean trade from the local powers.[3]

What they could not do, however, was project power inland from the sea. The galleons and later heavily armed ships of the Europeans could not sail upriver. In this era, Europeans could rarely dominate inland states. When it did happen, as in India, it typically required years or decades of warfare and politicking, with the aid of local alliances. The steamboat, however, opened the rivers of Africa and Asia to lightning attacks or shows of force: directly by armed gunboats themselves, or indirectly through armies moving upriver supplied by steam-powered craft.

We already know, of course, how Laird used steamboats in his expedition up the Niger in 1832. Although his intent was purely commercial, not belligerent, he had demonstrated that the interior of Africa could be navigated with steam. When combined with quinine to protect European settlers from malaria, the steamboat would help open a new wave of imperial claims on African territory.

But even before Laird’s expedition, the British empire had begun to experiment with the capabilities of riverine steamboats. British imperial policy in Asia still operated under the corporate auspices of the East India Company (EIC), not under the British government, and in 1824 the EIC went to war with Burma over control of territories between the Burmese Empire and British India, in what is now Bangladesh. It so happened that the company had several steamers on hand, built in the dockyards of Calcutta (now Kolkata), and the local commanders put them to work in war service (much as Andrew Jackson had done with Shreve’s Enterprise in 1814).[4]

Most impressive was Diana, which penetrated 400 miles up the Irrawaddy to the Burmese imperial capital at Amarapura: “she towed sailing ships into position, transported troops, reconnoitered advance positions, and bombarded Burmese fortifications with her swivel guns and Congreve rockets.”[5] She also captured Burmese warships, which could not outrun her and whose small cannons on fixed mounts could not bring effective fire to bear on her.

A depiction of an attack on Burmese fortifications by the British fleet. The steamship Diana is at right.

In the Burmese war, however, steamships had served as the supporting cast. In the First Opium War, the steamship Nemesis took a star turn. The East India Company traditionally made its money by bringing the goods of the East—mainly tea, spices, and cotton cloth—back west to Europe. In the nineteenth century, however, the directors had found an even more profitable way to extract money from their holdings in the subcontinent: by growing poppies and trading the extracted drug even further east, to the opium dens of China. The Qing state, understandably, grew to resent this trade that immiserated its citizens, and so in 1839 the emperor promulgated a ban on the drug.

The iron-hulled Nemesis was built and dispatched to China by the EIC with the express purpose of carrying war up China’s rivers. She mounted a powerful main battery of twin swivel-mount 32-pounders and numerous smaller weapons, and her shallow draft allowed her to navigate not just up the Pearl River, but into the shallow waterways around Canton (Guangzhou), destroying fortifications and ships and wreaking general havoc. Later, Nemesis and several other steamers, towing sailing warships, brought British naval power 150 miles up the Yangtze to its junction with the Grand Canal. The threat to this vital economic lifeline brought the Chinese government to terms.[6]

Nemesis and several British boats destroying a fleet of Chinese junks in 1841.

Steamboats continued to serve in imperial wars throughout the nineteenth century. A steam-powered naval force dispatched from Hong Kong helped to break the Indian Rebellion of 1857. Steamers supplied Herbert Kitchener’s 1898 expedition up the Nile to the Sudan, with the dual purpose of avenging the death of Charles “Chinese” Gordon thirteen years earlier and of preventing the French from securing a foothold on the Nile. His steamboat force consisted of a mix of naval gunboats and a civilian ship requisitioned from the ubiquitous Cook & Son tourism and logistics firm.[7]

Kitchener could dispatch such an expedition only because of the British power base in Cairo (from which Britain ruled Egypt through a puppet khedive), and that power base existed for one primary reason: to protect the Suez Canal.

The Geography of Steam: Suez

In 1798, Napoleon’s army of conquest, revolution, and Enlightenment arrived in Egypt with the aim of controlling the Eastern half of the Mediterranean and cutting off Britain’s overland link to India. There they uncovered the remnants of a canal linking the Nile Delta to the Red Sea. Constructed in antiquity and restored several times after, it had fallen into disuse sometime in the medieval period. It’s impossible to know for certain, but when operable, this canal had probably served as a regional waterway connecting the Egyptian heartland around the Nile with the lands around the head of the Red Sea. By the eighteenth century, in an age of global commerce and global empires, however, a nautical connection between the Mediterranean and Red Sea had more far-reaching implications.[8]

A reconstruction of the possible location of the ancient Nile-Suez canal. [Picture by Annie Brocolie / CC BY-SA 2.5]

Napoleon intended to restore the canal, but before any work could commence, France’s forces in Egypt withdrew in the face of a sustained Anglo-Ottoman assault. Though British commercial and imperial interests stood to gain far more from a canal than France ever could, the British government fretted about upsetting the balance of power in the Middle East and disrupting the British textile industry’s access to Egyptian cotton. The British contented themselves instead with a cumbrous overland route to link the Red Sea and the Mediterranean. Meanwhile, a series of French engineers and diplomats, culminating in Ferdinand de Lesseps, pressed for the concession required to build a sea-to-sea Suez Canal, and construction under French engineers finally began in 1861. The route formally opened in November 1869 in a grand celebration that attracted most of the crowned heads of continental Europe.[9]

It was just as well that the project was delayed: it allowed for the substitution, in 1865, of steam dredges for conscripted labor at the work site. Of the hundred million cubic yards of earth excavated for the canal, four-fifths were dug out with iron and steam rather than muscle, the dredging machinery generating some 10,000 horsepower and consuming £20,000 of coal per month.[10] Without mechanical aid, the project would have dragged on well into the 1870s, if it had been completed at all. Moreover, Napoleon’s precocious belief in the project notwithstanding, the canal’s ultimate fiscal health depended on the existence of ocean-going steamships as well. For a sailing ship, depending on the direction of travel and the season, the powerful trade winds on the southern route around the Cape could make that route the faster option, or at least the more efficient one given the tolls on the canal.[11] But for a steamship, the benefits of cutting thousands of miles off the journey were three-fold: it saved time, it saved fuel, and the fuel saved freed more space for cargo. Given the tradeoffs, as the historian Max Fletcher wrote, “[a]lmost without exception, the Suez Canal was an all-steamer route.”[12]

The modern Suez Canal, with the Mediterranean Sea on the left and the Red Sea on the right. [Picture by Pierre Markuse / CC BY 2.0]

Ironically, the British, too conservative in their instincts to back the canal project, would nonetheless derive far more obvious benefit from it than the French government or investors, who struggled to make their money back in the early years of the canal. The new canal became the lifeline to the empire in India and beyond.

This new channel for the transit of people and goods was soon complemented by an even more rapid channel for the transmission of intelligence. The first great achievement of the global telegraph age was the transatlantic cable laid in 1866 by Brunel’s Great Eastern, whose cavernous bulk allowed it to lay the entire line from Ireland to Newfoundland in a single piece.[13] This particular connection served mainly commercial interests, but the Great Eastern went on to participate in the laying of a cable from Suez to Aden and on to Bombay in 1870, providing relatively instantaneous electric communication (modulo a few intermediate hops) from London to Britain’s most precious imperial possession.[14]

The importance of the Suez for quick communications with India in turn led to further aggressive British expansion in 1882: the bombardment of Alexandria and the de facto conquest of an Egypt still nominally loyal to the Sultan in Istanbul. Nor was this the only such instance: steam power opened up new ways for empires to exert their might, but it also pulled them toward new places sought out only because steam power itself had made them important.

The Geography of Steam: Coaling Stations

In that vein, coaling stations—coastal and island stations for restocking ships with fuel—became an essential component of global empire. In 1839, the British seized the port of Aden (on the gulf of the same name) from the Sultan of Lahej for exactly that purpose, to serve as a coaling station for the steamers operating between the Red Sea and India.[15]

Other, pre-existing waystations waxed or waned in importance along with the shift from the geography of sail to that of steam. St. Helena in the Atlantic, governed by the East India Company since the 1650s, could only be of use to ships returning from Asia in the age of sail, due to the prevailing trade winds that pushed outbound ships towards South America. The advent of steam made an expansion of St. Helena’s role possible, but then the opening of Suez diverted traffic away from the South Atlantic altogether. The opening of the Panama Canal similarly eclipsed the Falkland Islands’ position as the gateway to the Pacific.[16]

In the case of shore-bound stations such as Aden, the need to protect the station itself sometimes led to new imperial commitments in its hinterlands, pulling empire onward in the service of steam. Aden’s importance only multiplied with the opening of the Suez Canal, which now made it part of the seven-thousand-mile relay system between Great Britain and India. Aggressive moves by the Ottoman Empire seemed to imperil this lifeline, and so the existence of the station became the justification for Britain to create a protectorate (a collection of vassal states, in effect) over 100,000 square miles of the Arabian Peninsula.[17]

Britain created the 100,000-square-mile Aden protectorate to safeguard its steamship route to India.

Coaling stations acquired local coal where it was available (from North America, South Africa, Bengal, Borneo, or Australia); where it was not, it had to be brought in, ironically, by sailing ships. But although one lump of coal may seem as good as another, coal was not, in fact, a single fungible commodity. Each seam varied in the ratio and types of chemical impurities it contained, which affected how the coal burned.

Above all, the Royal Navy was hungry for the highest-quality coal. By the 1850s, the British Admiralty had determined that a hard coal from the deeper layers of certain coal measures in South Wales exceeded all others in the qualities required for naval operations: a maximum of energy and a minimum of both the residues that would foul engines and the black smoke that would give away a ship’s position over the horizon. In 1871 the Navy launched its first all-steam oceangoing warship, HMS Devastation, which at full bore consumed 150 tons of this top-notch coal per day; deprived of it, she would become “the veriest hulk in the navy.”

The coal mines lining a series of north-south valleys along the Bristol Channel, which had previously supplied the local iron industry, thus became part of a global supply chain. The Admiralty demanded access to imported Welsh coal across the globe, in every port where the Navy refueled, even where local supplies could be found.[18]

The dark green area indicates the coal seams of South Wales, where the best steam coal in the world could be found.

The British supply network far exceeded that of any other nation in its breadth and reliability, which gave the Royal Navy a global operational capacity that no other fleet could match. When the Russians sent their Baltic fleet halfway around the world to attack Japan in 1904-5, the British refused it coaling service and pressured the French to do likewise, leaving the ships reliant on sub-par German supplies. The fleet suffered repeated delays and quality shortfalls in its coal before meeting its grim fate in Tsushima Strait. Aleksey Novikov-Priboi, a sailor on one of the Russian ships, later wrote that “coal had developed into an idol, to which we sacrificed strength, health, and comfort. We thought only in terms of coal, which had become a sort of black veil hiding all else, as if the business of the squadron had not been to fight, but simply to get to Japan.”[19]

Even the rising naval power of the United States, stoked by the dreams of Alfred Mahan, could scarcely operate outside its home waters without British sufferance. The proud Great White Fleet of the United States that circumnavigated the globe to show the flag found itself repeatedly humbled by the failures of its supply network, reliant on British colliers or left begging for low-quality local supplies.[20]

But if British steam power on the oceans still outshone that of the U.S. even beyond the turn of the twentieth century, on land it was another matter, as we shall see next time.

ARPANET, Part 1: The Inception

By the mid-1960s, the first time-sharing systems had already recapitulated the early history of the first telephone exchanges. Entrepreneurs built those exchanges as a means to allow subscribers to summon services such as a taxi, a doctor, or the fire brigade. But those subscribers soon found their local exchange just as useful for communicating and socializing with each other.[1] Likewise, time-sharing systems, initially created to allow their users to “summon” computer power, had become communal switchboards with built-in messaging services.[2] In the decade to follow, computers would follow the telephone through its next stage: the interconnection of exchanges to form regional and long-distance networks.

The Ur-Network

The first attempt to actually connect multiple computers into a larger whole was the ur-project of interactive computing itself, the SAGE air defense system. Because each of the twenty-three SAGE direction centers covered a particular geographical area, some mechanism was needed for handing off radar tracks from one center to another when incoming aircraft crossed a boundary between those areas. The SAGE designers dubbed this problem “cross-telling,” and they solved it by building data links on dedicated AT&T phone lines among all the neighboring direction centers. Ronald Enticknap, part of a small Royal Air Force delegation to SAGE, oversaw the design and implementation of this subsystem. Unfortunately, I have found no detailed description of the cross-telling function, but evidently each direction center computer determined when a track was crossing into another sector and sent its record over the phone line to that sector’s computer, where it could be picked up by an operator monitoring a terminal there.[3]

The SAGE system’s need to translate digital data into an analog signal over the phone line (and then back again at the receiving station) led AT&T to develop the Bell 101 “dataset,” which could deliver a modest 110 bits per second. This kind of device was later called a “modem,” for its ability to modulate the analog telephone signal using an outgoing series of digital data, and to demodulate the bits from the incoming waveform. SAGE thus laid some important technical groundwork for later computer networks.

The first computer network of lasting significance, however, is one whose name is well known even today: ARPANET. Unlike SAGE, it connected a diverse set of time-shared and batch-processing machines, each with its own custom software, and it was intended to be open-ended in scope and function, fulfilling whatever purposes users might desire of it. ARPA’s section for computer research – the Information Processing Techniques Office (IPTO) – funded the project under the direction of Robert Taylor, but the idea for such a network sprang from the imagination of that office’s first director, J.C.R. Licklider.

The Vision

As we learned earlier, Licklider, known to his colleagues as ‘Lick,’ was a psychologist by training. But he became entranced with interactive computing while working on radar systems at Lincoln Laboratory in the late 1950s. This passion led him to fund some of the first experiments in time-shared computing when he became the director of the newly formed IPTO, a position he took in 1962. By that time, he was already looking ahead to the possibility of linking isolated interactive computers together into a larger superstructure.
In his 1960 paper on “man-computer symbiosis,” he wrote that:

[i]t seems reasonable to envision …a ‘thinking center’ that will incorporate the functions of present-day libraries together with anticipated advances in information storage and retrieval and the symbiotic functions suggested earlier in this paper. The picture readily enlarges itself into a network of such centers, connected to one another by wide-band communication lines and to individual users by leased-wire services.

Just as the TX-2 had kindled Licklider’s excitement over interactive computing, it may have been the SAGE computer network that prompted Licklider to imagine that a variety of interactive computing centers could be connected together to provide a kind of telephone network for intellectual services. Whatever its exact origin, Licklider began disseminating this vision among the community of researchers that he had created at IPTO, most famously in his memo of April 23, 1963, directed to the “Members and Affiliates of the Intergalactic Computer Network,” that is to say, the various researchers receiving IPTO funding for time-sharing and other computing projects.

The memo is rambling and shambolic, evidently dictated on the fly with little to no editorial revision. Determining exactly what Licklider intended it to say about computer networks therefore requires some speculative inference. But several significant clues stand out.

First, Licklider revealed that he saw the “various activities” funded by IPTO as in fact belonging to a single “overall enterprise.” He followed this pronouncement by discussing the need to allocate money and projects so as to maximize the advantage accruing to that enterprise, the network of researchers as a whole, given that, “to make progress, each of the active researchers needs a software base and a hardware facility more complex and more extensive than he, himself, can create in reasonable time.” Achieving this global efficiency might, Licklider conceded, require some individual concessions and sacrifices by certain parties.

Then Licklider began to discuss computer (rather than social) networks explicitly. He wrote of the need for some sort of network control language (what would later be called a protocol) and of his desire to eventually see an IPTO computer network consisting of “…at least four large computers, perhaps six or eight small computers, and a great assortment of disc files and magnetic tape units–not to mention the remote consoles and teletype stations…” Finally, he spent several pages laying out a concrete example of how a future interaction with such a computer network might play out. Licklider imagines a situation where he is running an analysis on some experimental data. “The trouble is,” he writes, “I do not have a good grid-plotting program. …Is there a suitable grid-plotting program anywhere in the system? Using prevailing network doctrine, I interrogate first the local facility, and then other centers.
Let us suppose that I am working at SDC, and that I find a program that looks suitable on a disc file in Berkeley.” He asks the network to execute this program for him, assuming that, “[w]ith a sophisticated network-control system, I would not decide whether to send the data and have them worked on by programs somewhere else, or bring in programs and have them work on my data.”

Taken together, these fragments of thought appear to reveal a larger scheme in Licklider’s mind: first, to parcel out particular specialties and areas of expertise among IPTO-funded researchers, and then to build beneath that social community a physical network of IPTO computers. This physical instantiation of IPTO’s “overall enterprise” would allow researchers to share in and benefit from the specialized hardware and software resources at each site. Thus IPTO would avoid wasteful duplication while amplifying the power of each funding dollar by allowing every researcher to access the full spectrum of computing capabilities across all of IPTO’s projects.

This idea, of resource-sharing among the research community via a communications network, sowed the seeds within IPTO that led, several years later, to the creation of ARPANET. Despite its military provenance, originating as it did in the halls of the Pentagon, ARPANET thus had no real military justification. It is sometimes said that the network was designed as a war-hardened communications system, capable of surviving a first-strike nuclear attack. There is a loose connection, as we’ll see later, between ARPANET and an earlier project with that aim, and ARPA’s leaders occasionally trotted out the “hardened systems” idea to justify their network’s existence before Congress or the Secretary of Defense. But in truth, IPTO built ARPANET purely for its own internal purposes, to support its community of researchers – most of whom themselves lacked any direct defense justification for their activities.

Meanwhile, by the time of his famous memo, Licklider had already begun planting the germ of his intergalactic network, to be led by Len Kleinrock at UCLA.

The Precursors

Kleinrock, the son of working-class immigrants from Eastern Europe, grew up in Manhattan in the shadow of the George Washington Bridge. He worked his way through school, taking evening sessions at City College to study electrical engineering. When he heard about a fellowship opportunity for graduate study at MIT, capped by a semester of full-time work at Lincoln Lab, he jumped at the opportunity. Though built to serve the needs of SAGE, Lincoln had since diversified into many other research projects, often only tangentially related to air defense, at best. Among them was the Barnstable Study, a concept floated by the Air Force to create an orbital belt of metallic strips (similar to chaff) to use as reflectors for a global communication system.[4]

Kleinrock had fallen under the spell of Claude Shannon at MIT, and so decided to focus his graduate work on the theory of communication networks. The Barnstable Study provided Kleinrock with his first opportunity to apply the tools of information and queuing theory to a data network (a taste of the kind of formula involved appears below), and he extended that analysis into a full dissertation on “communications nets,” combining his mathematical analysis with empirical data gathered by running simulations on Lincoln’s TX-2 computer. Among Kleinrock’s close colleagues at Lincoln, sharing time with him in front of the TX-2, were Larry Roberts and Ivan Sutherland, whom we will meet again shortly.
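For a taste of the mathematics involved, here is the standard single-link delay formula from the queuing literature Kleinrock was drawing on; it is offered only as a flavor of the approach, not as a result specific to his dissertation. For messages arriving at random at rate λ on a line that can serve them at rate μ:

```latex
% Average time a message spends waiting plus being transmitted on a single line,
% under the textbook M/M/1 queuing model:
%   \lambda = message arrival rate (messages per second)
%   \mu     = service rate, i.e. line capacity C (bits/s) divided by mean message length L (bits)
\[
  T \;=\; \frac{1}{\mu - \lambda} \;=\; \frac{1}{C/L - \lambda},
\]
% which grows without bound as the offered traffic \lambda approaches the service rate \mu.
```

The hard part, and the substance of the dissertation, was extending this sort of single-queue analysis to a whole net of interconnected lines, which is where the simulations on the TX-2 came in.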
By 1963, Kleinrock had accepted a position at UCLA, and Licklider saw an opportunity – here he had an expert in data networking at a site with three local computer centers: the main computation center, the health sciences computer center, and the Western Data Processing Center (a cooperative of thirty institutions with shared access to an IBM computer). Moreover, six of the Western Data Processing Center institutions had remote connections to the computer by modem, and the IPTO-sponsored System Development Corporation (SDC) computer resided just a few miles away in Santa Monica. IPTO issued a contract to UCLA to interconnect these four centers, as a first experiment in computer networking. Later, according to the plan, a connection with Berkeley would tackle the problems inherent in a longer-range data connection.

Despite the promising situation, the project foundered and the network was never built. The directors of the different UCLA centers didn’t trust one another, nor did they fully believe in the project, and they refused to cede control over their computing resources to one another’s users. IPTO had little leverage to influence the situation, since none of the UCLA computing centers were funded directly by ARPA.[5]

IPTO’s second try at networking proved more successful, perhaps because it was significantly more limited in scope – a mere experimental trial rather than a pilot plant. In 1965, a psychologist and disciple of Licklider’s named Tom Marill left Lincoln Lab to try to profit from the excitement around interactive computing by starting his own time-sharing business. Lacking much in the way of actual paying customers, however, he began casting about for other sources of income, and thus proposed that IPTO fund him to carry out a study of computer networking. IPTO’s new director, Ivan Sutherland, decided to bring a larger and more reputable partner on board as ballast, and so sub-contracted the work to Marill’s company via Lincoln Lab. Heading things from the Lincoln side would be another of Kleinrock’s old office-mates, Lawrence (Larry) Roberts.

Roberts had cut his teeth on the Lincoln-built TX-0 as an undergrad at MIT. He spent hours each day entranced before the glowing console screen, eventually constructing a program to (badly) recognize written characters using neural nets. Like Kleinrock, he ended up working at Lincoln for his graduate studies, solving computer graphics and computer vision problems, such as edge-detection and three-dimensional rendering, on the larger and more powerful TX-2.

Up until late 1964, Roberts had remained entirely focused on his imaging research. Then he came across Lick. In November of that year, he attended an Air Force-sponsored conference on the future of computing at the Homestead hot springs resort in western Virginia. There he talked late into the night with his fellow conference participants, and for the first time heard Lick expound on his idea for an Intergalactic Network. Roberts began to feel a tickle at the back of his brain: he had done great work on computer graphics, but it was in effect trapped on the one-of-a-kind TX-2. No one else could use his software, even if he had a way to provide it to them, because no one else had equivalent hardware to run it on. The only way to extend the influence of his work was to report on it in academic papers, in the hopes that others would and could replicate it elsewhere. Licklider was right, he decided: a network was exactly the next step needed to accelerate computing research.
And so Roberts found himself working with Marill, trying to connect the Lincoln TX-2 by a cross-country link to the SDC computer in Santa Monica, California. In an experimental design that could have been ripped straight from Licklider’s “Intergalactic Network” memo, they planned to have the TX-2 pause in the middle of a computation, use an automatic dialer to remotely call the SDC Q-32, invoke a matrix multiply program on that computer, and then continue the original computation with the answer.

Setting aside the questionable wisdom of using dearly-bought cutting-edge technology to span a continent in order to run a basic math routine, the whole process was painfully slow due to the use of the dial telephone network. To make a telephone call required setting up a dedicated circuit between the caller and recipient, usually routed through several different switching centers. As of 1965, virtually all of these were electro-mechanical.[6] Magnets shifted metal bars from one place to another in order to complete each step of the circuit. This whole process took several seconds, during which time the TX-2 could only sit idle and wait. Moreover, the lines, though perfectly suited for voice conversation, were noisy with respect to individual bits and supported very low bandwidth (a couple hundred bits per second). A truly effective intergalactic, interactive network would require a different approach.[^others]

The Marill-Roberts experiment had not shown long-distance networking to be practical or useful, merely theoretically possible. But that was enough.

The Decision

In the middle of 1966, Robert Taylor took over the directorship of IPTO, succeeding Ivan Sutherland as the third to hold that title. A disciple of Licklider and a fellow psychologist, he came to IPTO by way of a position administering computer research for NASA. Nearly as soon as he arrived, Taylor seems to have decided that the time had come to make the intergalactic network a reality, and it was Taylor who launched the project that produced ARPANET. ARPA money was still flowing freely, so Taylor had no trouble securing the extra funding from his boss, Charles Herzfeld. Nonetheless, the decision carried significant risk of failure. Other than the very limited 1965 cross-country connection, no one had ever attempted anything like ARPANET.

One could point to other early experiments in computer networking. For example, Princeton and Carnegie-Mellon set up a network of time-shared computers in the late 1960s in conjunction with IBM.[7] The main distinction between these and the ARPA effort was their uniformity – they used exactly the same computer system hardware and software at each site. ARPANET, on the other hand, would be bound to deal with diversity. By the mid-1960s, IPTO was funding well over a dozen sites, each with its own computer, and each of those computers had a different hardware design and operating software. The ability to share software was rare even among different models from a single manufacturer – only the brand-new IBM System/360 product line had attempted this feat.

This diversity of systems was a risk that added a great deal of technical complexity to the network design, but also an opportunity for Licklider-style resource sharing. The University of Illinois, for example, was in the midst of building the massive, ARPA-funded ILLIAC IV supercomputer. It seemed improbable to Taylor that the local users at Urbana-Champaign could fully utilize this huge machine.
Even sites with systems of more modest scale (the TX-2 at Lincoln and the Sigma-7 at UCLA, for example) could not normally share software due to their basic incompatibilities. The ability to overcome this limitation by directly accessing the software at one site from another was attractive. In the paper describing their networking experiment, Marill and Roberts had suggested that this kind of resource sharing would produce something akin to Ricardian comparative advantage among computing sites:

The establishment of a network may lead to a certain amount of specialization among the cooperating installations. If a given installation, X, by reason of special software or hardware, is particularly adept at matrix inversion, for example, one may expect that users at other installations in the network will exploit this capability by inverting their matrices at X in preference to doing so on their home computers.[^ricardo]

Taylor had one further motivation for proceeding with a resource-sharing network. Purchasing a new computer for each new IPTO site, with all the capabilities that might be required by the researchers at that site, had proven expensive, and as one site after another was added to IPTO’s portfolio, the budget for each was becoming thinly stretched. By putting all the IPTO-funded systems onto a single network, it might be possible to supply new grantees with more limited computers, or perhaps even none at all. They could draw whatever computer power they needed from a remote site with excess capacity, the network as a whole acting as a communal reservoir of hardware and software.

Having launched the project and secured its funding, Taylor had one last notable contribution to make to ARPANET: selecting someone to actually design the system and see it through to completion. Roberts was the obvious choice. His engineering bona fides were impeccable, he was already a respected member of the IPTO research community, and he was one of a handful of people with hands-on experience designing and building a long-distance computer network. So in the fall of 1966, Taylor called Roberts to ask him to come down from Massachusetts to work for ARPA in Washington.

But Roberts proved difficult to entice. Many of the IPTO principal investigators cast a skeptical eye on the reign of Robert Taylor, whom they viewed as something of a lightweight. Yes, Licklider had been a psychologist too, with no real engineering chops, but at least he had a doctorate, and a certain credibility earned as one of the founding fathers of interactive computing. Taylor was an unknown with a mere master’s degree. How could he oversee the complex technical work going on within the IPTO community? Roberts counted himself among these skeptics. But a combination of stick and carrot did its work. On the one hand, Taylor exerted a certain pressure on Roberts’ boss at Lincoln, reminding him that a substantial portion of his lab’s funding now came from ARPA, and that it would behoove him to encourage Roberts to see the value in the opportunity on offer. On the other hand, Taylor offered Roberts the newly-minted title of “Chief Scientist,” a position that would report over Taylor’s head directly to a Deputy Director of ARPA, and that would mark Roberts as Taylor’s successor to the directorship. On these terms Roberts agreed to take on the ARPANET project.[8] The time had come to turn the vision of resource-sharing into reality.

Further Reading

Janet Abbate, Inventing the Internet (1999)
Katie Hafner and Matthew Lyon, Where Wizards Stay Up Late (1996)
Arthur Norberg and Julie O’Neill, Transforming Computer Technology: Information Processing for the Pentagon, 1962-1986 (1996)
M. Mitchell Waldrop, The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal (2001)

High-Pressure, Part I: The Western Steamboat

The next act of the steamboat lay in the west, on the waters of the Mississippi basin. The settler population of this vast region—Mark Twain wrote that “the area of its drainage-basin is as great as the combined areas of England, Wales, Scotland, Ireland, France, Spain, Portugal, Germany, Austria, Italy, and Turkey”—was already growing rapidly in the early 1800s, and inexpensive transport to and from its interior represented a tremendous economic opportunity.[1]

Robert Livingston scored another of his political coups in 1811, when he secured monopoly rights for operating steamboats in the New Orleans Territory. (It did not hurt his cause that he himself had negotiated the Louisiana Purchase, nor that his brother Edward was New Orleans’ most prominent lawyer.) The Fulton-Livingston partnership built a workshop in Pittsburgh to build steamboats for the Mississippi trade. Pittsburgh’s central position at the confluence of the Monongahela and Allegheny made it a key commercial hub in the trans-Appalachian interior and a major boat-building center. Manufactures made there could be distributed up and down the rivers far more easily than those coming over the mountains from the coast, and so factories for making cloth, hats, nails, and other goods began to sprout up there as well.[2] The confluence of river-based commerce, boat-building, and workshop know-how made Pittsburgh the natural wellspring for western steamboating.

Figure 1: The Fulton-Livingston New Orleans. Note the shape of the hull, which resembles that of a typical ocean-going boat.

From Pittsburgh, the Fulton-Livingston boats could ride downstream to New Orleans without touching the ocean. The New Orleans, the first boat launched by the partners, went into regular service from New Orleans to Natchez (about 175 miles to the north) in 1812, but their designs—upscaled versions of their Hudson River boats—fared poorly in the shallow, turbulent waters of the Mississippi. They also suffered sheer bad luck: the New Orleans grounded fatally in 1814, and the aptly-named Vesuvius burnt to the waterline in 1816 and had to be rebuilt. The conquest of the Mississippi by steam power would fall to other men, and to a new technology: high-pressure steam.

Strong Steam

A typical Boulton & Watt condensing engine was designed to operate with steam below the pressure of the atmosphere, about fifteen pounds per square inch (psi). But the possibility of creating much higher pressures by heating steam well above the boiling point had been known for well over a century. The use of so-called “strong steam” dated back at least to Denis Papin’s steam digester of the 1670s. It had even been used to do work, in pumping engines based on Thomas Savery’s design from the early 1700s, which used steam pressure to push water up a pipe. But engine-builders did not use it widely in piston engines until well into the nineteenth century.

Part of the reason was the suppressive influence of the great James Watt. Watt knew that expanding high-pressure steam could drive a piston, and laid out plans for high-pressure engines as early as 1769, in a letter to a friend:

I intend in many cases to employ the expansive force of steam to press on the piston, or whatever is used instead of one, in the same manner as the weight of the atmosphere is now employed in common fire-engines.
In some cases I intend to use both the condenser and this force of steam, so that the powers of these engines will as much exceed those pressed only by the air, as the expansive power of the steam is greater than the weight of the atmosphere. In other cases, when plenty of cold water cannot be had, I intend to work the engines by the force of steam only, and to discharge it into the air by proper outlets after it has done its office.[3]

But he continued to rely on the vacuum created by his condenser, and never built an engine worked “by the force of steam only.” He went out of his way to ensure that no one else did either, deprecating the use of strong steam at every opportunity. There was one obvious reason why: high-pressure steam was dangerous. The problem was not the working machinery of the engine but the boiler, which was apt to explode, spewing shrapnel and superheated steam that could kill anyone nearby. Papin had added a safety valve to his digester for exactly this reason. Savery steam pumps were also notorious for their explosive tendencies. Some have imputed a baser motive to Watt’s intransigence: a desire to protect his own business from high-pressure competition. In truth, though, high-pressure boilers did remain dangerous, and would kill many people throughout the nineteenth century.

Unfortunately, the best material for building a strong boiler was also the most difficult to actually construct one from. By the beginning of the nineteenth century, copper, lead, wrought iron, and cast iron had all been tried as boiler materials, in various shapes and combinations. Copper and lead were soft; cast iron was hard, but brittle. Wrought iron clearly stood out as the toughest and most resilient option, but it could only be made in ingots or bars, which the prospective boilermaker would then have to flatten and form into small plates, many of which would have to be joined to make a complete boiler.

Advances in two fields in the decades around 1800 resolved the difficulties of wrought iron. The first was metallurgical. In the late eighteenth century, Henry Cort invented the “puddling” process of melting and stirring iron to oxidize out the carbon, producing larger quantities of wrought iron that could be rolled out into plates of up to about five feet long and a foot wide.[4] These larger plates still had to be riveted together, a tedious and error-prone process that produced leaky joints. Everything from rope fibers to oatmeal was tried as a caulking material.

Making reliable, steam-tight joints required advances in the second field: machine tooling. This was a cutting-edge field at the time (pun intended). For example, for most of history craftsmen cut or filed screws by hand. The resulting lack of consistency meant that many of the uses of screws that we take for granted were unknown: one could not cut 100 nuts and 100 bolts, for example, and then expect to thread any pair of them together. Only in the last quarter of the eighteenth century did inventors craft sufficiently precise screw-cutting lathes to make it possible to repeatedly produce screws of the same length and pitch. Careful use of tooling similarly made it possible to bore holes of consistent sizes in wrought iron plates, and then to manufacture consistently sized rivets to fit into them, without the need to hand-fit rivets to holes.[5]

One could name a few outstanding early contributors to the improvement of machine tooling in the first decades of the nineteenth century: Arthur Woolf in Cornwall, or John Hall at the U.S.
Harper’s Ferry Armory. But the steady development of improvements in boilers and other steam engine parts also involved the collective action of thousands of handcraft workers. Accustomed to building liquor stills, clocks, or scientific instruments, they gradually developed the techniques and rules of thumb needed for precision metalworking on large machines.[6] These changes did not impress Watt, and he stood by his anti-high-pressure position until his death in 1819. Two men would lead the way in rebelling against his strictures. The first appeared in the United States, far from Watt’s zone of influence, and paved the way for the conquest of the Western waters.

Oliver Evans

Oliver Evans was born in Delaware in 1755. He first honed his mechanical skills as an apprentice wheelwright. Around 1783, he began constructing a flour mill with his brothers on Red Clay Creek in northern Delaware. Hezekiah Niles, a boy of six, lived nearby. Niles would become the editor of the most famous magazine in America, from which post he later had occasion to recount that “[m]y earliest recollections pointed him out to me as a person, in the language of the day, that ‘would never be worth any thing, because he was always spending his time on some contrivance or another…’”[7]

Two great “contrivances” dominated Evans’ adult life. The challenges of the mill work at Red Clay Creek led to his first great idea: an automated flour mill. He eliminated most of the human labor from the mill by linking together the grain-processing steps with a series of water-powered machines (the most famous and delightfully named being the “hopper boy”). Though fascinating in its own right, for the purposes of our story the automated mill matters only insofar as it generated the wealth which allowed him to invest in his second great idea: an engine driven by high-pressure steam.

Figure 2: Evans’ automated flour mill.

In 1795, Evans published an account of his automatic mill entitled The Young Mill-Wright and Miller’s Guide. Something of his personality can be gleaned from the title of his 1805 sequel on the steam engine: The Abortion of the Young Steam Engineer’s Guide. A bill to extend the patent on his automatic flour mill had failed to pass Congress in 1805, and so he published his Abortion as a dramatic swoon, a loud declaration that, in response to this rebuff, he would be taking his ball and going home:

His [i.e., Evans’] plans have thus proved abortive, all his fair prospects are blasted, and he must suppress a strong propensity for making new and useful inventions and improvements; although, as he believes, they might soon have been worth the labour of one hundred thousand men.[8]

Of course, despite these dour mutterings, he failed entirely to suppress his “strong propensity”; in fact, he was in the very midst of launching new steam engine ventures at this time. Like so many other early steam inventors, Evans’ interest in steam began with a dream of a self-propelled carriage.
The first tangible evidence that we have of his interest in steam power comes from patents he filed in 1787, which included mention of a “steam-carriage, so constructed to move by the power of steam and the pressure of the atmosphere, for the purpose of conveying burdens without the aid of animal force.” The mention of “the pressure of the atmosphere” is interesting—he may have still been thinking of a low-pressure Watt-style engine at this point.[9]

By 1802, however, Evans had a true high-pressure engine of about five horsepower operating at his workshop at Ninth and Market in Philadelphia. He had established himself in that city in 1792, the better to promote his milling inventions and millwright services. He attracted crowds to his shop with demonstrations of the engine at work: driving a screw mill to pulverize plaster, or cutting slabs of marble with a saw. Bands of iron held reinforcing wooden slats against the outside of the boiler, like the rim of a cartwheel or the hoops of a barrel. This curious hallmark testified to Evans’ background as a millwright and wheelwright.[10]

The boiler, of course, had to be as strong as possible to contain the superheated steam, and Evans’ later designs made improvements in this area. Rather than the “wagon” boiler favored by Watt (shaped like a Conestoga wagon or a stereotypical construction worker’s lunchbox), he used a cylinder. A spherical boiler being infeasible to make or use, this shape distributed the force of the steam pressure as evenly as practicable over the surface. In fact, Evans’ boiler consisted of two cylinders in an elongated donut shape, because rather than placing the furnace below the boiler, he placed it inside, to maximize the surface area of water exposed to the hot air. By the time of the Steam Engineer’s Guide, he no longer used copper braced with wood; he now recommended the “best” (i.e. wrought) iron “rolled in large sheets and strongly riveted together. …As cast iron is liable to crack with the heat, it is not to be trusted immediately in contact with the fire.”[11]

Figure 3: Evans’ 1812 design, which he called the Columbian Engine to honor the young United States on the outbreak of the War of 1812. Note the flue carrying heat through the center of the boiler, the riveted wrought iron plates of the boiler, and the dainty proportions of the cylinder, in comparison to that of a Newcomen or Watt engine. Pictured in the corner is the Orukter Amphibolos.

Evans was convinced of the superiority of his high-pressure design because of a rule of thumb that he had gleaned from the article “Steam” in the American edition of the Encyclopedia Britannica: “…whatever the present temperature, an increase of 30 degrees doubles the elasticity and the bulk of water vapor.”[12] From this Evans concluded that heating steam to twice the boiling point (from 210 degrees to 420) would increase its elastic force 128 times, since a 210-degree increase in temperature would make seven doublings. This massive increase in power would require only twice the fuel (to double the heat of the steam). None of this was correct, but it would not be the first or last time that faulty science produced useful technology.[13]

Nonetheless, the high-pressure engine did have very real advantages. Because the power generated by an engine was proportional to the area of the piston times the pressure exerted on that piston, for any given horsepower a high-pressure engine could be made much smaller than its low-pressure equivalent; the sketch below spells out both Evans’ faulty arithmetic and this genuine advantage.
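To spell the numbers out (the figures are Evans’ own; the symbols below are shorthand introduced here for illustration, not anything Evans wrote):

```latex
% Evans' faulty rule of thumb: every additional 30 degrees F doubles the "elastic force" of steam.
% Heating from 210 F to 420 F is an increase of 210 degrees, hence (so he reasoned):
\[
  \frac{420 - 210}{30} = 7 \ \text{doublings}
  \quad\Longrightarrow\quad
  2^{7} = 128 \times \ \text{the force, for only } \tfrac{420}{210} = 2 \times \ \text{the fuel.}
\]

% The genuine advantage: power P scales with mean steam pressure p, piston area A,
% and mean piston speed v, so at a fixed power and piston speed the piston area
% shrinks in proportion to the pressure rise:
\[
  P \propto p \, A \, v
  \quad\Longrightarrow\quad
  A \propto \frac{1}{p},
  \qquad
  d = \sqrt{\tfrac{4A}{\pi}} \propto \frac{1}{\sqrt{p}} \ \text{for the cylinder bore } d.
\]
```

On that logic, the nine-inch cylinder mentioned just below, one third the diameter of its Boulton & Watt counterpart, implies a working pressure on the order of nine times higher for the same nominal power, give or take differences in piston speed.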
A high-pressure engine also did not require a condenser: it could vent the spent steam directly into the atmosphere. These factors made Evans’ engines smaller, lighter, simpler, and less expensive to build. A non-condensing high-pressure engine of twenty-four horsepower weighed half a ton and had a cylinder nine inches across. A traditional Boulton & Watt-style engine of the same power had a cylinder three times as wide and weighed four times as much overall.[14]

Such advantages in size and weight would count doubly for an engine used in a vehicle, i.e. an engine that had to haul itself around. In 1804 Evans sold an engine that was intended to drive a New Orleans steamboat, but it ended up in a sawmill instead. This event could serve as a metaphor for his relationship to steam transportation. He declared in his Steam Engineer’s Guide that:

The navigation of the river Mississippi, by steam engines, on the principles here laid down, has for many years been a favourite object with the author and among the fondest wishes of his heart. He has used many endeavours to produce a conviction of its practicability, and never had a doubt of the sufficiency of the power.[15]

But steam navigation never got much more than his fondest wishes. Unlike a Fitch or a Rumsey, Evans was not a man whose dreams and waking hours alike were dominated by the desire to make a steamboat. By 1805, he was a well-established man of middle years. If he had ever possessed the Tookish spirit required for riverboat adventures, he had since lost it. He had already given up on the idea of a steam carriage, after failing to sell the Lancaster Turnpike Company on the idea in 1801. His most grandiosely named project, the Orukter Amphibolos, may briefly have run on wheels en route to serving as a steam dredge in the Philadelphia harbor. If it functioned at all, though, it was by no means a practical vehicle, and it had no sequel. Evans’ attention had shifted to industrial power, where the clearest financial opportunity lay—an opportunity that could be seized without leaving Philadelphia.

Despite Evans’ calculations (erroneous, as we have said), a non-condensing high-pressure engine was somewhat less fuel-efficient than an equivalent Watt engine, not more. But because of its size and simplicity, it could be built at half the cost, and transported more cheaply, too. In time, therefore, the Evans-style engine became very popular as a mill or factory engine in the capital- and transportation-poor (but fuel-rich) trans-Appalachian United States.[16] In 1806, Evans began construction of his “Mars Works” in Philadelphia, to serve the market for engines and other equipment. Evans engines sprouted up at sawmills, flour mills, paper factories, and other industrial enterprises across the West. Then, in 1811, he organized the Pittsburgh Steam Engine Company, operated by his twenty-three-year-old son George, to reduce transportation costs for engines to be erected west of the Alleghenies.[17] It was around that nexus of Pittsburgh that Evans’ inventions would find the people with the passion to put them to work, at last, on the rivers.

The Rise of the Western Steamboat

The mature Mississippi paddle steamer differed from its Eastern antecedents in two main respects. First, in its overall shape and layout: a roughly rectangular hull with a shallow draft, layer-cake decks, and machinery above the water, not under it. This design was better adapted to an environment where snags and shallows presented a much greater hazard than waves and high winds.
Second, in the use of a high-pressure engine, or engines, with a cylinder mounted horizontally along the deck. Many historical accounts attribute both of these essential developments to a keelboatman named Henry Miller Shreve. The economic historian Louis Hunter effectively demolished this legend in the 1940s, but more recent writers (for example, Shreve’s 1984 biographer Edith McCall) have continued to perpetuate it. In fact, no one can say with certainty where most of these features came from, because no one bothered to document their introduction. As Hunter wrote:

From the appearance of the first crude steam vessels on the western waters to the emergence of the fully evolved river steamboat a generation later, we know astonishingly little of the actual course of technological events and we can follow what took place only in its broad outlines. The development of the western steamboat proceeded largely outside the framework of the patent system and in a haze of anonymity.[18]

Some documents came to light in the 1990s, however, that have burned away some of the “haze” with respect to the introduction of high-pressure engines.[19] The papers of Daniel French reveal that the key events happened in a now-obscure place called Brownsville (originally known as Redstone), about forty miles up the Monongahela from that vital center of western commerce, Pittsburgh. Brownsville was the point where anyone heading west on the main trail over the Alleghenies—which later became part of the National Road—would first reach navigable waters in the Mississippi basin.

Henry Shreve grew up not far from this spot. Born in 1785 to a father who had served as a Colonel in the Revolutionary War, he was raised on a farm near Brownsville on land leased from Washington: one of the general’s many western land-development schemes.[20] Henry fell in love with the river life, and by his early twenties had established himself with his own keelboat operating out of Pittsburgh. He made his early fortune off the fur trade boom in St. Louis, which took off after Lewis and Clark returned with reports of widespread beaver activity on the Missouri River.[21]

In the fall of 1812, a newcomer named Daniel French arrived in Shreve’s neighborhood—a newcomer who already had experience building steam watercraft, powered by engines based on the designs of Oliver Evans. French was born in Connecticut in 1770, and started planning to build steamboats in his early twenties, perhaps inspired by the work of Samuel Morey, who operated upstream of him on the Connecticut River. But, discouraged from his plans by the local authorities, French turned his inventive energies elsewhere for a time. He met and worked with Evans in Washington, D.C., to lobby Congress to extend the length of patent grants, but did not return to steamboats until Fulton’s 1807 triumph re-energized him. At this point he adopted Evans’ high-pressure engine idea, but added his own innovation: an oscillating cylinder that pivoted on trunnions as the engine worked. This allowed the piston shaft to be attached to the stern wheel with a simple (and light) crank, without any flywheel or gearing. The small size of the high-pressure cylinder made it feasible to put the cylinder itself in motion. In 1810, a steam ferry he designed for a route from Jersey City to Manhattan successfully crossed and recrossed the North (Hudson) River at about six miles per hour.
Nonetheless, Fulton, who still held a New York state monopoly, got the contract from the ferry operators.[22] French moved to Philadelphia and tried again, constructing the steam ferry Rebecca to carry passengers across the Delaware. She evidently did not produce great profits, because a frustrated French moved west again in the fall of 1812, to establish a steam-engine-building business at Brownsville.[23] His experience with building high-pressure steamboats—simple, relatively low-cost, and powerful—had arrived at the place that would benefit most from those advantages; a place, moreover, where the Fulton-Livingston interests held no legal monopoly.

News about the lucrative profits of the New Orleans on the Natchez run had begun to trickle back up the rivers. This was sufficient to convince the Brownsville notables—Shreve among them—to put up $11,000 to form the Monongahela and Ohio Steam Boat Company in 1813, with French as their engineer. French had their first boat, Enterprise, ready by the spring of 1814. Her exact characteristics are not documented, but based on the fragmentary evidence, she seems in effect to have been a motorized keelboat: 60-80’ long, about 30 tons, and equipped with a twenty-horsepower engine. The power train matched that of French’s 1810 steam ferry, trunnions and all.[24]

The Enterprise spent the summer trading along the Ohio between Pittsburgh and Louisville. Then, in December, she headed south with a load of supplies to aid in the defense of New Orleans. For this important voyage into waters mostly unknown to the Brownsville circle, they called on the experienced keelboatman Henry Shreve. Andrew Jackson had declared martial law, and kept Shreve and the Enterprise on military duty in New Orleans. With Jackson’s aid, Shreve dodged the legal snares laid for him by the Fulton-Livingston group to protect their New Orleans monopoly. Then in May, after the armistice, he brought the Enterprise on a 2,000-mile ascent back to Brownsville, the first steamboat ever to make such a journey.

Shreve became an instant celebrity. He had contributed to a stunning defeat of the British at New Orleans and carried out an unprecedented voyage. Moreover, he had confounded the monopolists: their attempt to assert exclusive rights over the commons of the river was deeply unpopular west of the Appalachians. Shreve capitalized on his new-found fame to raise money for his own steamboat company in Wheeling, Virginia. The Ohio at Wheeling ran much deeper than the Monongahela at Brownsville, and Shreve would put this depth to use: he had ambitions to put a French engine into a far larger boat than the Enterprise.

Spurring French to scale up his design was probably Shreve’s largest contribution to the evolution of the western steamboat. French dared not repeat his oscillating-cylinder trick on the larger cylinder that would drive Shreve’s 100-horsepower, 400-ton two-decker. Instead, he fixed the cylinder horizontally to the hull, and then attached the piston rod to a connecting rod, or “pitman,” that drove the crankshaft of the stern paddle wheel. He thus transferred the oscillating motion from the piston to the pitman, while keeping the overall design simple and relatively low cost.[25]

Shreve called his steamer Washington, after his father’s (and his own) hero. Her maiden voyage in 1817, however, was far from heroic.
Evans would have assured French that the high-pressure engine carried little risk: as he wrote in the Steam Engineer’s Guide, “we know how to construct [boilers] with a proportionate strength, to enable us to work with perfect safety.”[26] Yet on her first trip down the Ohio, with twenty-one passengers aboard, the Washington’s boiler exploded, killing seven passengers and three crew. The blast threw Shreve himself into the river, but he did not suffer serious harm.[27] Ironically, the only steamboat built by the Evans family, the Constitution (née Oliver Evans), suffered a similar fate in the same year, exploding and killing eleven on board.

Despite Evans’ confidence in their safety, boiler accidents continued to bedevil steamboats for decades. Though the total number killed was not enormous—about 1,500 dead across all Western rivers up to 1848—each event provided an exceptionally grisly spectacle. Consider this lurid account of the explosion of the Constitution:

One man had been completely submerged in the boiling liquid which inundated the cabin, and in his removal to the deck, the skin had separated from the entire surface of his body. The unfortunate wretch was literally boiled alive, yet although his flesh parted from his bones, and his agonies were most intense, he survived and retained all his consciousness for several hours. Another passenger was found lying aft of the wheel with an arm and a leg blown off, and as no surgical aid could be rendered him, death from loss of blood soon ended his sufferings. Miss C. Butler, of Massachusetts, was so badly scalded, that, after lingering in unspeakable agony for three hours, death came to her relief.[28]

In response to continued public outcry for an end to such horrors, Congress eventually stepped in, passing acts to improve steamboat safety in 1838 and 1852.

Meanwhile, Shreve was not deterred by the setback. The Washington itself did not suffer grievous damage, so he corrected a fault in the safety valves and tried again. Passengers were understandably reluctant to risk an encore performance, but after the Washington made national news in 1817 with a freight passage from New Orleans to Louisville in just twenty-five days, the public quickly forgot and forgave. A few days later, a judge in New Orleans refused to consider a suit by the Fulton-Livingston interests against Shreve, effectively nullifying their monopoly.[29] Now all comers knew that steamboats could ply the Mississippi successfully, and without risk of any legal action. The age of the western steamboat opened in earnest. By 1820, sixty-nine steamboats could be found on western rivers, and 187 a decade after that.[30] Builders took a variety of approaches to powering these boats: low-pressure engines, engines with vertical cylinders, engines with rocking beams or flywheels to drive the paddles.
Not until the 1830s did a dominant pattern take hold, but when it did, it was that of the Evans/French/Shreve lineage, as found on the Washington: a high-pressure engine with a horizontal cylinder driving the wheel through an oscillating connecting rod.[31]

Figure 4: A Tennessee river steamboat from the 1860s. The distinctive features include a flat-bottomed hull with very little freeboard, a superstructure to hold passengers and crew, and twin smokestacks. The western steamboat had achieved this basic form by the 1830s and maintained it into the twentieth century.

The Legacy of the Western Steamboat

The Western steamboat was a product of environmental factors that favored the adoption of a shallow-drafted boat with a relatively inefficient but simple and powerful engine: fast, shallow rivers; abundant wood for fuel along the shores of those rivers; and the geographic configuration of the United States after the Louisiana Purchase, with a high ridge of mountains separating the coast from a massive navigable inland watershed. But, Escher-like, the steamboat then looped back around to reshape the environment from which it had emerged. Just as steam-powered factories had, steam transport flattened out the cycles of nature, bulldozing the hills and valleys of time and space. Before the Washington’s journey, the shallow grade that distinguished upstream from downstream dominated the life of any traveler or trader on the Mississippi. Now goods and people could move easily upriver, in defiance of the dictates of gravity.[32] By the 1840s, steamboats were navigating well inland on other rivers of the West as well: up the Tombigbee, for example, over 200 miles inland to Columbus, Mississippi.[33]

What steamboats alone could not do to turn the western waters into turnpike roads, Shreve and others would impose on them through brute force. Steamboats frequently sank or took major damage from snags or “sawyers”: partially submerged tree limbs or trunks that obstructed the waterways. In some places, vast masses of driftwood choked the entire river. Beyond Natchitoches, the Red River was obstructed for miles by an astonishing tangle of such logs known as the Great Raft.[34]

Figure 5: A portrait of Shreve of unknown date, likely the 1840s. The scene outside the window, revealing one of his snagboats, is a frequently used device in nineteenth-century portraits of inventors.

Not only commerce was at stake in clearing the waterways of such obstructions; steamboats would be vital to any future war in the West. As early as 1814, Andrew Jackson had put Shreve’s Enterprise to good use, ferrying supplies and troops around the Mississippi delta region.[35] With the encouragement of the Monroe administration, therefore, Congress stepped in with a bill in 1824 to fund the Army’s Corps of Engineers to improve the western rivers.
Shreve was named superintendent of this effort, and secured federal funds to build snagboats such as the Heliopolis: twin-hulled behemoths designed to drive a snag between their hulls and then winch it up onto the middle deck and saw it down to size. Heliopolis and its sister ships successfully cleared large stretches of the Ohio and Mississippi.[36] In 1833, Shreve embarked on the last great venture of his life: an assault on the Great Raft itself. It took six years and a flotilla of rafts, keelboats, and steamboats to complete the job, including a new snagboat, Eradicator, built specially for the task.[37]

The clearing of waterways, technical advancements in steamboat design, and other improvements (such as the establishment of fuel depots, so that time was not wasted stopping to gather wood) combined to drive travel times along the rivers down rapidly. In 1819, the James Ross completed the New Orleans to Louisville passage in sixteen-and-a-half days. In 1824 the President covered the same distance in ten-and-a-half days, and in 1833 the Tuscarora clocked a run of seven days, six hours. These ever-decreasing record times translated directly into ever-decreasing shipping rates. Early steamboats charged upstream rates equivalent to those levied by their keelboat competitors: about five dollars per hundred pounds carried from New Orleans to Louisville. By the early 1830s this had dropped to an average of about sixty cents per hundred pounds, and by the 1840s as low as fifteen cents.[38]

By decreasing the cost of river trade, the steamboat cemented the economic preeminence of New Orleans. Cotton, sugar, and other agricultural goods (much of it produced by slave labor) flowed downriver to the port, then out to the wider world; manufactured goods and luxuries like coffee arrived from the ocean trade and were carried upriver; and human traffic, bought and sold at the massive New Orleans slave market, flowed in both directions.[39] In 1820 a steamboat arrived in New Orleans about every other day. By 1840 the city averaged over four arrivals a day; by 1850, nearly eight.[40] The population of the city burgeoned to over 100,000 by 1840, making it the third-largest in the country. Chicago, its big-shouldered days still ahead of it, remained a frontier outpost by comparison, with only 5,000 residents.

Figure 6: A Currier & Ives lithograph of the New Orleans levee. It represents a scene from the late nineteenth century, well past the prime of New Orleans’ economic dominance, but still shows a port bustling with steamboats.

But both New Orleans and the steamboat soon lost their dominance over the western economy. As Mark Twain wrote:

Mississippi steamboating was born about 1812; at the end of thirty years, it had grown to mighty proportions; and in less than thirty more, it was dead! A strangely short life for so majestic a creature.[41]

Several forces connived in the murder of the Mississippi steamboat, but a close cousin lurked among the conspirators: another form of transportation enabled by the harnessing of high-pressure steam. The story of the locomotive takes us back to Britain, and the dawn of the nineteenth century.

The Era of Fragmentation, Part 4: The Anarchists

Between roughly 1975 and 1995, access to computers accelerated much more quickly than access to computer networks. First in the United States, and then in other wealthy countries, computers became commonplace in the homes of the affluent, and nearly ubiquitous in institutions of higher education. But if users of those computers wanted to connect their machines together – to exchange email, download software, or find a community where they could discuss their favorite hobby – they had few options. Home users could connect to services like CompuServe, but, until the introduction of flat monthly fees in the late 1980s, those services charged by the hour, at rates relatively few could afford. Some university students and faculty could connect to a packet-switched computer network, but many more could not. By 1981, only about 280 computers had access to ARPANET. CSNET and BITNET would eventually connect hundreds more, but they only got started in the early 1980s. At that time the U.S. counted more than 3,000 institutions of higher education, virtually all of which would have had multiple computers, ranging from large mainframes to small workstations.

Both communities – home hobbyists and those academics who were excluded from the big networks – turned to the same technological solution to connect to one another. They hacked the plain-old telephone system, the Bell network, into a kind of telegraph, carrying digital messages instead of voices, and relaying messages from computer to computer across the country and the world. These were among the earliest peer-to-peer computer networks. Unlike CompuServe and other such centralized systems, onto which home computers latched to drink down information like so many nursing calves, information spread through these networks like ripples on a pond, starting from anywhere and ending up everywhere. Yet they still became rife with disputes over politics and power.

In the late 1990s, as the Internet erupted into popular view, many claimed that it would flatten social and economic relations. By enabling anyone to connect with anyone, the middle men and bureaucrats who had dominated our lives would find themselves cut out of the action. A new era of direct democracy and open markets would dawn, where everyone had an equal voice and equal access. Such prophets might have hesitated had they reflected on what happened on Usenet and FidoNet in the 1980s. Be its technical substructure ever so flat, every computer network is embedded within a community of human users. And human societies, no matter how one kneads and stretches them, always seem to keep their lumps.

Usenet

In the summer of 1979, Tom Truscott was living the dream life for a young computer nerd. A grad student in computer science at Duke University with an interest in computer chess, he landed an internship at Bell Labs’ New Jersey headquarters, where he got to rub elbows with the creators of Unix, the latest craze to sweep the world of academic computing.

The origins of Unix, like those of the Internet itself, lay in the shadow of American telecommunications policy. Ken Thompson and Dennis Ritchie of Bell Labs decided in the late 1960s to build a leaner, much pared-down version of the massive MIT Multics system to which they had contributed as software developers. The new operating system quickly proved a hit within the labs, popular for its combination of low overhead (allowing it to run on even inexpensive machines) and high flexibility. However, AT&T could do little to profit from their success.
A 1956 agreement with the Justice Department required AT&T to license non-telephone technologies to all comers at a reasonable rate, and to stay out of all business sectors other than supplying common carrier communications. So AT&T began to license Unix to universities for use in academic settings on very generous terms. These early licensees, who were granted access to the source code, began building and selling their own Unix variants, most notably the Berkeley Software Distribution (BSD) Unix created at the University of California’s flagship campus. The new operating system quickly swept academia. Unlike other popular operating systems, such as the DEC TENEX / TOPS-20, it could run on hardware from a variety of vendors, many of them offering very low-cost machines. And Berkeley distributed the software for only a nominal fee, in addition to the modest licensing fee from AT&T.1

Truscott felt that he sat at the root of all things, therefore, when he got to spend the summer as Ken Thompson’s intern, playing a few morning rounds of volleyball before starting work at midday, sharing a pizza dinner with his idols, and working late into the night slinging code on Unix and the C programming language. He did not want to give up the connection to that world when his internship ended, and so as soon as he returned to Duke in the fall, he figured out how to connect the computer science department’s Unix-equipped PDP 11/70 back to the mothership in Murray Hill, using a program written by one of his erstwhile colleagues, Mike Lesk. It was called uucp – Unix to Unix copy – and it was one of a suite of “uu” programs new to the just-released Unix Version 7, which allowed one Unix system to connect to another over a modem. Specifically, uucp allowed one to copy files back and forth between the two connected computers, which allowed Truscott to exchange email with Thompson and Ritchie.

Undated photo of Tom Truscott

It was Truscott’s fellow grad student, Jim Ellis, who had installed the new Version 7 on the Duke computer, but even as the new upgrade gave with one hand, it took away with the other. The news program distributed by the Unix users’ group, USENIX, which would broadcast news items to all users of a given Unix computer system, no longer worked on the new operating system. Truscott and Ellis decided they would replace it with their own Version 7-compatible news program, with more advanced features, and return their improved software back to the community for a little bit of prestige.

At the same time, Truscott was also using uucp to connect with a Unix machine at the University of North Carolina, ten miles to the southwest in Chapel Hill, and talking to a grad student there named Steve Bellovin.2 Bellovin had also started building his own news program, which notably included the concept of topic-based newsgroups, to which one could subscribe, rather than only having a single broadcast channel for all news. Bellovin, Truscott, and Ellis decided to combine their efforts and build a networked news system with newsgroups that would use uucp to share news between sites. They intended it to distribute Unix-related news to USENIX members, so they called their system Usenet. Duke would serve as the central clearinghouse at first, using its auto-dialer and uucp to connect to every other site on the network at regular intervals, in order to pick up that site’s local news updates and deposit updates from its peers.
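To make the mechanics concrete, here is a rough sketch in Python of the batch-exchange model just described. It is purely illustrative – not the software the Duke and UNC students actually wrote, and every site name, group name, and data structure in it is invented: each site keeps a set of newsgroup subscriptions, and each periodic uucp “call” simply copies over whatever articles the other side has not yet seen and actually wants.

    # Sketch of Usenet-style batch exchange over periodic "calls" between sites.
    # Not historical code; names and structures are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Article:
        article_id: str   # unique ID, used to suppress duplicates as news floods around
        newsgroup: str    # e.g. "net.general" or "net.v7bugs"
        body: str

    @dataclass
    class Site:
        name: str
        subscriptions: set[str]                     # newsgroups this site wants
        articles: dict[str, Article] = field(default_factory=dict)

        def post(self, article: Article) -> None:
            self.articles[article.article_id] = article

        def accept(self, article: Article) -> None:
            # Take an incoming article only if it is new and in a subscribed group.
            if article.newsgroup in self.subscriptions and article.article_id not in self.articles:
                self.articles[article.article_id] = article

    def exchange(hub: Site, peer: Site) -> None:
        """One periodic 'uucp call': copy news in both directions."""
        for article in list(peer.articles.values()):
            hub.accept(article)
        for article in list(hub.articles.values()):
            peer.accept(article)

    # Example: a hub carrying two groups, and a peer subscribed only to net.v7bugs.
    duke = Site("duke", {"net.general", "net.v7bugs"})
    unc = Site("unc", {"net.v7bugs"})
    duke.post(Article("<1@duke>", "net.general", "hello, world"))
    duke.post(Article("<2@duke>", "net.v7bugs", "tty driver bug"))
    exchange(duke, unc)
    print(sorted(unc.articles))   # only the net.v7bugs article arrives: ['<2@duke>']

The duplicate-suppression check is what lets news spread "like ripples on a pond": an article can arrive at a site over any path, and further copies are simply ignored.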
Bellovin wrote the initial code, but it used shell scripts that operated very slowly, so Stephen Daniel, another Duke grad student, rewrote the program in C. Daniel’s version became known as A News. Ellis promoted the program at the January 1980 Usenix conference in Boulder, Colorado, and gave away all eighty copies of the software that he had brought with him. By the next Usenix conference that summer, the organizers had added A News to the general software package that they distributed to all attendees.

The creators described the system, cheekily, as a “poor man’s ARPANET.” Though one may not be accustomed to thinking of Duke as underprivileged, it did not have the clout in the world of computer science necessary at the time to get a connection to that premier American computer network. But access to Usenet required no one’s permission, only a Unix system, a modem, and the ability to pay the phone bills for regular news transfers – requirements that virtually any institution of higher education could meet by the early 1980s.

Private companies also joined up with Usenet, and helped to facilitate the spread of the network. Digital Equipment Corporation (DEC) agreed to act as an intermediary between Duke and UC Berkeley, footing the long-distance telephone bills for inter-coastal data transfer. This allowed Berkeley to become a second, west-coast hub for Usenet, connecting up UC San Francisco, UC San Diego, and others, including Sytek, an early LAN business. The connection to Berkeley, an ARPANET site, also enabled cross-talk between ARPANET and Usenet (after a second re-write by Mark Horton and Matt Glickman to create B News). ARPANET sites began picking up Usenet content and vice versa, though ARPA rules technically forbade interconnection with other networks. The network grew rapidly, from fifteen sites carrying ten posts a day in 1980, to 600 sites and 120 posts in 1983, and 5,000 sites and 1,000 posts in 1987.3

Its creators had originally conceived Usenet as a way to connect the Unix user community and discuss Unix developments, and to that end they created two groups, net.general and net.v7bugs (the latter for discussing problems with the latest version of Unix). However, they left the system entirely open for expansion. Anyone was free to create a new group under “net”, and users very quickly added non-technical topics such as net.jokes. Just as one was free to send whatever one chose, recipients could also ignore whatever groups they chose: a system could join Usenet and request data only for net.v7bugs, for example, ignoring the rest of the content.

Quite unlike the carefully planned ARPANET, Usenet self-organized, and grew in an anarchic way overseen by no central authority. Yet out of this superficially democratic medium a hierarchical order quickly emerged, with a certain subset of highly-connected, high-traffic sites recognized as the “backbone” of the system. This process developed fairly naturally. Because each transfer of data from one site to the next incurred a communications delay, each new site joining the network had a strong incentive to link itself to an already highly-connected node, to minimize the number of hops required for its messages to span the network. The backbone sites were a mix of educational and corporate sites, usually led by one headstrong individual willing to take on the thankless tasks involved in administering all the activity crossing their computer: Gary Murakami at Bell Labs’ Indian Hills lab in Illinois, for example, or Gene Spafford at Georgia Tech.
The most visible exercise of the power held by these backbone administrators came in 1987, when they pushed through a re-organization of the newsgroup namespace into seven top-level buckets: comp, for example, for computer-related topics, and rec for recreational topics. Sub-topics continued to be organized hierarchically underneath the “big seven”, such as comp.lang.c for discussion of the C programming language, and rec.games.board for conversations about board gaming. A group of anti-authoritarians, who saw this change as a coup by the “Backbone Cabal,” created their own splinter hierarchy rooted at alt, with its own parallel backbone. It included topics that were considered out-of-bounds for the big seven, such as sex and recreational drugs (e.g. alt.sex.pictures)4, as well as quirky groups that simply rubbed the backbone admins the wrong way (e.g. alt.gourmand; the admins preferred the anodyne rec.food.recipes).

Despite these controversies, by the late 1980s Usenet had become the place for the computer cognoscenti to find trans-national communities of like-minded individuals. In 1991 alone, Tim Berners-Lee announced the creation of the World Wide Web on alt.hypertext; Linus Torvalds solicited comp.os.minix for feedback on his new pet project, Linux; and a post on rec.games.design about Peter Adkison’s game company connected him with Richard Garfield, a collaboration that would lead to the creation of the card game Magic: The Gathering.

FidoNet

But even as the poor man’s ARPANET spread across the globe, microcomputer hobbyists, with far fewer resources than even the smallest of colleges, were still largely cut off from the experience of electronic communication. Unix, a low-cost, bare-bones option by the standards of academic computing, was out of reach for hobbyists with 8-bit microprocessors running an operating system called CP/M that barely did anything beyond managing the disk drive. But they soon began their own shoe-string experiments in low-cost peer-to-peer networking, starting with something called bulletin boards.

Given the simplicity of the idea and the number of computer hobbyists in the wild at the time, it seems probable that the computer bulletin board was invented independently several times. But tradition gives precedence to the creation of Ward Christensen and Randy Suess of Chicago, launched during the great blizzard of 1978. Christensen and Suess were both computer hobbyists in their early thirties, and members of their local computer club. For some time they had been considering creating a server where computer club members could upload news articles, using the modem file-transfer software that Christensen had written for CP/M – the hobbyist equivalent of uucp. The blizzard, which kept them housebound for several days, gave them the impetus to actually get started on the project, with Christensen focusing on the software and Suess on the hardware. In particular, Suess devised a circuit that automatically rebooted the computer into the BBS software each time it detected an incoming caller, a necessary hack to ensure the system was in a good state to receive the call, given the flaky state of hobby hardware and software at the time. They called their invention CBBS, for Computerized Bulletin Board System, but most later system operators (or sysops) would drop the C and call their service a BBS.5 They published the details of what they had built in a popular hobby magazine, Byte, and a slew of imitators soon followed.
Another new piece of technology, the Hayes Modem, fertilized this flourishing BBS scene. Dennis Hayes was another computer hobbyist who wanted to use a modem with his new machine, but the existing commercial offerings fell into two categories: devices aimed at business customers, too expensive for hobbyists, and acoustically-coupled modems. To connect a call on an acoustically-coupled modem you first had to dial or answer the phone manually, and then place the handset onto the modem so the two machines could communicate; there was no way to automatically start a call or answer one. So, in 1977, Hayes designed, built, and sold his own 300 bit-per-second modem that would slot into the interior of a hobby computer. Suess and Christensen used one of these early-model Hayes modems in their CBBS. Hayes’ real breakthrough product, though, was the 1981 Smartmodem, which sat in its own external housing, with its own built-in microprocessor, and connected to the computer through its serial port. It sold for $299, well within reach of hobbyists who habitually spent a few thousand dollars on their home computer setups.

The 300 baud Hayes Smartmodem

One of those hobbyists, Tom Jennings, set in motion what became the Usenet of BBSes. A programmer for Phoenix Software in San Francisco, Jennings decided in late 1983 to write his own BBS software, not for CP/M, but for the latest and greatest microcomputer operating system, Microsoft’s MS-DOS. He called it Fido, after a computer he had used at work, so named for its mongrel-like assortment of parts. John Madill, a salesman at ComputerLand in Baltimore, learned about Fido and called all the way across the country to ask Jennings for help in tweaking it to run on his DEC Rainbow 100 microcomputer. The two began a cross-country collaboration on the software, joined by another Rainbow enthusiast, Ben Baker of St. Louis. All three racked up substantial long-distance phone bills as they logged into one another’s machines for late-night BBS chats.

With all of this cross-BBS chatter, an idea began to buzz forward from the back of Jennings’ mind: he could create a network of BBSes that would exchange messages late at night, when long-distance rates were low. The idea was not new. Many hobbyists had imagined that BBSes could route messages in this way, all the way back to Christensen and Suess’ Byte article. But they had generally assumed that, for the scheme to work, you would need very high BBS density and complex routing rules, to ensure that all the calls remained local, and thus toll-free, even when relaying messages from coast to coast. But Jennings did some back-of-the-envelope math and realized that, given increasing modem speeds (now up to 1200 bits per second for hobby modems) and falling long-distance costs, no such cleverness was necessary. Even with substantial message traffic, you could pass text between systems for a few bucks per night.

Tom Jennings in 2002 (still from the BBS documentary)

So he added a new program to live alongside Fido. Between one and two o’clock in the morning, Fido would shut down and FidoNet would start up. It would check Fido’s outgoing messages against a file called the node list. Each outgoing message had a node number, and each entry in the list represented a network node – a Fido BBS – and provided the phone number for that node number.
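In outline, the idea behind the node list looks something like the Python sketch below. The node numbers, phone numbers, and messages are invented for illustration, and the real nodelist file format and dialing logic were considerably more involved; this is only meant to show the core data structure (a table from node number to phone number) and the nightly batching of outgoing mail by destination.

    # Illustrative sketch only: not Fido/FidoNet code, all entries hypothetical.
    from collections import defaultdict

    # Node list: node number -> (system name, phone number).
    NODELIST = {
        1: ("Fido #1, San Francisco", "1-415-555-0100"),
        10: ("Baltimore Rainbow BBS", "1-301-555-0142"),
        51: ("St. Louis Rainbow BBS", "1-314-555-0199"),
    }

    # Pending outgoing messages, each addressed to a destination node number.
    outbound = [
        {"to_node": 10, "text": "Patch for the Rainbow serial driver attached."},
        {"to_node": 51, "text": "See you at FidoCon."},
        {"to_node": 10, "text": "Second message for Baltimore."},
    ]

    def nightly_run(messages):
        """Group messages by destination node, then 'dial' each node once."""
        by_node = defaultdict(list)
        for msg in messages:
            by_node[msg["to_node"]].append(msg)
        for node, batch in by_node.items():
            name, phone = NODELIST[node]
            # A real system would send a Hayes-style dial command (ATDT plus the
            # number) to the modem here; this sketch just reports what would happen.
            print(f"Dialing node {node} ({name}) at {phone}: {len(batch)} message(s)")

    nightly_run(outbound)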
If there were pending outgoing messages, FidoNet would dial up each of the corresponding BBSes on the node list and transfer the messages over to the FidoNet program waiting on the other side. Suddenly Madill, Jennings, and Baker could collaborate easily and cheaply, though at the cost of higher latency – they wouldn’t receive any messages sent during the day until the late-night transfer began.

Formerly, hobbyists rarely connected with others outside their immediate area, within which they could make toll-free calls to their local BBS. But if that BBS connected into FidoNet, users could suddenly exchange email with others all across the country. The scheme proved immensely popular, and the number of FidoNet nodes grew rapidly, to over 200 within a year. Jennings’ personal curation of the node list thus became less and less manageable. So during the first “FidoCon” in St. Louis, Jennings and Baker met in the living room of Ken Kaplan, another DEC Rainbow fan who would take an increasingly important role in the leadership of FidoNet. They came up with a new design that divided North America into nets, each consisting of many nodes. Within each net, one administrative node would take on the responsibility of managing its local nodelist, accepting inbound traffic to its net, and forwarding those messages to the correct local node. Above the layer of nets were zones, each of which covered an entire continent. The system still maintained one global nodelist with the phone numbers of every FidoNet computer in the world, so any node could theoretically dial any other directly to deliver messages. This new architecture allowed the system to continue to grow, reaching almost 1,000 nodes by 1986 and just over 5,000 by 1989. Each of these nodes (itself a BBS) likely averaged 100 or so active users.

The two most popular applications were the basic email service that Jennings had built into FidoNet and Echomail, created by Jeff Rush, a BBS sysop in Dallas. Functionally equivalent to Usenet newsgroups, Echomail allowed the thousands of users of FidoNet to carry out public discussions on a variety of topics. Echoes, the term for individual groups, had mononyms rather than the hierarchical names of Usenet, ranging from AD&D to MILHISTORY to ZYMURGY (home beer brewing).

Jennings inclined, philosophically, toward anarchy, and wanted to build a neutral platform governed only by its technical standards6:

I said to the users that they could do anything they wanted …I’ve maintained that attitude for eight years now, and I have never had problems running BBSs. It’s the fascist control freaks who have the troubles. I think if you make it clear that the callers are doing the policing–even to put it in those terms disgusts me–if the callers are determining the content, they can provide the feedback to the assholes.

Just as with Usenet, however, the hierarchical structure of FidoNet made it possible for some sysops to exert more power than others, and rumors swirled of a powerful cabal (this time headquartered in St. Louis) seeking to take control of the system from the people. In particular, many feared that Kaplan or others around him would try to take the system commercial and start charging for access to FidoNet. Of particular suspicion was the International FidoNet Association (IFNA), a non-profit that Kaplan had founded to help defray some of the costs of administering the system (especially the long-distance telephone charges).
In 1989 those suspicions seemed to be realized, when a group of IFNA leaders pushed through a referendum to make every FidoNet sysop a member of IFNA and to turn it into the official governing body of the net, responsible for its rules and regulations. The measure failed, and IFNA was dissolved instead. Of course, the absence of any symbolic governing body did not eliminate the realities of power; the regional nodelist administrators instead enacted policy on an ad hoc basis.

The Shadow of the Internet

From the late 1980s onward, FidoNet and Usenet gradually fell under the looming shadow of the Internet. By the second half of the following decade, they had been fully assimilated by it. Usenet became entangled within the webs of the Internet through the creation of NNTP – the Network News Transfer Protocol – in early 1986. Conceived by a pair of University of California students (one in San Diego and the other in Berkeley), NNTP allowed TCP/IP network hosts on the Internet to create Usenet-compatible news servers. Within a few years, the majority of Usenet traffic flowed across such links, rather than uucp connections over the plain-old telephone network. The independent uucp network gradually fell into disuse, and Usenet became just another application atop TCP/IP transport. The immense flexibility of the Internet’s layered architecture made it easy to absorb a single-application network in this way.

Although by the early 1990s several dozen gateways between FidoNet and the Internet existed, allowing the two networks to exchange messages, FidoNet was not a single application, and so its traffic did not migrate onto the Internet in the same way as Usenet’s. Instead, as people outside academia began looking for Internet access for the first time in the second half of the 1990s, BBSes gradually found themselves either absorbed into the Internet or reduced to irrelevance. Commercial BBSes generally fell into the first category. These mini-CompuServes offered BBS access for a monthly fee to thousands of users, and had multiple modems for accepting simultaneous incoming connections. As commercial access to the Internet became possible, these businesses connected their BBSes to the nearest Internet network and began offering access to their customers as part of a subscription package. With more and more sites and services becoming available on the burgeoning World Wide Web, fewer and fewer users signed on to the BBS per se, and thus these commercial BBSes gradually became pure internet service providers, or ISPs. Most of the small-time hobbyist BBSes, on the other hand, became ghost towns, as users wanting to tap into the Internet flocked to their local ISPs, as well as to larger, nationally known outfits such as America Online.

That’s all very well, but how did the Internet become so dominant in the first place? How did an obscure academic system, spreading gradually across elite universities for years while systems like Minitel, CompuServe, and Usenet were bringing millions of users online, suddenly explode into the foreground, enveloping like kudzu all that had come before it? How did the Internet become the force that brought the era of fragmentation to an end?

Further Reading / Watching

Ronda Hauben and Michael Hauben, Netizens: On the History and Impact of Usenet and the Internet (online 1994, print 1997)
Howard Rheingold, The Virtual Community (1993)
Peter H. Salus, Casting the Net (1995)
Jason Scott, BBS: The Documentary (2005)

Coda: Steam’s Last Stand

In the year 1900, automobile sales in the United States were divided almost evenly among three types of vehicles: automakers sold about 1,000 cars powered by internal combustion engines, but over 1,600 powered by steam engines, and almost as many by batteries and electric motors. Throughout all of living memory (at least until the very recent rise of electric vehicles), the car and the combustion engine have gone hand in hand, inseparable. Yet, in 1900, this type claimed the smallest share.

For historians of technology, this is the most tantalizing fact in the history of the automobile, perhaps the most tantalizing fact in the history of the industrial age. It suggests a multiverse of possibility, a garden of forking, ghostly might-have-beens. It suggests that, perhaps, had this unstable equilibrium tipped in a different direction, many of the negative externalities of the automobile age—smog, the acceleration of global warming, suburban sprawl—might have been averted. It invites the question, why did combustion win? Many books and articles, by both amateur and professional historians, have been written to attempt to answer this question.

However, since the electric car, interesting as its history certainly is, has little to tell us about the age of steam, we will consider here a narrower question—why did steam lose? The steam car was an inflection point where steam power, for so long an engine driving technological progress forward, instead yielded the right-of-way to a brash newcomer. Steam began to look like a relic of the past, reduced to watching from the shoulder as the future rushed by. For two centuries, steam strode confidently into one new domain after another: mines, factories, steamboats, railroads, steamships, electricity. Why did it falter at the steam car, after such a promising start?

The Emergence of the Steam Car

Though Germany had given birth to experimental automobiles in the 1880s, the motor car first took off as a successful industry in France. Even Benz, the one German maker to see any success in the early 1890s, sold the majority of its cars and motor-tricycles to French buyers. This was in large part due to the excellent quality of French cross-country roads – though mostly gravel rather than asphalt, they were financed by taxes, overseen by civil engineers, and well above the typical European or American standard of the time.

These roads…made it easier for businessmen [in France] to envisage a substantial market for cars… They inspired early producers to publicize their cars by intercity demonstrations and races. And they made cars more practical for residents of rural areas and small towns.[1]

The first successful motor car business arose in Paris, in the early 1890s. Émile Levassor and René Panhard (both graduates of the École centrale des arts et manufactures, an engineering institute in Paris) met as managers at a machine shop that made woodworking and metal-working tools. They became the leading partners of the firm and took it into auto making after acquiring the French rights to the Daimler engine.

The 1894 Panhard & Levassor Phaeton already shows the beginning of the shift from horseless carriages with an engine under the seats to the modern car layout with a forward engine compartment. [Jörgens.mi / CC BY-SA 3.0]

Before making cars themselves, they looked for other buyers for their licensed engines, which led them to a bicycle maker near the Swiss border, Peugeot Frères Aînés, headed by Armand Peugeot.
Though bicycles seem very far removed from cars today, they made many contributions to the early growth of the auto industry. The 1880s bicycle boom (stimulated by the invention of the chain-driven “safety” bicycle) seeded expertise in the construction of high-speed road vehicles with ball bearings and tubular metal frames. Many early cars resembled bicycles with an additional wheel or two, and chain drives for powering the rear wheels remained popular throughout the first few decades of automobile development. Cycling groups also became very effective lobbyists for the construction of smooth cross-country roads on which to ride their machines, literally paving the way for the cars to come.[2]

Armand Peugeot decided to purchase Daimler engines from Panhard et Levassor and make cars himself. So, already by 1890 there were two French firms making cars with combustion engines. But French designers had not altogether neglected the possibility of running steam vehicles on ordinary roads. In fact, before ever ordering a Daimler engine, Peugeot had worked on a steam tricycle with the man who would prove to be the most persistent partisan of steam cars in France, Léon Serpollet.

A steam-powered road vehicle was not, by 1890, a novel idea. It had been proposed countless times, even before the rise of steam locomotives: James Watt himself had first developed an interest in engines, all the way back in the 1750s, after his friend John Robison suggested building a steam carriage. But those who had tried to put the idea into practice had always found the result wanting. Among the problems were the bulk and weight of the engine and all its paraphernalia (boiler, furnace, coal), the difficulty of maintaining a stoked furnace and controlling steam levels (including preventing the risk of boiler explosion), and the complexity of operating the engine. The only kinds of steam road vehicles to find any success were those that inherently required a lot of weight, bulk, and specialized training to operate—fire engines and steamrollers—and even those only appeared in the second half of the nineteenth century.[3]

Consider Serpollet’s immediate predecessor in steam carriage building, the debauched playboy Comte Albert de Dion. He commissioned two toymakers, Georges Bouton and Charles Trépardoux, to make several small steam cars in the 1880s. These coal-fueled machines took thirty minutes or more to build up a head of steam. In 1894 a larger De Dion steam tractor finished first in one of the many cross-country auto races that had begun to spring up to help carmakers promote their vehicles. But the judges disqualified De Dion’s vehicle on account of its impracticality: requiring both a driver and a stoker for its furnace, it was in a very literal sense a road locomotive. A discouraged Comte de Dion gave up the steam business, but De Dion-Bouton went on to be a successful maker of combustion automobiles and automobile engines.[4]

This De Dion-Bouton steam tractor was disqualified from an auto race in 1894 as impractical.

Coincidentally enough, Léon Serpollet and his brother Henri were, like Panhard and Levassor, makers of woodworking machines, and like Peugeot, they came from the Swiss borderlands of east-central France. Also like Panhard and Levassor, Léon studied engineering in Paris, in his case at the Conservatoire national des arts et métiers.
But by the time he reached Paris, he and his brother had already concocted the invention that would lead them to the steam car: a “flash” boiler that instantly turned water to steam by passing it through a hot metal tube. This would allow the vehicle to start more quickly (though it still took time to heat the tube before the boiler could be used) and also alleviate safety concerns about a boiler explosion.

The most important step toward the (relative) success of the Serpollets’ vehicles, however, came when they replaced the traditional coal furnace with a burner for liquid, petroleum-based fuel. This went a long way toward removing the most disqualifying objections to the practicality of steam cars. Kerosene or gasoline weighed less and took up less space than an energy-equivalent amount of coal, and an operator could more easily throttle a liquid-fuel burner (by supplying it with more or less fuel) to control the level of steam.

Figure 68: A 1902 Gardner-Serpollet steam car.

With early investments from Peugeot and a later infusion of cash from Frank Gardner, an American with a mining fortune, the Serpollets built a business, first selling steam buses in Paris, then turning to small cars. Their steam powerplants generated more power than the combustion vehicles of the time, and Léon promoted them by setting speed records. In 1902, he surpassed seventy-five miles per hour along the promenade in Nice. At that time, a Gardner-Serpollet factory in eastern Paris was turning out about 100 cars per year. Though these were impressive numbers by the standards of the 1890s, they were already becoming small potatoes: 7,600 cars were produced in France in 1901, and 14,000 in 1903, and the growing market left Gardner-Serpollet behind as a niche producer. Léon Serpollet made one last pivot back to buses, then died of cancer in 1907 at age forty-eight. The French steam car did not survive him.[5]

Unlike in the U.S., steam car sales barely took off in France, and never approached parity with the total sales of combustion-engine cars from the likes of Panhard et Levassor, Peugeot, and many other makes. There was no moment of balance when it appeared that the future of automotive technology was up for grabs. Why this difference? We’ll have more to say about that later, after we consider the American side of the story.

The Acme of the Steam Car

Automobile production in the United States lagged roughly five years behind France, and so it was in 1896 that the first small manufacturers began to appear. Charles and Frank Duryea (bicycle makers, again) were first off the block. Inspired by an article about Benz’ car, they built their own combustion-engine machine in 1893, and, after winning several races, they began selling vehicles commercially out of Peoria, Illinois in 1896. Several other competitors quickly followed.[6]

Steam car manufacturing came slightly later, with the Whitney Motor Wagon Company and the Stanley brothers, both in the Boston area. The Stanleys, twins named Francis and Freelan (or F.E. and F.O.), were successful manufacturers of photographic dry plates, which used a dry emulsion that could be stored indefinitely before use, unlike earlier “wet” plates. They fell into the automobile business by accident, in a similar way to many others—by successfully demonstrating a car they had constructed as a hobby, drawing attention and orders. At an exhibition at the Charles River Park Velodrome in Cambridge, F.E.
zipped around the field and up an eighty-foot ramp, demonstrating greater speed and power than any other vehicle present, including an imported combustion-engine De Dion tricycle, which could only climb the ramp halfway.[7]

The Stanley brothers mounted in their 1897 steam car.

The rights to the Stanley design, through a complex series of business dealings, ended up in the possession of Amzi Barber, the “Asphalt King,” who used tar from Trinidad’s Pitch Lake to pave several square miles’ worth of roads across the U.S.[8] It was Barber’s automobiles, sold under the Locomobile brand, that formed the plurality of the 1,600 steam cars sold in the U.S. in 1900: the company sold 5,000 in total between 1899 and 1902, at the quite-reasonable price of $600. Locomobiles were quiet and smooth in operation, produced little smoke or odor (though they did breathe great clouds of steam), had the torque required to accelerate rapidly and climb hills, and could speed up smoothly by simply increasing the speed of the piston, without any shifting of gears. The rattling, smoky, single-cylinder engines of their combustion-powered competitors had none of these qualities.[9]

Why, then, did the steam car market begin to collapse after 1902? Twenty-seven makes of steam car first appeared in the U.S. in 1899 or 1900, mostly concentrated (like the Locomobile) in the Northeast—New York, Pennsylvania, and (especially) Massachusetts. Of those, only twelve continued making steam cars beyond 1902, and only one—the Lane Motor Vehicle Company of Poughkeepsie, New York—lasted beyond 1905. By that year, the Madison Square Garden car show had 219 combustion models on display, as compared to only twenty electric and nine steam.[10]

Barber, the Asphalt King, was interested in cars regardless of what made them go. As the market shifted to combustion, so did he, abandoning steam at the height of his own sales in 1902. But the Stanleys loved their steamers. Their contractual obligations to Barber being discharged in 1901, they went back into business on their own. One of the longest-lasting holdouts, Stanley sold cars well into the 1920s (even after the death of Francis in a car accident in 1918), and the name became synonymous with steam. For that reason, one might be tempted to ascribe the death of the steam car to some individual failing of the Stanleys: “Yankee Tinkerers,” they remained committed to craft manufacturing and did not adopt the mass-production “Fordist” methods of Detroit. Already wealthy from their dry-plate business, they did not commit themselves fully to the automobile, allowing themselves to be distracted by other hobbies, such as building a hotel in Colorado so that people could film scary movies there.[11]

Some of the internal machinery of a late-model Stanley steamer: the boiler at top left, burner at center left, engine at top right, and engine cutaway at bottom right. [Stanley W. Ellis, Smogless Days: Adventures in Ten Stanley Steamers (Berkeley: Howell-North Books, 1971), 22]

But, as we have seen, there were dozens of steam car makers, just as there were dozens of makers of combustion cars; no idiosyncrasies of the Stanley psychology or business model can explain the entire market’s shift from one form of power train to another—if anything, it was the peculiar psychology of the Stanleys that kept them making steam cars at all, rather than doing the sensible thing and shifting to combustion.
Nor did the powers that be put their finger on the scale to favor combustion engines.[12] How, then, can we explain both the precipitous rise of steam in the U.S. (as opposed to its poor showing in France) and its sudden fall?

The steam car’s defects were as obvious as its advantages. Most annoying was the requirement to build up a head of steam before you could go anywhere: this took about ten minutes for the Locomobile. Whether starting or going, the controls were complex to manage. Scientific American described the “quite simple” steps required to get a Serpollet car going:

A small quantity of alcohol is used to heat the burner, which takes about five minutes; then by the small pump a pressure is made in the oil tank and the cock opened to the burner, which lights up with a blue flame, and the boiler is heated up in two or three minutes. The conductor places the clutch in the middle position, which disconnects the motor from the vehicle and regulates the motor to the starting position, then puts his foot on the admission pedal, starting the motor with the least pressure and heating the cylinders, the oil and water feed working but slightly. When the cylinders are heated, which takes but a few strokes of the piston, the clutch is thrown on the full or wean speed and the feed-pumps placed at a maximum, continuing to feed by hand until the vehicle reaches a certain speed by the automatic feed, which is then regulated as desired.[13]

Starting a combustion car of that era also required procedures long since streamlined away—cranking the engine to life, adjusting the carburetor choke and spark plug timing—but even at the time most writers considered steamers more challenging to operate. Part of the problem was that the boilers were intentionally small (to allow them to build steam quickly and reduce the risk of explosion), which meant lots of hands-on management to keep the steam level just right. Nor had the essential thermodynamic facts changed – internal combustion, operating over a larger temperature gradient, was more efficient than steam. The Model T could drive fifteen to twenty miles on a gallon of fuel; the Stanley could go only ten, not to mention its constant thirst for water, which added another “fueling” requirement.[14]

The rather arcane controls of a 1912 Stanley steamer. [Ellis, Smogless Days: Adventures in Ten Stanley Steamers, 26]

The steam car overcame these disadvantages to achieve its early success in the U.S. because of the delayed start of the automobile industry there. American steam car makers, starting later, skipped straight to petroleum-fueled burners, bypassing all the frustrations of dealing with a traditional coal-fueled firebox, and banishing all associations between that cumbersome appliance and the steam car.

At the same time, combustion automobile builders in the U.S. were still early in their learning curve compared to those in France. A combustion engine was a more complex and temperamental machine than a steam engine, and it took time to learn how to build them well, time that gave steam (and electric) cars a chance to find a market. The builders of combustion engines, as they learned from experience, rapidly improved their designs, while steam cars improved relatively little year over year.

Most importantly, steam cars never could get up and running as quickly as a combustion engine. In one of those ironies which history graciously provides to the historian, the very impatience that the steam age had brought forth doomed its final progeny, the steam car.
It wasn’t possible to start up a steam car and immediately drive; you always had to wait for the car to be ready. And so drivers turned to the easier, more convenient alternative, to the frustration of steam enthusiasts, who complained of “[t]his strange impatience which is the peculiar quirk of the motorist, who for some reason always has been in a hurry and always has expected everything to happen immediately.”[15] Later Stanleys offered a pilot light that could be kept burning to maintain steam, but “persuading motorists, already apprehensive about the safety of boilers, to keep a pilot light burning all night in the garage proved a hard sell.”[16] It was too late, anyway. The combustion-driven automotive industry had achieved critical mass.

The Afterlife of the Steam Car

The Ford Model T of 1908 is the most obvious signpost for the mass-market success of the combustion car. But for the moment that steam was left in the dust, we can look much earlier, to the Oldsmobile “curved dash,” which first appeared in 1901 and reached its peak in 1903, when 4,000 were produced, three times the total output of all steam car makers in that pivotal year of 1900. Ransom Olds, son of a blacksmith, grew up in Lansing, Michigan, and caught the automobile bug as a young man in 1887. Like many contemporaries, he built steamers at first (the easier option), but after driving a Daimler car at the 1893 Chicago World’s Fair, he got hooked on combustion. His Curved Dash (officially the Model R) still derived from the old-fashioned “horseless carriage” style of design, not yet having adopted the forward engine compartment that was already common in Europe by that time. It had a modest single-cylinder, five-horsepower engine tucked under the seats, and an equally modest top speed of twenty miles per hour. But it was convenient and inexpensive enough to outpace all of the steamers in sales.[17]

The Oldsmobile “Curved Dash” was celebrated in song.

The market for steam cars was reduced to driving enthusiasts, who celebrated the steamer’s near-silent operation (excepting the hiss of the burner), the responsiveness of its low-end torque, and its smooth acceleration without any need for clunky gear-shifting. (There is another irony in the fact that late-twentieth-century driving enthusiasts, disgusted by the laziness of automatic transmissions, would celebrate the hands-on responsiveness of manual shifters.) Steam partisans were offended by the unnecessary complexity of the combustion automobile, and liked to point out how few moving parts the steam car had.[18] To imagine the triumph of steam is to imagine a world in which the car remained an expensive hobby for this type of car enthusiast.

Several entrepreneurs tried to revive the steamer over the years, most notably the Doble brothers, who brought their steam car enterprise to Detroit in 1915, intent on competing head-to-head with combustion. They strove to make a car that was as convenient as possible to use, with a condenser to conserve water, key-start ignition, simplified controls, and a very fast-starting boiler.

But, meanwhile, car builders were steadily scratching off all of the advantages of steam within the framework of the combustion car. Steam cars, like electric cars, did not require the strenuous physical effort to get running that early, crank-started combustion engines did.
But by the second decade of the twentieth century, car makers solved this problem by putting a tiny electric car powertrain (battery and motor) inside every combustion vehicle, to bootstrap the starting of the engine. Steam cars offered a smoother, quieter ride than the early combustion rattletraps, but more precisely machined, multi-cylinder engines with anti-knock fuel canceled out this advantage (the severe downsides of lead as an anti-knock agent were not widely recognized until much later). Steam cars could accelerate smoothly without the need to shift gears, but then car makers created automatic transmissions. In the 1970s, several books advocated a return to the lower-emissions burners of steam cars for environmental reasons, but then car makers adopted the catalytic converter.[19]

It’s not that a steam car was impossible, but that it was unnecessary. Every year more and more knowledge and capital flowed into the combustion status quo, the cost of switching increased, and no sufficiently convincing reason to do so ever appeared. The failure of the steam car was not due to accident, not due to conspiracy, and certainly not due to any individual failure of the Stanleys, but due to the expansion of auto sales to people who cared more about getting somewhere than about the machine that got them there. Impatient people, born, ironically, of the steam age.

ARPANET, Part 3: The Subnet

With ARPANET, Robert Taylor and Larry Roberts intended to connect many different research institutions, each hosting its own computer, for whose hardware and software it was wholly responsible. The hardware and software of the network itself, however, lay in a nebulous middle realm, belonging to no particular site. Over the course of the years 1967-1968, Roberts, head of the networking project for ARPA’s Information Processing Techniques Office (IPTO), had to determine who should build and operate the network, and where the boundary of responsibility should lie between the network and the host institutions.

The Skeptics

The problem of how to structure the network was at least as much political as technical. The principal investigators at the ARPA research sites did not, as a body, relish the idea of ARPANET. Some evinced a perfect indifference to ever joining the network; few were enthusiastic. Each site would have to put in a large amount of effort in order to let others share its very expensive, very rare computer. Such sharing had manifest disadvantages (loss of a precious resource), while its potential advantages remained uncertain and obscure. The same skepticism about resource sharing had torpedoed the UCLA networking project several years earlier. However, in this case ARPA had substantially more leverage, since it had directly paid for all those precious computing resources, and continued to hold the purse strings of the associated research programs. Though no direct threats were ever made, no “or else” issued, the situation was clear enough – one way or another ARPA would build its network, to connect what were, in practice, still its machines.

Matters came to a head at a meeting of the principal investigators in Ann Arbor, Michigan, in the spring of 1967. Roberts laid out his plan for a network to connect the various host computers at each site. Each of the investigators, he said, would fit their local host with custom networking software, which it would use to dial up other hosts over the telephone network (this was before Roberts had learned about packet-switching). Dissent and angst ensued. Among the least receptive were the major sites that already had large IPTO-funded projects, MIT chief among them. Flush with funding for the Project MAC time-sharing system and artificial intelligence lab, MIT’s researchers saw little advantage to sharing their hard-earned resources with rinky-dink bit players out west. Regardless of stature, moreover, every site had certain other reservations in common. Each had its own unique hardware and software, and it was difficult to see how they could even establish a simple connection with one another, much less engage in real collaboration. Just writing and running the networking software for the local machine would also eat up a significant amount of time and computer power.

It was ironic yet fitting that the solution Roberts adopted to these social and technical problems came from Wes Clark, a man who regarded both time-sharing and networking with distaste. Clark, the quixotic champion of personal computers for each individual, had no interest in sharing computer resources with anyone, and kept his own campus, Washington University in St. Louis, well away from ARPANET for years to come. So it is perhaps not surprising that he came up with a network design that would not add any significant new drain on each site’s computing resources, nor require those sites to spend a lot of effort on custom software.
Clark proposed setting up a mini-computer at each site which would handle all the actual networking functions. Each host would have to understand only how to connect to its local helpmate (later dubbed an Interface Message Processor, or IMP), which would then route the message onward so that it reached the corresponding IMP at the destination. In effect, he proposed that ARPA give an additional free computer to each site, which would absorb most of the resource costs of the network. At a time when computers were still scarce and very dear, the proposal was an audacious one. Yet with the recent advent of mini-computers that cost just tens of thousands of dollars, rather than hundreds of thousands, it fell just this side of feasible.1

While alleviating some of the concerns of the principal investigators about a network tax on their computer power, the IMP approach also happened to solve another political problem for ARPA. Unlike any other ARPA project to date, the network was not confined to a single research institution where it could be overseen by a single investigator. Nor was ARPA itself equipped to directly build and manage a large-scale technical project. It would have to hire a third party to do the job. The presence of the IMPs would provide a clear delineation of responsibility between the externally-managed network and the locally-managed host computer. The contractor would control the IMPs and everything between them, while the host sites would each remain fully (and solely) responsible for the hardware and software on their own computer.

The IMP

Next, Roberts had to choose that contractor. The old-fashioned Licklider approach of soliciting a proposal directly from a favored researcher wouldn’t do in this case. The project would have to be put up for public bid like any other government contract. It took until July of 1968 for Roberts to prepare the final details of the request for bids. About a half year had elapsed since the final major technical piece of the puzzle fell into place, with the revelation of packet-switching at the Gatlinburg conference. Two of the largest computer manufacturers, Control Data Corporation (CDC) and International Business Machines (IBM), immediately bowed out, since they had no suitable low-cost minicomputer to serve as the IMP.

Honeywell DDP-516

Among the major remaining contenders, most chose Honeywell’s new DDP-516 computer, though some plumped instead for the Digital PDP-8. The Honeywell was especially attractive because it featured an input/output interface explicitly designed to interact with real-time systems, for applications like controlling industrial machinery. Communications, of course, required similar real-time precision – if an incoming message were missed because the computer was busy doing other work, there was no second chance to capture it.

By the end of the year, after strongly considering Raytheon, Roberts offered the job to the growing Cambridge firm of Bolt, Beranek and Newman. The family tree of interactive computing was, at this date, still extraordinarily ingrown, and in choosing BBN Roberts might reasonably have been accused of a kind of nepotism. J.C.R. Licklider had brought interactive computing to BBN before leaving to serve as the first director of IPTO, seed his intergalactic network, and mentor men like Roberts. Without Lick’s influence, ARPA and BBN would have been neither interested in nor capable of handling the ARPANET project.
Moreover, the core of the team assembled by BBN to build the IMP came directly or indirectly from Lincoln Labs: Frank Heart (the team’s leader), Dave Walden, Will Crowther, and Severo Ornstein. Lincoln, of course, is where Roberts himself did his graduate work, and where a chance collision with Wes Clark had first sparked Lick’s excitement about interactive computing. But cozy as the arrangement may have seemed, in truth the BBN team was as finely tuned for real-time performance as the Honeywell 516. At Lincoln, they had worked on computers that interfaced with radar systems, another application where data would not wait for the computer to be ready. Heart, for example, had worked on the Whirlwind computer as a student as far back as 1950, joined the SAGE project, and spent a total of fifteen years at Lincoln Lab. Ornstein had worked on the SAGE cross-telling protocol, for handing off radar track records from one computer to another, and later on Wes Clark’s LINC, a computer designed to support scientists directly in the laboratory, with live data. Crowther, now best known as the author of Colossal Cave Adventure, spent ten years building real-time systems at Lincoln, including the Lincoln Experimental Terminal, a mobile satellite communications station with a small computer to point the antenna and process the incoming signals.2

The IMP team at BBN. Frank Heart is the older man at center. Ornstein is on the far right, next to Crowther.

The IMPs were responsible for understanding and managing the routing and delivery of messages from host to host. The hosts could deliver up to 8000 bytes at a time to their local IMP, along with a destination address. The IMP then sliced this into smaller packets, which were routed independently to the destination IMP across 50 kilobit-per-second lines leased from AT&T. The receiving IMP reassembled the pieces and delivered the complete message to its host. Each IMP kept a table that tracked which of its neighbors offered the fastest route to each possible destination. This table was updated dynamically based on information received from those neighbors, including whether they appeared to be unavailable (in which case the delay in that direction was effectively infinite). To meet the speed and throughput requirements specified by Roberts for all of this processing, Heart’s team crafted little poems in code. The entire operating program for the IMP required only about 12,000 bytes; the portion that maintained the routing tables only 300.3
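The dynamic routing just described is, in modern terms, a distance-vector scheme: each IMP keeps an estimated delay to every destination and revises it whenever a neighbor reports its own estimates. The sketch below is a simplified illustration of that idea, not a reconstruction of BBN’s actual Honeywell code; the node names and delay figures are invented.

# Simplified sketch of the IMPs' dynamic routing idea (distance-vector style).
# An illustrative reconstruction only, not BBN's 1969 program.

INFINITY = float("inf")   # a silent neighbor looks infinitely far away

def update_routes(table, link_delays, neighbor, reported):
    """Revise this IMP's routing table from one neighbor's reported delays.

    table:       destination -> (estimated delay, next hop)
    link_delays: neighbor -> delay on the direct line to that neighbor
    reported:    destination -> estimated delay, as claimed by `neighbor`
    """
    hop = link_delays.get(neighbor, INFINITY)
    for dest, delay in reported.items():
        candidate = hop + delay
        best, _ = table.get(dest, (INFINITY, None))
        if candidate < best:
            table[dest] = (candidate, neighbor)
    return table

# Example: an IMP with direct lines to neighbors A and B.
links = {"A": 2, "B": 5}
routes = {"A": (2, "A"), "B": (5, "B")}
update_routes(routes, links, "A", {"UCLA": 3})   # A says it can reach UCLA in 3
update_routes(routes, links, "B", {"UCLA": 1})   # B says it can reach UCLA in 1
print(routes["UCLA"])   # (5, 'A') -- the path via A (2 + 3) beats the one via B (5 + 1)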
The team also took several precautions to address the fact that it would be infeasible to have maintenance staff on site with every IMP. First, they equipped each computer with remote monitoring and control facilities. In addition to an automatic restart function that would kick in after power failure, the IMPs were programmed to be able to restart their neighbors by sending them a fresh instance of their operating software. To help with debugging and analysis, an IMP could be instructed to start taking snapshots of its state at regular intervals. The IMPs would also honor a special ‘trace’ bit on each packet, which triggered additional, more detailed logs. With these capabilities, many kinds of problems could be addressed from the BBN office, which acted as a central command center from which the status of the whole network could be overseen.

Second, they requisitioned from Honeywell the military-grade version of the 516 computer, equipped with a thick casing to protect it from vibration and other environmental hazards. BBN intended this primarily as a “keep out” sign for curious graduate students, but nothing delineated the boundary between the hosts and the BBN-operated subnet as visibly as this armored shell. The first of these hardened cabinets, about the size of a refrigerator, arrived on site at the University of California, Los Angeles (UCLA) on August 30, 1969, just 8 months after BBN received the contract.

The Hosts

Roberts decided to start the network with four hosts – in addition to UCLA, there would be an IMP just up the coast at the University of California, Santa Barbara (UCSB), another at Stanford Research Institute (SRI) in northern California, and the last at the University of Utah. All were scrappy West Coast institutions looking to establish themselves in academic computing. The close family ties also continued, as two of the involved principal investigators, Len Kleinrock at UCLA and Ivan Sutherland at the University of Utah, were also Roberts’ old office mates from Lincoln Lab.

Roberts also assigned two of the sites special functions within the network. Doug Engelbart of SRI had volunteered as far back as the 1967 principals meeting to set up a Network Information Center. Leveraging SRI’s sophisticated on-line information retrieval system, he would compile the telephone directory, so to speak, for ARPANET: collating information about all the resources available at the various host sites and making it available to everyone on the network. On the basis of Kleinrock’s expertise in analyzing network traffic, meanwhile, Roberts designated UCLA as the Network Measurement Center (NMC). For Kleinrock and UCLA, ARPANET was to serve not only as a practical tool but also as an observational experiment, from which data could be extracted and generalized to learn lessons that could be applied to improve the design of the network and its successors.

But more important to the development of ARPANET than either of these formal institutional designations was a more informal and diffuse community of graduate students called the Network Working Group (NWG). The subnet of IMPs allowed any host on the network to reliably deliver a message to any other; the task taken on by the Network Working Group was to devise a common language, or set of languages, that those hosts could use to communicate. They called these the “host protocols.” The word protocol, a borrowing from diplomatic language, was first applied to networks by Roberts and Tom Marill in 1965, to describe both the data format and the algorithmic steps that determine how two computers communicate with one another.

The NWG, under the loose, de facto leadership of Steve Crocker of UCLA, began meeting regularly in the spring of 1969, about six months in advance of the delivery of the first IMP. Crocker was born and raised in the Los Angeles area, and attended Van Nuys High School, where he was a contemporary of two of his later NWG collaborators, Vint Cerf and Jon Postel.4 In order to record the outcome of some of the group’s early discussions, Crocker developed one of the keystones of the ARPANET (and future Internet) culture, the “Request for comments” (RFC). His RFC 1, published April 7, 1969 and distributed to the future ARPANET sites by postal mail, synthesized the NWG’s early discussions about how to design the host protocol software. In RFC 3, Crocker went on to define the (very loose) process for all future RFCs:
Notes are encouraged to be timely rather than polished. Philosophical positions without examples or other specifics, specific suggestions or implementation techniques without introductory or background explication, and explicit questions without any attempted answers are all acceptable. The minimum length for a NWG note is one sentence. …we hope to promote the exchange and discussion of considerably less than authoritative ideas.

Like a “Request for quotation” (RFQ), the standard way of requesting bids for a government contract, an RFC invited responses; but unlike the RFQ, the RFC also invited dialogue. Within the distributed NWG community anyone could submit an RFC, and they could use the opportunity to elaborate on, question, or criticize a previous entry. Of course, as in any community, some opinions counted more than others, and in the early days the opinion of Crocker and his core group of collaborators counted for a great deal. In fact, by July 1971 Crocker had left UCLA (while still a graduate student) to take up a position as a Program Manager at IPTO. With crucial ARPA research grants in his hands, he wielded undoubted influence, intentionally or not.

Jon Postel, Steve Crocker, and Vint Cerf – schoolmates and NWG collaborators – in later years.

The NWG’s initial plan called for two protocols. Remote login (or Telnet) would allow one computer to act like a terminal attached to the operating system of another, extending the interactive reach of any ARPANET time-sharing system across thousands of miles to any user on the network. The file transfer protocol (FTP) would allow one computer to transfer a file, such as a useful program or data set, to or from the storage system of another. At Roberts’ urging, however, the NWG added a third basic protocol beneath those two, for establishing a basic link between two hosts. This common piece was known as the Network Control Program (NCP). The network now had three conceptual layers of abstraction – the packet subnet controlled by the IMPs at the bottom, the host-to-host connection provided by NCP in the middle, and application protocols (FTP and Telnet) at the top.

The Failure?

It took until August of 1971 for NCP to be fully defined and implemented across the network, which by then comprised fifteen sites. Telnet implementations followed shortly thereafter, with the first stable definition of FTP arriving a year behind, in the summer of 1972. If we consider the state of ARPANET in this time period, some three years after it was first brought on-line, it would have to be considered a failure when measured against the resource-sharing dream envisioned by Licklider and carried into practical action by his protégé, Robert Taylor.

To begin with, it was hard even to find out what resources existed on the network which one could borrow. The Network Information Center used a model of voluntary contribution – each site was expected to provide up-to-date information about its own data and programs. Although it would have collectively benefited the community for everyone to do so, each individual site had little incentive to advertise its resources and make them accessible, much less provide up-to-date documentation or consultation. Thus the NIC largely failed to serve as an effective network directory. Probably its most important function in those early years was to provide electronic hosting for the growing corpus of RFCs.

Even if Alice at UCLA knew about a useful resource at MIT, however, an even more serious obstacle intervened. Telnet would get Alice to the log-in screen at MIT, but no further.
For Alice to actually access any program on the MIT host, she would have to make an off-line agreement with MIT to get an account on their computer, usually requiring her to fill out paperwork at both institutions and arrange for funding to pay MIT for the computer resources used. Finally, incompatibilities between hardware and system software at each site meant that there was often little value in file transfer, since you couldn’t execute programs from remote sites on your own computer.

Ironically, the most notable early successes in resource sharing were not in the domain of interactive time-sharing that ARPANET was built to support, but in large-scale, old-school, non-interactive data processing. UCLA added its underutilized IBM 360/91 batch-processing machine to the network and provided consultation by telephone to support remote users, and thus managed to significantly supplement the income of the computer center. The ARPA-funded ILLIAC IV supercomputer at the University of Illinois and the Datacomputer at the Computer Corporation of America in Cambridge also found some remote clients on ARPANET.5

None of these applications, however, came close to fully utilizing the network. In the fall of 1971, with fifteen host computers online, the network carried about 45 million bits of traffic per site per day, an average of roughly 520 bits per second per site (45 million bits spread over the 86,400 seconds in a day), on a network of AT&T leased lines with a capacity of 50,000 bits per second each.6 Moreover, much of this was test traffic generated by the Network Measurement Center at UCLA. The enthusiasm of a few early adopters aside (such as Steve Carr, who made daily use of the PDP-10 at the University of Utah from Palo Alto7), not much was happening on ARPANET.8 But ARPANET was soon saved from any possible accusations of stagnation by yet a third application protocol, a little something called email.

Further Reading

Janet Abbate, Inventing the Internet (1999)

Katie Hafner and Matthew Lyon, Where Wizards Stay Up Late: The Origins of the Internet (1996)

Internet Ascendant, Part 2: Going Private and Going Public

In the summer of 1986, Senator Al Gore, Jr., of Tennessee introduced an amendment to the Congressional act that authorized the budget of the National Science Foundation (NSF). He called for the federal government to study the possibilities for “communications networks for supercomputers at universities and Federal research facilities.” To explain the purpose of this legislation, Gore called on a striking analogy:

One promising technology is the development of fiber optic systems for voice and data transmission. Eventually we will see a system of fiber optic systems being installed nationwide. America’s highways transport people and materials across the country. Federal freeways connect with state highways which connect in turn with county roads and city streets. To transport data and ideas, we will need a telecommunications highway connecting users coast to coast, state to state, city to city. The study required in this amendment will identify the problems and opportunities the nation will face in establishing that highway.1

In the following years, Gore and his allies would call for the creation of an “information superhighway”, or, more formally, a national information infrastructure (NII). As he intended, Gore’s analogy to the federal highway system summons to mind a central exchange that would bind together various local and regional networks, letting all American citizens communicate with one another. However, the analogy also misleads – Gore did not propose the creation of a federally funded and maintained data network. He envisioned that the information superhighway, unlike its concrete and asphalt namesake, would come into being through the action of market forces, within a regulatory framework that would ensure competition, guarantee open, equal access to any service provider (what would later be known as “net neutrality”), and provide subsidies or other mechanisms to ensure universal service to the least fortunate members of society, preventing the emergence of a gap between the information rich and the information poor.2

Over the following decade, Congress slowly developed a policy response to the growing importance of computer networks to the American research community, to education, and eventually to society as a whole. Congress’ slow march towards an NII policy, however, could not keep up with the rapidly growing NSFNET, overseen by the neighboring bureaucracy of the executive branch. Despite its reputation for sclerosis, bureaucracy was created precisely for its capacity, unlike a legislature, to respond to events immediately, without deliberation. And so it happened that, between 1988 and 1993, the NSF crafted the policies that would determine how the Internet became private, and thus went public. It had to deal every year with novel demands and expectations from NSFNET’s users and peer networks. In response, it made decisions on the fly, decisions which rapidly outpaced Congressional plans for guiding the development of an information superhighway. These decisions rested largely in the hands of a single man – Stephen Wolff.

Acceptable Use

Wolff earned a Ph.D. in electrical engineering at Princeton in 1961 (where he would have been a rough contemporary of Bob Kahn), and began what might have been a comfortable academic career, with a post-doctoral stint at Imperial College, followed by several years teaching at Johns Hopkins. But then he shifted gears, and took a position at the Ballistic Research Laboratory in Aberdeen, Maryland.
He stayed there for most of the 1970s and early 1980s, researching communications and computing systems for the U.S. Army. He introduced Unix into the lab’s offices, and managed Aberdeen’s connection to the ARPANET.3 In 1986, the NSF recruited him to manage the NSF’s supercomputing backbone – he was a natural fit, given his experience connecting Army supercomputers to ARPANET. He became the principal architect of NSFNET’s evolution from that point until his departure in 1994, when he entered the private sector as a manager for Cisco Systems.

The original intended function of the net that Wolff was hired to manage had been to connect researchers across the U.S. to NSF-funded supercomputing centers. As we saw last time, however, once Wolff and the other network managers saw how much demand the initial backbone had engendered, they quickly developed a new vision of NSFNET as a communications grid for the entire American research and post-secondary education community. However, Wolff did not want the government to be in the business of supplying network services on a permanent basis. In his view, the NSF’s role was to prime the pump, creating the initial demand needed to get a commercial networking services sector off the ground. Once that happened, Wolff felt it would be improper for a government entity to be in competition with viable for-profit businesses. So he intended to get NSF out of the way by privatizing the network: handing over control of the backbone to unsubsidized private entities and letting the market take over.

This was very much in the spirit of the times. Across the Western world, and across most of the political spectrum, government leaders of the 1980s touted privatization and deregulation as the best means to unleash economic growth and innovation after the relative stagnation of the 1970s. As one example among many, around the same time that NSFNET was getting off the ground, the FCC knocked down several decades-old constraints on corporations involved in broadcasting. In 1985, it removed the restriction on owning print and broadcast media in the same locality, and two years later it nullified the fairness doctrine, which had required broadcasters to present multiple views on public-policy debates.

From his post at NSF, Wolff had several levers at hand for accomplishing his goals. The first lay in the interpretation and enforcement of the network’s acceptable use policy (AUP). In accordance with NSF’s mission, the initial policy for the NSFNET backbone, in effect until June 1990, required all uses of the network to be in support of “scientific research and other scholarly activities.” This is quite restrictive indeed, and would seem to eliminate any possibility of commercial use of the network. But Wolff chose to interpret the policy liberally. Regular mailing list postings about new product releases from a corporation that sold data-processing software – was that not in support of scientific research? What about the decision to allow MCI’s email system to connect to the backbone, at the urging of Vint Cerf, who had left government employ to oversee the development of MCI Mail? Wolff rationalized this – and other later interconnections to commercial email systems such as CompuServe’s – as in support of research, by making it possible for researchers to communicate digitally with a wider range of people that they might need to contact in the pursuit of their work. A stretch, perhaps.
But Wolff saw that allowing some commercial traffic on the same infrastructure that was used for public NSF traffic would encourage the private investment needed to support academic and educational use on a permanent basis. Wolff’s strategy of opening the door of NSFNET as far as possible to commercial entities got an assist from Congress in 1992, when Congressman Rick Boucher, who helped oversee NSF as chair of the Science Subcommittee, sponsored an amendment to the NSF charter which authorized any additional uses of NSFNET that would “tend to increase the overall capabilities of the networks to support such research and education activities.” This was an ex post facto validation of Wolff’s approach to commercial traffic, allowing virtually any activity as long as it produced profits that encouraged more private investment in NSFNET and its peer networks.

Dual-Use Networks

Wolff also fostered the commercial development of networking by supporting the regional networks’ reuse of their networking hardware for commercial traffic. As you may recall, the NSF backbone linked together a variety of not-for-profit regional nets, from NYSERNet in New York to Sesquinet in Texas to BARRNet in northern California. NSF did not directly fund the regional networks, but it did subsidize them indirectly, via the money it provided to labs and universities to offset the costs of their connection to their neighborhood regional net. Several of the regional nets then used this same subsidized infrastructure to spin off a for-profit commercial enterprise, selling network access to the public over the very same wires used for the research and education purposes sponsored by NSF. Wolff encouraged them to do so, seeing this as yet another way to accelerate the transition of the nation’s research and education infrastructure to private control.

This, too, accorded neatly with the political spirit of the 1980s, which encouraged private enterprise to profit from public largesse, in the expectation that the public would benefit indirectly through economic growth. One can see parallels with the dual-use regional networks in the 1980 Bayh-Dole Act, which defaulted ownership of patents derived from government-funded research to the organization performing the work, not to the government that paid for it.

The most prominent example of dual-use in action was PSINet, a for-profit company initially founded as Performance Systems International in 1988. It was created by William Schrader and Martin Schoffstall, respectively a co-founder of NYSERNet and one of its vice presidents. Schoffstall, a former BBN engineer and co-author of the Simple Network Management Protocol (SNMP) for managing the devices on an IP network, was the key technical leader. Schrader, an ambitious Cornell biology major and MBA who had helped his alma mater set up its supercomputing center and get it connected to NSFNET, provided the business drive. He firmly believed that NYSERNet should be selling service to businesses, not just educational institutions. When the rest of the board disagreed, he quit to found his own company, first contracting with NYSERNet for service, and later raising enough money to acquire its assets.
PSINet thus became one of the earliest commercial internet service providers, while continuing to provide non-profit service to colleges and universities seeking access to the NSFNET backbone.4

Wolff’s final source of leverage for encouraging a commercial Internet lay in his role as manager of the contracts with the Merit-IBM-MCI consortium that operated the backbone. The initial impetus for change in this dimension came not from Wolff, however, but from the backbone operators themselves.

A For-Profit Backbone

MCI and its peers in the telecommunications industry had a strong incentive to find or create more demand for computer data communications. They had spent the 1980s upgrading their long-line networks from coaxial cable and microwave – already much higher capacity than the old copper lines – to fiber optic cables. These cables, which transmitted laser light through glass, had tremendous capacity, limited mainly by the technology in the transmitters and receivers on either end, rather than the cable itself. And that capacity was far from saturated. By the early 1990s, many companies had deployed OC-48 transmission equipment with 2.5 Gbps of capacity, an almost unimaginable figure a decade earlier. An explosion in data traffic would therefore bring in new revenue at very little marginal cost – almost pure profit.5

The desire to gain expertise in the coming market in data communications helps explain why MCI was willing to sign on to the NSFNET bid proposed by Merit, which massively undercut the competing bids (at $14 million for five years, versus the $40 million and $25 million proposed by their competitors6), and surely implied a short-term financial loss for MCI and IBM. But by 1989, they hoped to start turning a profit on their investment. The existing backbone was approaching the saturation point, carrying 500 million packets a month, a 500% year-over-year increase.7 So, when NSF asked Merit to upgrade the backbone from 1.5 Mbps T1 lines to 45 Mbps T3, the companies took the opportunity to propose to Wolff a new contractual arrangement. T3 was a new frontier in networking – no prior experience or equipment existed for digital networks of this bandwidth – and so they argued that more private investment would be needed, requiring a restructuring that would allow IBM and Merit to share the new infrastructure with for-profit commercial traffic: a dual-use backbone. To achieve this, the consortium would form a new non-profit corporation, Advanced Network & Services, Inc. (ANS), which would supply T3 networking services to NSF. A subsidiary called ANS CO+RE Systems would sell the same services at a profit to any clients willing to pay.

Wolff agreed to this, seeing it as just another step in the transition of the network towards commercial control. Moreover, he feared that continuing to block commercial exploitation of the backbone would lead to a bifurcation of the network, with suppliers like ANS doing an end-run around NSFNET to create their own, separate, commercial Internet.

Up to that point, Wolff’s plan for gradually getting NSF out of the way had no specific target date or planned milestones. A workshop on the topic held at Harvard in March 1990, in which Wolff and many other early Internet leaders participated, considered a variety of options without laying out any concrete plans.8 It was ANS’ stratagem that triggered the cascade of events that led directly to the full privatization and commercialization of NSFNET. It began with a backlash.
Despite Wolff’s good intentions, IBM and MCI’s ANS maneuver created a great deal of disgruntlement in the networking community. It became a problem precisely because of the for-profit networks attached to the backbone that Wolff had promoted. So far they had gotten along reasonably well with one another, because they all operated as peers on the same terms. But with ANS, a for-profit company now held a de facto monopoly on the backbone at the center of the Internet.9 Moreover, despite Wolff’s efforts to interpret the AUP loosely, ANS chose to interpret it strictly, and refused to interconnect the non-profit portion of the backbone (for NSF traffic) with any of the for-profit networks like PSI, since that would require a direct mixing of commercial and non-commercial traffic. When this created an uproar, they backpedaled and came up with a new policy, allowing interconnection for a fee based on traffic volume.

PSINet would have none of this. In the summer of 1991, it banded together with two other for-profit Internet service providers – UUNET, which had begun by selling commercial access to Usenet before adding Internet service, and the California Education and Research Federation Network, or CERFNet, operated by General Atomics – to form their own exchange, bypassing the ANS backbone. The Commercial Internet Exchange (CIX) consisted at first of just a single routing center in Washington, D.C., which could transfer traffic among the three networks. They agreed to peer at no charge, regardless of the relative traffic volume, with each network paying the same fee to CIX to operate the router. New routers in Chicago and Silicon Valley soon followed, and other networks looking to avoid ANS’ fees also joined.

Divestiture

Rick Boucher, the Congressman whom we met above as a supporter of NSF commercialization, nonetheless requested an investigation by the Office of the Inspector General into the propriety of Wolff’s actions in the ANS affair. It found NSF’s actions precipitous, but not malicious or corrupt. Nevertheless, Wolff saw that the time had come to divest control of the backbone. With ANS CO+RE and CIX, privatization and commercialization had begun in earnest, but in a way that risked splitting the unitary Internet into multiple disconnected fragments, as CIX and ANS refused to connect with one another. NSF therefore drafted a plan for a new, privatized network architecture in the summer of 1992, released it for public comment, and finalized it in May of 1993. NSFNET would shut down in the spring of 1995, and its assets would revert to IBM and MCI. The regional networks could continue to operate, with financial support from the NSF gradually phasing out over a four-year period, but they would have to contract with a private ISP for internet access.

But in a world of many competing internet access providers, what would replace the backbone? What mechanism would link these opposed private interests into a cohesive whole? Wolff’s answer was inspired by the exchanges already built by cooperatives like CIX: NSF would contract out the creation of four Network Access Points (NAPs), routing sites where various vendors could exchange traffic. Having four separate contracts would avoid repeating the ANS controversy, by preventing a monopoly on the points of exchange. One NAP would reside at the pre-existing, and cheekily named, Metropolitan Area Ethernet East (MAE-East) in Vienna, Virginia, operated by Metropolitan Fiber Systems (MFS).
MAE-West, operated by Pacific Bell, was established in San Jose, California; Sprint operated another NAP in Pennsauken, New Jersey, and Ameritech one in Chicago. The transition went smoothly,10 and NSF decommissioned the backbone right on schedule, on April 30, 1995.11

The Break-up

Though Gore and others often invoked the “information superhighway” as a metaphor for digital networks, there was never serious consideration in Congress of using the federal highway system as a direct policy model. The federal government paid for the building and maintenance of interstate highways in order to provide a robust transportation network for the entire country. But in an era when both major parties took deregulation and privatization for granted as good policy, a state-backed system of networks and information services on the French model of Transpac and Minitel was not up for consideration.12 Instead, the most attractive policy model for Congress as it planned for the future of telecommunications was the long-distance market created by the break-up of the Bell System between 1982 and 1984.

In 1974, the Justice Department filed suit against AT&T, its first major suit against the organization since the 1950s, alleging that it had engaged in anti-competitive behavior in violation of the Sherman Antitrust Act. Specifically, it accused the company of using its market power to exclude various innovative new businesses from the market – mobile radio operators, data networks, satellite carriers, makers of specialized terminal equipment, and more. The suit thus clearly drew much of its impetus from the ongoing disputes since the early 1960s (described in an earlier installment) between AT&T and the likes of MCI and Carterfone.

When it became clear that the Justice Department meant business, and intended to break the power of AT&T, the company at first sought redress from Congress. John de Butts, chairman and CEO since 1972, attempted to push a “Bell bill” – formally the Consumer Communications Reform Act – through Congress. It would have enshrined into law AT&T’s argument that the benefits of a single, universal telephone network far outweighed any risk of abusive monopoly, risks which in any case the FCC could already effectively check. But the proposal received stiff opposition in the House Subcommittee on Communications, and never reached a vote on the floor of either Congressional chamber. In a change of tactics, in 1979 the board replaced the combative de Butts – who had once declared openly to an audience of state telecommunications regulators the heresy that he opposed competition and espoused monopoly – with the more conciliatory Charles Brown.

But it was too late by then to stop the momentum of the antitrust case, and it became increasingly clear to the company’s leadership that they would not prevail. In January 1982, therefore, Brown agreed to a consent decree that would have the presiding judge in the case, Harold Greene, oversee the break-up of the Bell System into its constituent parts. The various Bell companies that brought copper to the customer’s premises, which generally operated by state (New Jersey Bell, Indiana Bell, and so forth), were carved up into seven blocks called Regional Bell Operating Companies (RBOCs). Working clockwise around the country, they were NYNEX in the northeast, Bell Atlantic, Bell South, Southwestern Bell, Pacific Telesis, US West, and Ameritech.
All of them remained regulated entities with an effective monopoly over local traffic in their region, but they were forbidden from entering other telecom markets. AT&T itself retained the “long lines” division for long-distance traffic. Unlike local phone service, however, the settlement opened this market to free competition from any entrant willing and able to pay the interconnection fees to transfer calls in and out of the RBOCs. A residential customer in Indiana would always have Ameritech as their local telephone company, but could sign up for long-distance service with anyone.

However, splitting apart the local and long-distance markets meant forgoing the subsidies that AT&T had long routed to rural telephone subscribers, under-charging them by over-charging wealthy long-distance users. A sudden spike in rural telephone prices across the nation was not politically tenable, so the deal preserved these transfers via a new organization, the non-profit National Exchange Carrier Association, which collected fees from the long-distance companies and distributed them to the RBOCs.

The new structure worked. Two major competitors entered the market in the 1980s, MCI and Sprint, and cut deeply into AT&T’s market share. Long-distance prices fell rapidly. Though it is arguable how much of this was due to competition per se, as opposed to the advent of ultra-high-bandwidth fiber optic networks, the arrangement was generally seen as a great success for deregulation and a clear argument for the power of market forces to modernize formerly hidebound industries. This market structure, created ad hoc by court fiat but evidently highly successful, provided the template from which Congress drew in the mid-1990s to finally resolve the question of what telecom policy for the Internet era would look like.

Second Time Isn’t The Charm

Prior to the main event, there was one brief preliminary. The High Performance Computing Act of 1991 was important tactically, but not strategically. It advanced no new major policy initiatives. Its primary significance lay in providing additional funding and Congressional backing for what Wolff and the NSF were already doing and intended to keep doing: providing networking services for the research community, subsidizing academic institutions’ connections to NSFNET, and continuing to upgrade the backbone infrastructure.

Then came the accession of the 104th Congress in January 1995. Republicans took control of both the Senate and the House for the first time in forty years, and they came with an agenda to fight crime, cut taxes, shrink and reform government, and uphold moral righteousness. Gore and his allies had long touted universal access as a key component of the National Information Infrastructure, but with this shift in power the prospects for a strong universal-service component to telecommunications reform diminished from minimal to none. Instead, the main legislative course would consist of regulatory changes to foster competition in telecommunications and Internet access, with a serving of bowdlerization on the side.

The market conditions looked promising. Circa 1992, the major players in the telecommunications industry were numerous. In the traditional telephone industry there were the seven RBOCs, GTE, and three large long-distance companies – AT&T, MCI, and Sprint – along with many smaller ones.
The new up-and-comers included Internet service providers such as UUNET and PSINet, as well as the IBM/MCI backbone spin-off, ANS; and other companies trying to build out their local fiber networks, such as Metropolitan Fiber Systems (MFS). BBN, the contractor behind ARPANET, had begun to build its own small Internet empire, snapping up some of the regional networks that orbited around NSFNET – Nearnet in New England, BARRNet in the Bay Area, and SURAnet in the southeast of the U.S.

To preserve and expand this competitive landscape would be the primary goal of the 1996 Telecommunications Act, the only major rewrite of communications policy since the Communications Act of 1934. It intended to reshape telecommunications law for the digital age. The regulatory regime established by the original act siloed industries by their physical transmission medium – telephony, broadcast radio and television, cable TV – each in its own box, with its own rules, and generally forbidden to meddle in the others’ business. As we have seen, sometimes regulators even created silos within silos, segregating the long-distance and local telephone markets. This made less and less sense as media of all types were reduced to fungible digital bits, which could be commingled on the same optical fiber, satellite transmission, or ethernet cable.

The intent of the 1996 Act, shared by Democrats and Republicans alike, was to tear down these barriers, these “Berlin Walls of regulation”, as Gore’s own summary of the act put it.13 A complete itemization of the regulatory changes in this doorstopper of a bill is not possible here, but a few examples provide a taste of its character. Among other things, it allowed the RBOCs to compete in long-distance telephone markets, lifted restrictions forbidding the same entity from owning both broadcasting and cable services, and axed the rules that prevented concentration of radio station ownership.

The risk, though, of simply removing all regulation, opening the floodgates and letting any entity participate in any market, was to recreate AT&T on an even larger scale: a monopolistic megacorp that would dominate all forms of communication and stifle all competitors. Most worrisome of all was control over the so-called last mile – from the local switching office to the customer’s home or office. Building an inter-urban network connecting the major cities of the U.S. was expensive but not prohibitive; several companies had done so in recent decades, from Sprint to UUNET. To replicate all the copper or cable running to every home in even one urban area was another matter. Local competition in landline communications had scarcely existed since the early wildcat days of the telephone, when tangled skeins of iron wire criss-crossed urban streets. In the case of the Internet, the concern centered especially on high-speed, direct-to-the-premises data services, later known as broadband. For years, competition had flourished among dial-up Internet access providers, because all the end user required to reach the provider’s computer was access to a dial tone. But this would not be the case by default for newer services that did not use the dial telephone network.

The legislative solution to this conundrum was to create the concept of the “CLEC” – the competitive local exchange carrier.
The RBOCs, now referred to as “ILECs” (incumbent local exchange carriers), would be allowed full, unrestricted access to the long-distance market only once they had unbundled their networks by allowing the CLECs, which would provide their own telecommunications services to homes and businesses, to interconnect with and lease the incumbents’ infrastructure. This would enable competitive ISPs and other new service providers to continue to get access to the local loop even when dial-up service became obsolete – creating, in effect, a dial tone for broadband. The CLECs, in this model, filled the same role as the long-distance providers in the post-break-up telephone market. Able to interconnect freely, at reasonable fees, with the existing local phone networks, they would inject competition into a market previously dominated by the problem of natural monopoly.

Besides the creation of the CLECs, the other major part of the bill that affected the Internet addressed the Republicans’ moral agenda rather than their economic one. Title V, known as the Communications Decency Act, forbade the transmission of indecent or offensive material – depicting or describing “sexual or excretory activities or organs” – on any part of the Internet accessible to minors. This, in effect, extended the obscenity and indecency rules that governed broadcasting into the world of interactive computing services.

How, then, did this sweeping act fare in achieving its goals? In most dimensions it proved a failure. Easiest to dispose of is the Communications Decency Act, which the Supreme Court struck down quickly (in 1997) as a violation of the First Amendment. Several parts of Title V did survive review, however, including Section 230, the most important piece of the entire bill for the Internet’s future. It allows websites that host user-created content to exist without the fear of constant lawsuits, and protects the continued existence of everything from giants like Facebook and Twitter to tiny hobby bulletin boards.

The fate of the efforts to promote competition within the local loop took longer to play out, but proved no more successful than the controls on obscenity. What about the CLECs, given access to the incumbent cable and telephone infrastructure so that they could compete on price and service offerings? The law required FCC rulemaking to hash out the details of exactly what kind of unbundling had to be offered. The incumbents pressed the courts hard to dispute any such ruling that would open up their lines to competition, repeatedly winning injunctions against the FCC, while threatening that introducing competitors would halt their imminent plans for bringing fiber to the home.

Then, with the arrival of the Bush administration and new chairman Michael Powell in 2001, the FCC became actively hostile to the original goals of the Telecommunications Act. Powell believed that the need for alternative broadband access would be satisfied by intermodal competition among cable, telephone, power-line, cellular, and wireless networks. No more FCC rules in favor of CLECs would be forthcoming. For a brief time around the year 2000, it was possible to subscribe to third-party high-speed internet access using the infrastructure of your local telephone or cable provider. After that, the most central of the Telecom Act’s pro-competitive measures became, in effect, a dead letter.
The much-ballyhooed fiber-to-the-home only began to reach a significant number of homes after 2010, and then only with reluctance on the part of the incumbents.14 As author Fred Goldstein put it, the incumbents had “gained a fig leaf of competition without accepting serious market share losses.”15

During most of the twentieth century, networked industries in the U.S. had sprouted in a burst of entrepreneurial energy and then been fitted into the matrix of a regulatory framework as they grew large and important enough to affect the public interest. Broadcasting and cable television had followed this pattern. So had trucking and the airlines. But with the CLECs all but dead by the early 2000s, the Communications Decency Act struck down, and other attempts to control the Internet such as the Clipper chip16 stymied, the Internet would follow an opposite course. Having come to life under the guiding hand of the state, it would now be allowed to develop in an almost entirely laissez-faire fashion. The NAP framework established by the NSF at the hand-off of the backbone would be the last major government intervention in the structure of the Internet. This was true at both the transport layer (the networks, such as Verizon and AT&T, that transported raw data) and the applications layer (software services from portals like Yahoo! to search engines like Google to online stores like Amazon).

In our last chapter, we will look at the consequences of this fact, briefly sketching the evolution of the Internet in the U.S. from the mid-1990s onward.

1. Quoted in Richard Wiggins, “Al Gore and the Creation of the Internet,” 2000.
2. “Remarks by Vice President Al Gore at National Press Club,” December 21, 1993.
3. Biographical details on Wolff’s life prior to NSF are scarce – I have recorded all of them that I could find here. Notably, I have not been able to find even his date and place of birth.
4. Schrader and PSINet rode high on the Internet bubble in the late 1990s, acquiring other businesses aggressively and, most extravagantly, purchasing the naming rights to the football stadium of the NFL’s newest expansion team, the Baltimore Ravens. Schrader tempted fate with a 1997 article entitled “Why the Internet Crash Will Never Happen.” Unfortunately for him, it did happen, bringing about his ouster from the company in 2001 and PSINet’s bankruptcy the following year.
5. To get a sense of how fast the cost of bandwidth was declining: in the mid-1980s, leasing a T1 line from New York to L.A. would cost $60,000 per month. Twenty years later, an OC-3 circuit with 100 times the capacity cost only $5,000, more than a thousand-fold reduction in price per unit of capacity. See Fred R. Goldstein, The Great Telecom Meltdown, 95-96. Goldstein states that the 1.5 Mbps T1/DS1 line has 1/84th the capacity of OC-3, rather than 1/100th, a discrepancy I can’t account for. But this has little effect on the overall math.
6. Office of Inspector General, “Review of NSFNET,” March 23, 1993.
7. Fraser, “NSFNET: A Partnership for High-Speed Networking, Final Report,” 27.
8. Brian Kahin, “RFC 1192: Commercialization of the Internet Summary Report,” November 1990.
9. John Markoff, “Data Network Raises Monopoly Fear,” New York Times, December 19, 1991.
10. Though many other technical details had to be sorted out; see Susan R. Harris and Elise Gerich, “Retiring the NSFNET Backbone Service: Chronicling the End of an Era,” ConneXions, April 1996.
11. The most problematic part of privatization proved to have nothing to do with the hardware infrastructure of the network, but instead with handing over control of the domain name system (DNS). For most of its history, its management had depended on the judgment of a single man – Jon Postel. But businesses investing millions in a commercial internet would not stand for such an ad hoc system. So the government handed control of the domain name system to a contractor, Network Solutions. The NSF had no real mechanism for regulatory oversight of DNS (though they might have done better by splitting control of the different top-level domains (TLDs) among different contractors), and Congress failed to step in to create any kind of regulatory regime. Control changed once again in 1998 to the non-profit ICANN (Internet Corporation for Assigned Names and Numbers), but the management of DNS still remains a thorny problem.
12. The only quasi-exception to this focus on fostering competition was a proposal by Senator Daniel Inouye to reserve 20% of Internet traffic for public use: Steve Behrens, “Inouye Bill Would Reserve Capacity on Infohighway,” Current, June 20, 1994. Unsurprisingly, it went nowhere.
13. Al Gore, “A Short Summary of the Telecommunications Reform Act of 1996.”
14. Jon Brodkin, “AT&T kills DSL, leaves tens of millions of homes without fiber Internet,” Ars Technica, October 5, 2020.
15. Goldstein, The Great Telecom Meltdown, 145.
16. The Clipper chip was a proposed hardware backdoor that would give the government the ability to bypass any U.S.-created encryption software.

Further Reading

Janet Abbate, Inventing the Internet (1999)

Karen D. Fraser, “NSFNET: A Partnership for High-Speed Networking, Final Report” (1996)

Shane Greenstein, How the Internet Became Commercial (2015)

Yasha Levine, Surveillance Valley: The Secret Military History of the Internet (2018)

Rajiv Shah and Jay P. Kesan, “The Privatization of the Internet’s Backbone Network,” Journal of Broadcasting & Electronic Media (2007)

Steamships, Part I: Crossing the Atlantic

For much of this story, our attention has focused on events within the isle of Great Britain, and with good reason: primed by the virtuous cycle of coal, iron, and steam, the depth and breadth of Britain’s exploitation of steam power far exceeded anything found elsewhere for roughly 150 years after the groaning, hissing birth cry of steam power with the first Newcomen engine. American riverboat traffic stands out as the isolated exception. But Great Britain, island though it was, did not stand aloof from the world. It engaged in trade and the exchange of ideas, of course, but it also had a large and (despite occasional setbacks) growing empire, including large possessions in Canada, South Africa, Australia, and India. The sinews of that empire necessarily stretched across the oceans of the world, in the form of a dominant navy, a vast merchant fleet, and the ships of the East India Company, which blurred the lines of military and commercial power: half state and half corporation. Having repeatedly bested all its would-be naval rivals—Spain, the Netherlands, and France—Britain had achieved an indisputable dominance of the sea.

Testing the Waters

The potential advantages of fusing steam power with naval power were clear: sailing ships were slaves to the whims of the atmosphere. A calm left them motionless, a strong storm drove them on helplessly, and adverse winds could trap them in port for days on end. The fickleness of the wind made travel times unpredictable and could steal the opportunity for a victorious battle from even the strongest fleet. In 1814, Sir Walter Scott took a cruise around Scotland, and the vicissitudes of travel by sail are apparent on page after page of his memoirs:

4th September 1814… Very little wind, and that against us; and the navigation both shoally and intricate. Called a council of war; and after considering the difficulty of getting up to Derry, and the chance of being windbound when we do get there, we resolve to renounce our intended visit to that town…

6th September 1814… When we return on board, the wind being unfavourable for the mouth of Clyde, we resolve to weigh anchor and go into Lamlash Bay.

7th September, 1814 – We had ample room to repent last night’s resolution, for the wind, with its usual caprice, changed so soon as we had weighed anchor, blew very hard, and almost directly against us, so that we were beating up against it by short tacks, which made a most disagreeable night…[1]

As it had done for power on land, as it had done for river travel, so steam promised to do for sea travel: bring regularity and predictability, smoothing over the rough chaos of nature. The catch lay in the supply of fuel. A sailing ship, of course, needed only the “fuel” it gathered from the air as it went along. A riverboat could easily resupply its fuel along the banks as it travelled. A steamship crossing the Atlantic would have to bring along its whole supply.

Plan of the Savannah. It is evident that she was designed as a sailing ship, with the steam engine and paddles as an afterthought.

Early attempts at steam-powered sea vessels bypassed this problem by carrying sails, with the steam engine providing supplementary power. The American merchant ship Savannah crossed the Atlantic to Liverpool in this fashion in 1819. But the advantages of on-demand steam power did not justify the cost of hauling an idle engine and its fuel across the ocean.
Its owners quickly converted the Savannah back to a pure sailing ship.[2] Macgregor Laird had a better-thought-out plan in 1832 when he dispatched the two steamships built at his family’s docks, Quorra and Alburkah, along with a sailing ship, for an expedition up the River Niger to bring commerce and Christianity to central Africa. Laird’s ships carried sails for the open ocean and supplied themselves regularly with wood fuel when coasting near the shore. The steam engines achieved their true purpose once the little task force reached the river, allowing the ships to navigate easily upstream.[3]

Brunel

Laird’s dream of transforming Africa ended in tatters, and in the death of most of his crew. But Laird himself survived, and he and his homeland would both have a role to play in the development of true ocean-going steamships. Laird, like the great Watt himself, was born in Greenock, on the Firth of Clyde, and Britain’s first working commercial steamboats originated on the Clyde, carrying passengers among Glasgow, Greenock, Helensburgh, and other towns. Scott took passage on such a ferry from Greenock to Glasgow in the midst of his Scottish journey, and the contrast is stark in his memoirs between his passages at sea and the steam transit on the Clyde that proceeded “with a smoothness of motion which probably resembles flying.”[4] The shipbuilders of the Clyde, with iron and coal close at hand, would make such smooth, predictable steam journeys ever more common in the waters of and around Britain. By 1822, they had already built forty-eight steam ferries of the sort on which Scott had ridden; in the following decade ship owners extended service out into the Irish Sea and English Channel with larger vessels, like David Napier’s 240-ton, 70-horsepower Superb and 250-ton, 100-horsepower Majestic.[5]

Indeed, the most direct path to long-distance steam travel lay in larger hulls. Because of the buoyancy of water, steamships did not suffer rocket-equation-style diminishing returns on fuel consumption with increasing size. As the hull grew, its capacity to carry coal increased in proportion to its volume, while the drag the engines had to overcome (and thus the size of engine required) increased only in proportion to the surface area. Mark Beaufoy, a scholar of many pursuits but with a deep interest in naval matters, had shown this decisively in a series of experiments with actual hulls in water, published posthumously by his son in 1834.[6] In the late 1830s, two competing teams of British financiers, engineers, and naval architects emerged, racing to be the first to take advantage of this fact by creating a steamship large enough to make transatlantic steam travel technically and commercially viable.

In a lucky break for your historian, the more successful team was led by the more vibrant figure, Isambard Kingdom Brunel: even his name oozes character. (His rival’s name, Junius Smith, begins strong but ends pedestrian.) Brunel’s unusual last name came from his French father, Marc Brunel; his even more unusual middle name came from his English mother, Sophia Kingdom; and his most unusual first name descends from some Frankish warrior of old.[7] The elder Brunel came from a prosperous Norman farming family. A second son, he was to be educated for the priesthood, but rebelled against that vocation and instead joined the navy in 1786.
Forced to flee France in 1793 due to his activities in support of the royalist cause, he worked for a time as a civil engineer in New York before moving to England in 1799 to develop a mechanized process for churning out pulley blocks for the British navy with one of the great rising engineers of the day, Henry Maudslay.[8]

The most famous image of Brunel, in front of the chains of his (and the world’s) largest steamship design, in 1857.

Young Isambard was born in 1806, began working for his father in 1822, and got the railroad bug after riding the Liverpool and Manchester line in 1831. The Great Western Railway (GWR) company named Brunel as chief engineer in 1833, when he was just twenty-seven years old. The GWR originated with a group of Bristol merchants who saw the growth of Liverpool and feared that without a railway link to central Britain they would lose their status as the major entrepôt for British trade with the United States. It spanned the longest route of any railway to date, almost 120 miles from London to Bristol, and under Brunel’s guidance the builders of the GWR leveled, bridged, and tunneled that route at unparalleled cost. Brunel insisted on widely spaced rails (seven feet apart) to allow a smooth ride at high speed, and indeed GWR locomotives achieved speeds of sixty miles per hour, with average speeds of over forty miles per hour over long distances, including stops. Though the broad-gauge rails Brunel stubbornly fought for are long gone, the iron-ribbed vaults of the train sheds he designed for each terminus—Paddington Station in London and Temple Meads in Bristol—still stand and serve railroad traffic today.[9]

An engraving of Temple Meads, the Bristol terminus of the Great Western Railway.

According to legend, Brunel’s quest to build a transatlantic steamer began with an off-hand quip at a meeting of the Great Western directors in October 1835.[10] When someone grumbled over the length of the railway line, Brunel said something to the effect of: “Why not make it longer, and have a steamboat to go from Bristol to New York?” Though perhaps intended as a joke, Brunel’s remark spoke to the innermost dreams of the Bristol merchants: to be the indispensable link between England and America. One of them, Thomas Guppy, decided to take the idea seriously, and convinced Brunel to do the same. Brunel, never lacking in self-confidence, did not doubt that his heretofore landbound engineering skills would translate to a watery milieu, but just in case he pulled Christopher Claxton (a naval officer) and William Patterson (a shipbuilder) in on the scheme. Together they formed a Great Western Steam Ship Company.[11]

The Race to New York

Received opinion still held that a direct crossing by steam from England to New York, of over 3,000 miles, would be impossible without refueling.
Dionysius Lardner took to the hustings of the scientific world to pronounce that opinion.

Dionysius Lardner, Brunel’s nemesis.

One of the great enthusiasts and promoters of the railroad, Lardner was nonetheless a long-standing opponent of Brunel’s: in 1834 he had opposed Brunel’s route for the Great Western Railway on the grounds that the gradient of the Box Tunnel would cause trains to reach speeds of 120 miles per hour and thus suffocate the passengers.[12] He gave a talk to the British Association for the Advancement of Science in August 1836 deriding the idea of a Great Western steamship, asserting that “[i]n proportion as the capacity of the vessel is increased, in the same ratio or nearly so must the mechanical power of the engines be enlarged, and the consumption of fuel augmented,” and that therefore a direct trip across the Atlantic would require a far more efficient engine than had ever yet been devised.[13] The Dublin-born Lardner much preferred his own scheme to drive a rail line across Ireland and connect the continents by the shortest possible water route: 2,000 miles from Shannon to Newfoundland.

Brunel, however, firmly believed that a large ship would solve the fuel problem. As he wrote in a preliminary report to the company in 1836, certainly drawing on Beaufoy’s work: “…the tonnage increases as the cubes of their dimensions, while the resistance increases about as their squares; so that a vessel of double the tonnage of another, capable of containing an engine of twice the power, does not really meet with double the resistance.”[14] (A numeric illustration of this scaling appears at the end of this chapter.) He, Patterson, and Claxton agreed to target a 1,400-ton, 400-horsepower ship. They would name her, of course, Great Western.

In the post-Watt era, Britain boasted two great engine-building firms: Robert Napier’s in Glasgow in the north, and Maudslay’s in London in the south. After the death of Henry Maudslay, Marc Brunel’s former collaborator, in 1831, ownership of the business passed to his sons. But they lacked their father’s brilliance; the key to the firm’s future lay with the partner he had also bequeathed to them, Joshua Field. Brunel and his father both had ties to Maudslay, and so they tapped Field to design the engine for their great ship. Field chose a “side-lever” engine design, so called because a horizontal beam on the side of the engine, rocking on a central pivot, delivered power from the piston to the paddle wheels. This was the standard architecture for large marine engines, because it allowed the engine to be mounted deep in the hull, avoiding deck obstructions and keeping the ship’s center of gravity low. Field, however, added several novel features of his own devising. The most important of them was the spray condenser, which recycled some of the engine’s steam for re-use as fresh water for the boiler. This ameliorated the second-most pressing problem for long-distance steamships: the build-up of scale in the boiler from saltwater.[15]

The 236-foot-long, 35-foot-wide hull sported iron bracings to increase its strength (a contribution of Brunel), and cabins for 128 passengers. The extravagant, high-ceilinged grand saloon provided a last, luxurious Brunel touch. By far the largest steamship yet built, Great Western would have towered over most other ships in the London docks where she was built.[16] The competing group around Junius Smith had not been idle.
Smith, an American-born merchant who ran his business out of London, had dreamed of a steam-powered Atlantic crossing ever since 1832, when he idled away a fifty-four-day sail from England to New York, almost twice the usual duration. He formed the British and American Steam Navigation Company, and counted among his backers Macgregor Laird, the Scottish shipbuilder of the Niger River expedition. Their 1,800-ton British Queen would boast a 500-horsepower engine, built by the Maudslay company’s Scottish rival, Robert Napier.[17] But Smith’s group fell behind the Brunel consortium (this despite the fact that Brunel still led the engineering on the not-yet-completed Great Western Railway); the Great Western would launch first. In a desperate stunt to be able to boast of making the first Atlantic crossing, British and American dispatched the channel steamer Sirius on April 4, 1838 from Cork on the south coast of Ireland, laden with fuel and bound for New York. Great Western left Bristol just four days later, with fifty-seven crew (fifteen of them just for stoking coal) to serve a mere seven passengers, each paying the princely sum of 35 guineas for passage.[18]

A lithograph of the Great Western (“The Steamer Great Western,” H.R. Robinson).

Despite three short stops to deal with engine problems and a near-mutiny by disgruntled coal stokers working in miserable conditions, Great Western nearly overtook Sirius, arriving in New York just twelve hours behind her. In total the crossing took less than sixteen days—about half the travel time of a fast sailing packet—with coal to spare in the bunkers. The ledger was not all positive: the clank of the engine, the pall of smoke, and the ever-present coating of soot and coal dust drained the ocean of some of its romance; as historian Stephen Fox put it, “[t]he sea atmosphere, usually clean and bracing, felt cooked and greasy.” But sixty-six passengers ponied up for the return trip: “Already… ocean travelers had begun to accept the modernist bargain of steam dangers and discomforts in exchange for consistent, unprecedented speed.”[19]

In that first year, Great Western puffed alone through Atlantic waters, making four more round trips in 1838 and eking out a small profit. The British Queen launched at last in July 1839, and British and American launched an even larger ship, SS President, the following year. Among the British Queen’s first passengers on its maiden voyage to New York was Samuel Cunard, a name that would resonate in ocean travel for a century to come, and an object lesson in the difference between technical and business success. In 1840 his Cunard Line began providing transatlantic service with four Britannia-class paddle steamers.
Imitation Great Westerns (on a slightly smaller scale), they stood out not for their size or technical novelty but for their regularity and uniformity of service. But the most important factor in Cunard’s success was outmaneuvering the Great Western Steam Ship Company to secure a contract with the Admiralty for mail service to Halifax. This provided a steady and reliable revenue stream—starting at £60,000 a year—regardless of economic downturns. Moreover, once the Navy had come to depend on Cunard for speedy mail service, it had little choice but to keep upping the payments to keep his finances afloat.[20] Thanks to the savvy of Cunard, steam travel from Britain to America, a fantasy in 1836 (at least according to the likes of Dionysius Lardner), had become steady business four years later.

Brunel, however, had no patience for the mere making of money. He wanted to build monuments: creations to stand the test of time, things never seen or done before. So when, soon after the launching of the Great Western, he began to design his next great steamship, he decided he would build it with a hull of solid iron.
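The cube-square scaling that Brunel cited in his 1836 report is easy to check numerically. Below is a minimal Python sketch; the scale factors are purely illustrative, and the relationship assumes, as the text does, that coal capacity tracks volume while resistance (and so fuel burned per mile at a given speed) tracks surface area.

```python
# A rough numeric check of Brunel's cube-square argument: scale every linear
# dimension of a hull by k, and coal capacity grows as k^3 while resistance
# (hence fuel burned per mile at a given speed) grows only as k^2, so range
# grows roughly as k. The scale factors are illustrative, not real ships.

def scaled_hull(k: float):
    """Return (relative coal capacity, relative resistance, relative range)."""
    capacity = k ** 3      # ~ volume
    resistance = k ** 2    # ~ wetted surface area
    return capacity, resistance, capacity / resistance

for k in (1.0, 1.26, 2.0):   # 1.26 ~ cube root of 2, i.e. "double the tonnage"
    cap, res, rng = scaled_hull(k)
    print(f"k={k:.2f}: {cap:.2f}x coal, {res:.2f}x resistance, {rng:.2f}x range")

# The k=1.26 row matches Brunel's example: a ship of double the tonnage meets
# only about 1.6 times the resistance, not double.
```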

The Era of Fragmentation, Part 3: The Statists

In the spring of 1981, after several smaller trials, the French telecommunications administration (Direction générale des Télécommunications, or DGT) began a large-scale videotex experiment in the Breton département of Ille-et-Vilaine, named after its two main rivers. This was the prelude to the full launch of the system across l’Hexagone in the following year. The DGT called their new system Télétel, but before long everyone was calling it Minitel, a synecdoche derived from the name of the lovable little terminals that were distributed free of charge, by the hundreds of thousands, to French telephone subscribers.

Among all the consumer-facing information service systems of this “era of fragmentation,” Minitel deserves our special attention, and thus its own chapter in this series, for three particular reasons. First, the motive for its creation. Other post, telephone, and telegraph authorities (PTTs) built videotex systems, but no other state invested as heavily in making videotex a success, nor gave so much strategic weight to that success. Entangled with hopes for a French economic and strategic renaissance, Minitel was meant not just to produce new telecom revenues or generate more network traffic, but to prime the pump for the entire French technology sector. Second, the extent of its reach. The DGT provided Minitel terminals to subscribers free of charge, and levied all charges at time of use rather than requiring an up-front subscription. This meant that, although many of them used the system infrequently, more people had access to Minitel than to even the largest American on-line services of the 1980s, despite France’s much smaller population. The comparison to its nearest direct equivalent, Britain’s Prestel, which never broke 100,000 subscribers, is even more stark. Finally, there is the architecture of its backend systems. Every other commercial purveyor of digital services was a monolith, with all services hosted on its own machines. While they may have collectively formed a competitive market, each of their systems was structured internally as a command economy. Minitel, despite being the product of a state monopoly, was ironically the only system of the 1980s that created a free market for information services. The DGT, acting as an information broker rather than an information supplier, provided one possible model for exiting the era of fragmentation.

Playing Catch Up

It was not by happenstance that the Minitel experiments began in Brittany. In the decades after World War II, the French government had deliberately seeded the region, whose economy still relied heavily upon agriculture and fishing, with an electronics and telecommunications industry. This included two major telecom research labs: the Centre Commun d’Études de Télévision et Télécommunications (CCETT) in Rennes, the region’s capital, and a branch of the Centre National d’Études des Télécommunications (CNET) in Lannion, on the northern coast.

The CCETT lab in Rennes

Themselves the product of an effort to bring a lagging region into the modern era, these research departments found themselves by the late 1960s and early 1970s playing catch-up with their peers in other countries. The French phone network of the late 1960s was an embarrassment for a country that, under de Gaulle, wished to see itself as a resurgent world power. It still relied heavily on switching infrastructure built in the first decades of the century, and only 75% of the network was automated by 1967.
The rest still depended on manual operators, which had been all but eliminated in the U.S. and the rest of Western Europe. There were only thirteen phones for every 100 inhabitants of France, compared to twenty-one in neighboring Britain, and nearly fifty in the countries with the most advanced telecommunications systems, Sweden and the U.S. France therefore began a massive investment program of rattrapage, or “catch-up,” in the 1970s. Rattrapage ramped up steeply after the 1974 election of Valéry Giscard d’Estaing to the presidency of France, and his appointment of a new director for the DGT, Gérard Théry. Both were graduates of France’s top engineering school, l’École Polytechnique, and both believed in the power of technology to improve society. Théry set about making the DGT’s bureaucracy more flexible and responsive, and Giscard secured 100 billion francs in funding from Parliament for modernizing the telephone network, money that paid for the installation of millions more phones and the replacement of old hardware with computerized digital switches. Thus France dispelled its reputation as a sad laggard in telephony.

But in the meantime new technologies had appeared in other nations that took telecommunications in new directions – videophone, fax, and the fusion of computer services with communication networks. The DGT wanted to ride the crest of this new wave, rather than having to play catch-up again. In the early 1970s, Britain announced two separate teletext systems, which would deliver rotating screens of data to television sets in the blanking intervals of television broadcasts. CCETT, the DGT’s joint venture with France’s television broadcaster, the Office de radiodiffusion-télévision française (ORTF), launched two projects in response. DIDON1 was modeled closely on the British television broadcasting approach, but ANTIOPE2 took a more ambitious tack, investigating the delivery of screens of text independently of the communications channel.

Bernard Marti in 2007

Bernard Marti headed the ANTIOPE team in Rennes. He was yet another polytechnicien (class of 1963), and had joined CCETT from the ORTF, where he specialized in computer animation and digital television. In 1977, Marti’s team merged the ANTIOPE display technology with ideas borrowed from CNET’s TIC-TAC3, a system for delivering interactive digital services over the telephone. This fusion, dubbed TITAN4, was basically equivalent to the British Viewdata system that later evolved into Prestel. Like ANTIOPE it used a television to display screens of digital information, but it allowed users to interact with the computer rather than merely receiving data passively. Moreover, both the commands to the computer and the screen data it returned passed over a telephone line, not over the air. Unlike Viewdata, TITAN supported a full alphabetic keyboard, not just a telephone keypad. In order to demonstrate the system at a Berlin trade fair, the team used France’s Transpac packet-switching network to mediate between the terminals and the CCETT computer in Rennes. Théry’s labs had assembled an impressive tech demo, but as of yet none of it had left the laboratory, and it had no obvious path to public use.

Télématique

In the fall of 1977, DGT director Gérard Théry, satisfied with how the modernization of the phone network was progressing, turned his attention to the British challenge in videotex. To develop a strategic response, he first looked to CCETT and CNET, where he found TITAN and TIC-TAC prototypes ready to be put to use.
He turned these experimental raw materials over to his development office (the DAII) to be molded into products with a clear path to market and a business strategy. The DAII recommended pursuing two projects: first, a videotex experiment to test out a variety of services in a town near Versailles, and second, investment in an electronic phone directory, intended to replace the paper phone book. Both would use Transpac as the networking backbone, and TITAN technology for the frontend, with color imagery, character-based graphics, and a full keyboard for input.

An early experimental Télétel setup, before the idea of using the TV as the display was abandoned.

The strategy the DAII devised for videotex differed from Britain’s in three important ways. First, whereas Prestel hosted all of the videotex content itself, the DGT planned to serve only as a switchboard from which users could reach any number of different privately hosted service providers, running any type of computer that could connect to Transpac and serve valid ANTIOPE data. Second, they decided to abandon the television as the display unit and go with custom, all-in-one terminals. People bought TVs to watch TV, the DGT leadership reasoned, and would not want to tie up their screen with new services like the electronic phone book. Moreover, cutting the TV set out of the picture meant that the DGT would not have to negotiate over the launch with their counterparts at Télédiffusion de France (TDF), the successor to the ORTF5. Finally, and most audaciously, France cracked the chicken-and-egg problem (that a network without users was unattractive to service providers, and vice versa) by planning to lease those all-in-one videotex terminals free of charge.

Despite these bold plans, however, videotex remained a second-tier priority for Théry. When it came to ensuring the DGT’s place at the forefront of communications technology, his focus was on developing the fax into a nationwide consumer service. He believed that fax messaging could take over a huge portion of the market for written communication from the post office, whose bureaucrats the DGT looked upon as hidebound fuddy-duddies. Théry’s priorities changed within months, however, with the completion of a government report in early 1978 entitled The Computerization of Society. Released to bookstores in a paperback edition in May, it sold 13,500 copies in its first month, and a total of 125,000 copies over the following decade, quite a blockbuster for a government report.6

How did such a seemingly recondite topic engender such excitement? The authors, Simon Nora and Alain Minc, officers in the General Inspectorate of Finance, had been asked to write the report by the Giscard government, to consider the threat and the opportunity presented by the growing economic and cultural significance of the computer. By the mid-1970s, it was becoming clear to most technically minded intellectuals that computing power could and likely would be democratized, brought to the masses in the form of new computer-mediated services. Yet for decades the United States had led the way in all forms of digital technology, and American firms held a seemingly unassailable grip on the market for computer hardware. The leaders of France considered the democratization of computers a huge opportunity for French society, yet they did not want to see France become a dependent satellite of a dominating foreign power.
Nora and Minc’s report presented a synthesis that resolved this tension, proposing a project that would catapult France into the post-modern age of information. The nation would go directly from trailing the pack in computing to leading it, by building the first national infrastructure for digital services – computing centers, databases, standardized networks – all of which would serve as the substrate for an open, democratic marketplace in digital services. This would, in turn, stimulate native French expertise and industrial capacity in computer hardware, software, and networking. Nora and Minc called this confluence of computers and communications télématique, a fusion of telecommunications and informatique (the French word for computing or computer science). “Until recently,” they wrote, computing

…remained the privilege of the large and the powerful. It is mass computing that will come to the fore from now on, irrigating society, as electricity did. La télématique, however, in contrast to electricity, will not transmit an inert current, but information, that is to say, power.

The Nora-Minc report, and the resonance it had within the Giscard government, put the effort to commercialize TITAN in a whole new light. Before the report, the DGT’s videotex strategy had been a response to their British rivals, intended to avoid being caught unprepared and forced to operate under a British technical standard for videotex. Had it remained only that, France’s videotex efforts might well have languished, ending up much like Prestel: a niche service for a few curious early adopters and a handful of business sectors that found it useful. After Nora-Minc, however, videotex could only be construed as a central component of télématique, the basis for building a new future for the whole French nation, and it would receive more attention and investment than it might otherwise ever have hoped for. The effort to launch Minitel on a grand scale gained backing from the French state that might otherwise have failed to materialize (as it did fail for Théry’s plans for a national fax service, which dwindled to a mere Minitel printer accessory). This support included the funding to provide millions of terminals to the populace, free of charge. The DGT argued that the cost of the terminals would be offset by the savings from no longer printing and distributing the phone book, and by new network traffic stimulated by the Minitel service. Whether they sincerely believed this or not, it provided at least a fig leaf of commercial rationale for a massive industrial stimulus program, starting with Alcatel (paid billions of francs to manufacture terminals) and running downstream to the Transpac network, Minitel service providers, the computers purchased by those providers, and the software services required to run an on-line business.

Man in the Middle

In purely commercial terms, Minitel did not in fact contribute much to the DGT’s bottom line. It first achieved profitability on an annual basis in 1989, and if it ever achieved overall net profitability, it was not until well into its slow but terminal decline in the later 1990s. Nor did it achieve Nora and Minc’s aspiration to create an information-driven renaissance of French industry and society.
Alcatel and other makers of telecom equipment did benefit from the contracts to build terminals, and the French Transpac network benefited from a large increase in traffic – though, unfortunately, with the X.25 protocol they turned out to have bet on the wrong packet-switching technology in the long term. The thousands of Minitel service providers, however, mostly got their hardware and systems software from American suppliers. The techies who set up their own online services eschewed both the French national champion, Bull, and the dreaded giant of enterprise sales, IBM, in favor of scrappy Unix boxes from the likes of Texas Instruments and Hewlett-Packard.

So much for Minitel as industrial policy; what about its role in energizing French society with new information services, which would reach democratically into both the most elite arrondissements of Paris and the smallest villages of Picardy? Here it achieved rather more, though still mixed, success. The Minitel system grew rapidly, from about 120,000 terminals at its initial large-scale deployment in 1983, to over 3 million in 1987 and 5.6 million in 1990.7 However, with the exception of the first few minutes of the electronic phone book, actually using those terminals cost money on a minute-by-minute basis, and there is no doubt that usage was distributed much more unequally than the equipment. The most heavily used services, the online chat rooms, could easily burn hours of call time in an evening, at a base rate of 60 francs per hour (equivalent to about $8, more than double the U.S. minimum wage at the time). Nonetheless, nearly 30 percent of French citizens had access to a Minitel terminal at home or work in 1990. France was undoubtedly the most online country (if I may use that awkward adjective) in the world at that time. In that same year, the two largest online services in the United States, that colossus of computer technology, totaled just over a million subscribers, in a population of 250 million.8 And the catalog of services that one could dial into grew as rapidly as the number of terminals – from 142 in 1983 to 7,000 in 1987 and nearly 15,000 in 1990. Ironically, a paper directory was needed to index all of the services available on this terminal that was intended to supplant the phone book. By the late 1980s that directory, Listel, ran to 650 pages.9

A man using a Minitel terminal

Beyond the DGT-provided phone directory, services ran the gamut from commercial to social, and covered many of the major categories we still associate today with being online – shopping and banking, travel booking, chat rooms, message boards, games. To connect to a service, a Minitel user would dial an access number, most often 3615, which connected his phone line to a special computer in his local telephone switching office called a point d’accès vidéotexte, or PAVI. Once connected to the PAVI, the user could then enter a further code to indicate which Minitel service they wished to reach. Companies plastered their access codes in mnemonic alphabetic form onto posters and billboards, much as they would do with website URLs in later decades: 3615 TMK, 3615 SM, 3615 ULLA. The 3615 code connected users into the PAVI’s “kiosk” billing system, introduced in 1984, which allowed Minitel to operate much like a news kiosk, offering a variety of wares for sale from different vendors, all from a single convenient location.
Of the sixty francs charged per hour for basic kiosk services, forty went to the service itself, and twenty to the DGT to pay for the use of the PAVI and the Transpac network. (A small sketch of this accounting appears at the end of this chapter.) All of this was entirely transparent to the user; the charges would appear automatically on their next telephone bill, and they never needed to provide payment information to, or establish a financial relationship with, the service provider.

As access to the open internet began to spread in the 1990s, it became popular for the cognoscenti to retrospectively deprecate the online services of the era of fragmentation – the CompuServes, the AOLs – as “walled gardens”10. The implied contrast in the metaphor is to the freedom of the open wilderness. If CompuServe is a carefully cultivated plot of land, the internet, from this point of view, is Nature itself. Of course, the internet is no more natural than CompuServe or Minitel. There is more than one way to architect an online service, and all of them are based on human choices. But if we stick to this metaphor of the natural versus the cultivated, Minitel sits somewhere in between. We might compare it to a national park: its boundaries are controlled, regulated, and tolled, but within them one can wander freely and visit whichever wonders strike one’s interest.

The DGT’s position in the middle of the market between user and service, with a monopoly on the user’s entry point and the entire communications pathway between the two parties, offered advantages over both the monolithic, all-inclusive service providers like CompuServe and the more open architecture of the later Internet. Unlike the former, once past the initial choke point, the system opened out into a free market of services unlike anything else available at the time. Unlike the latter, there was no monetization problem. The user paid automatically for computer time used, avoiding the need for the bloated and intrusive edifice of ad-tech that supports the bulk of the modern Internet. Minitel also offered a secure end-to-end connection. Every bit traveled only over DGT hardware, so as long as you trusted both the DGT and the service to which you were connected, your communications were safe from attackers.

This system also had some obvious disadvantages compared to the Internet that succeeded it, however. For all its relative openness, one could not just turn on a server, connect it to the net, and be open for business. It required government pre-approval to make your server accessible via a PAVI. More fatally, Minitel’s technical structure was terribly rigid, tied to a videotex protocol that, while advanced for the mid-1980s, appeared dated and extremely restrictive within a decade.11 It supported pages of text, in twenty-four rows of forty characters each (with primitive character-based graphics), and nothing more. None of the characteristic features of the mid-1990s World Wide Web – free-scrolling text, GIFs and JPEGs, streaming audio, etc. – were possible on Minitel.

Minitel offered a potential road out of the era of fragmentation, but, outside of France, it was a road not taken. The DGT, privatized as France Télécom in 1988, made a number of efforts to export the Minitel technology, to Belgium, Ireland, and even the U.S. (via a system in San Francisco called 101 Online). But without the state-funded stimulus of free terminals, none of them had anything like the success of the original.
And with France Télécom, like most other PTTs around the world, now expected to fend for itself as a lean business in a competitive international market, the era when such a stimulus was politically viable had passed. Though the Minitel system did not finally cease operation until 2012, usage went into decline from the mid-1990s onward. In its twilight years it remained relatively popular for banking and financial services, thanks to the security of the network and the availability of terminals with an accessory that could securely read and transmit data from bank and credit cards. Otherwise, French online enthusiasts increasingly turned to the Internet. But before we return to that system’s story, we have one last stop to visit on our tour of the era of fragmentation.

[Previous] [Next]

Further Reading
Julien Mailland and Kevin Driscoll, Minitel: Welcome to the Internet (2017)
Marie Marchand, The Minitel Saga (1988)
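The kiosk accounting described above is simple enough to capture in a few lines. Here is a minimal Python sketch; the hourly rate and the revenue split are the figures given in the text, while the function name and the sample session are purely illustrative.

```python
# A toy model of Minitel "kiosk" (3615) billing as described above.
# Rates are those given in the text: 60 francs per connect-hour, of which
# 40 went to the service provider and 20 to the DGT for the PAVI and Transpac.

HOURLY_RATE = 60          # francs billed to the subscriber, per hour
PROVIDER_SHARE = 40 / 60  # fraction passed on to the service provider
DGT_SHARE = 20 / 60       # fraction kept by the DGT

def kiosk_charge(minutes: float) -> dict:
    """Return the phone-bill charge for one session and how it was split."""
    total = HOURLY_RATE * minutes / 60
    return {
        "billed_to_subscriber": round(total, 2),
        "to_service_provider": round(total * PROVIDER_SHARE, 2),
        "to_DGT": round(total * DGT_SHARE, 2),
    }

# An evening spent in a chat room ("3615 ULLA") for two and a half hours:
print(kiosk_charge(minutes=150))
# {'billed_to_subscriber': 150.0, 'to_service_provider': 100.0, 'to_DGT': 50.0}
```

The charge landed on the subscriber's next telephone bill, which is why no separate payment relationship with the provider was needed.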

ARPANET, Part 2: The Packet

By the end of 1966, Robert Taylor had set in motion a project to interlink the many computers funded by ARPA, inspired by the “intergalactic network” vision of J.C.R. Licklider. Taylor put the responsibility for executing that project into the capable hands of Larry Roberts. Over the following year, Roberts made several crucial decisions that would reverberate through the technical architecture and culture of ARPANET and its successors, in some cases for decades to come. The first of these in importance, though not in chronology, was to determine the mechanism by which messages would be routed from one computer to another.

The Problem

If computer A wants to send a message to computer B, how does the message find its way from the one to the other? In theory, one could allow any node in a communications network to communicate with any other node by linking every such pair with its own dedicated cable. To communicate with B, A would simply send a message over the outgoing cable that connects to B. Such a network is termed fully-connected. At any significant size, however, this approach quickly becomes impractical, since the number of connections necessary increases with the square of the number of nodes.1 Instead, some means is needed for routing a message, upon arrival at some intermediate node, on toward its final destination.

As of the early 1960s, two basic approaches to this problem were known. The first was store-and-forward message switching. This was the approach used by the telegraph system. When a message arrived at an intermediate location, it was temporarily stored there (typically in the form of paper tape) until it could be re-transmitted on to its destination, or to another switching center closer to that destination. Then the telephone appeared, and a new approach was required. A multiple-minute delay for each utterance in a telephone call to be transcribed and routed to its destination would result in an experience rather like trying to converse with someone on Mars. Instead the telephone system used circuit switching. The caller began each telephone call by sending a special message indicating whom they were trying to reach. At first this was done by speaking to a human operator, later by dialing a number that was processed by automatic switching equipment. The operator or equipment established a dedicated electric circuit between caller and callee. In the case of a long-distance call, this might take several hops through intermediate switching centers. Once this circuit was completed, the actual telephone call could begin, and that circuit was held open until one party or the other terminated the call by hanging up.

The data links that would be used in ARPANET to connect time-shared computers partook of qualities of both the telegraph and the telephone. On the one hand, data messages came in discrete bursts, like the telegraph, unlike the continuous conversation of a telephone. But these messages could come in a variety of sizes for a variety of purposes, from console commands only a few characters long to large data files being transferred from one computer to another. If the latter suffered some delays in arriving at their destination, no one would particularly mind. But remote interactivity required very fast response times, rather like a telephone call. One important difference between computer data networks and both the telephone and the telegraph was the error-sensitivity of machine-processed data.
A single character changed or lost in a telegram, or a fragment of a word dropped in a telephone conversation, was unlikely to seriously impair human-to-human communication. But if noise on the line flipped a single bit from 0 to 1 in a command to a remote computer, that could entirely change the meaning of that command. Therefore every message would have to be checked for errors, and re-transmitted if any were found. Such repetition would be very costly for large messages, which would be all the more likely to be disrupted by errors, since they took longer to transmit. A solution to these problems was arrived at independently on two different occasions in the 1960s, but the later instance was the first to come to the attention of Larry Roberts and ARPA.

The Encounter

In the fall of 1967, Roberts arrived in Gatlinburg, Tennessee, hard by the forested peaks of the Great Smoky Mountains, to deliver a paper on ARPA’s networking plans. Almost a year into his stint at the Information Processing Techniques Office (IPTO), many areas of the network design were still hazy, among them the solution to the routing problem. Other than a vague mention of blocks and block size, the only reference to it in Roberts’ paper is in a brief and rather noncommittal passage at the very end: “It has proven necessary to hold a line which is being used intermittently to obtain the one-tenth to one second response time required for interactive work. This is very wasteful of the line and unless faster dial up times become available, message switching and concentration will be very important to network participants.”2 Evidently, Roberts had still not entirely decided whether to abandon the approach he had used in 1965 with Tom Marrill, that is to say, connecting computers over the circuit-switched telephone network via an auto-dialer.

Coincidentally, however, someone else attending the same symposium had a much better-thought-out idea of how to solve the problem of routing in data networks. Roger Scantlebury had crossed the Atlantic from the British National Physical Laboratory (NPL) to present his own paper. Scantlebury took Roberts aside after hearing his talk, and told him all about something called packet-switching, a technique his supervisor at the NPL, Donald Davies, had developed. Davies’ story and achievements are not generally well known in the U.S., although in the fall of 1967 Davies’ group at the NPL was at least a year ahead of ARPA in its thinking.

Davies, like many early pioneers of electronic computing, had trained as a physicist. He graduated from Imperial College, London in 1943, when he was only 19 years old, and was immediately drafted into the “Tube Alloys” program – Britain’s code name for its nuclear weapons project. There he was responsible for supervising a group of human computers, using mechanical and electric calculators to crank out numerical solutions to problems in nuclear fission.3 After the war, he learned from the mathematician John Womersley about a project he was supervising at the NPL, to build an electronic computer that would perform the same kinds of calculations at vastly greater speed. The computer, designed by Alan Turing, was called ACE, for “automatic computing engine.” Davies was sold, and got himself hired at NPL as quickly as he could. After contributing to the detailed design and construction of the ACE machine, he remained heavily involved in computing as a research leader at NPL.
He happened to be in the United States in 1965 for a professional meeting in that capacity, and used the occasion to visit several major time-sharing sites to see what all the buzz was about. In the British computing community, time-sharing in the American sense of sharing a computer interactively among multiple users was unknown. Instead, time-sharing meant splitting a computer’s workload across multiple batch-processing programs (to allow, for example, one program to proceed while another was blocked reading from a tape).4 Davies’ travels took him to Project MAC at MIT, the RAND Corporation’s JOSS Project in California, and the Dartmouth Time-Sharing System in New Hampshire. On the way home, one of his colleagues suggested they hold a seminar on time-sharing to inform the British computing community about the new techniques they had learned about in the U.S. Davies agreed, and played host to a number of major figures in American computing, among them Fernando Corbató (creator of the Compatible Time-Sharing System at MIT) and Larry Roberts himself.

During the seminar (or perhaps immediately after), Davies was struck with the notion that the time-sharing philosophy could be applied to the links between computers, as well as to the computers themselves. Time-sharing computers gave each user a small slice of processor time before switching to the next, creating the illusion of an interactive computer at each user’s fingertips. Likewise, by slicing up each message into standard-sized pieces, which Davies called “packets,” a single communications channel could be shared by multiple computers or multiple users of a single computer. Moreover, this would address all the aspects of data communication that were poorly served by telephone- or telegraph-style switching. A user engaged interactively at a terminal, sending short commands and receiving short responses, would not have their single-packet messages blocked behind a large file transfer, since that transfer would be broken into many packets. And any corruption in such large messages would only affect a single packet, which could easily be re-transmitted to complete the message.

Davies wrote up his ideas in an unpublished 1966 paper, entitled “Proposal for a Digital Communication Network.” The most advanced telephone networks were then on the verge of computerizing their switching systems, and Davies proposed building packet-switching into that next-generation telephone network, thereby creating a single wide-band communications network that could serve a wide variety of uses, from ordinary telephone calls to remote computer access. By this time Davies had been promoted to Superintendent of NPL, and he formed a data communications group under Scantlebury to flesh out his design and build a working demonstration. Over the year leading up to the Gatlinburg conference, Scantlebury’s team had thus worked out the details of how to build a packet-switching network. The failure of a switching node could be dealt with by adaptive routing over multiple paths to the destination, and the failure of an individual packet by re-transmission. Simulation and analysis indicated an optimal packet size of around 1000 bytes – much smaller, and the loss of bandwidth to the header metadata required on each packet became too costly; much larger, and the response times for interactive users would be impaired too often by large messages.
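That tradeoff is easy to illustrate with a toy calculation. In the Python sketch below, the 50 kbps line speed anticipates the figure Roberts adopted for ARPANET (mentioned just below); the 16-byte header and the 100 KB file transfer are made-up illustrative values, not numbers from the NPL design.

```python
# An illustrative look at the packet-size tradeoff described above: small
# packets waste capacity on per-packet headers, large packets make short
# interactive messages wait behind long transfers on a shared line.

import math

LINE_BPS = 50_000      # line speed in bits per second (the ARPANET figure below)
HEADER = 16            # bytes of metadata per packet -- illustrative only
FILE_BYTES = 100_000   # a large file transfer -- illustrative only

for packet_bytes in (128, 1000, 8000):
    n_packets = math.ceil(FILE_BYTES / packet_bytes)
    overhead = n_packets * HEADER / (FILE_BYTES + n_packets * HEADER)
    # Worst case, an interactive user's one-packet message waits for one
    # full packet of the file transfer to finish:
    wait_ms = (packet_bytes + HEADER) * 8 / LINE_BPS * 1000
    print(f"{packet_bytes:>5} B packets: "
          f"{overhead:5.1%} header overhead, "
          f"{wait_ms:6.1f} ms worst-case wait behind one packet")
```

With these assumptions, 128-byte packets burn roughly a tenth of the line on headers, while 8,000-byte packets can stall an interactive keystroke for over a second; a size around 1,000 bytes keeps both costs modest.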
The paper delivered by Scantlebury contained details such as a packet layout format…

…and an analysis of the effect of packet size on network delay.

Meanwhile, Davies’ and Scantlebury’s literature search turned up a series of detailed research papers by an American who had come up with roughly the same idea several years earlier. Paul Baran, an electrical engineer at RAND Corporation, had not been thinking at all about the needs of time-sharing computer users, however. RAND was a Defense Department-sponsored think tank in Santa Monica, California, created in the aftermath of World War II to carry out long-range planning and analysis of strategic problems in advance of direct military needs. (System Development Corporation (SDC), the primary software contractor to the SAGE system and the site of one of the first networking experiments, as discussed in the last segment, had been spun off from RAND.) Baran’s goal was to ward off nuclear war by building a highly robust military communications net, one that could survive even a major nuclear attack. Such a network would make a Soviet preemptive strike less attractive, since it would be very hard to knock out America’s ability to respond by hitting a few key nerve centers. To that end, Baran proposed a system that would break messages into what he called message blocks, which could be independently routed across a highly redundant mesh of communications nodes, and reassembled only at their final destination.

ARPA had access to Baran’s voluminous RAND reports, but disconnected as they were from the context of interactive computing, their relevance to ARPANET was not obvious. Roberts and Taylor seem never to have taken notice of them. Instead, in one chance encounter, Scantlebury had provided everything to Roberts on a platter: a well-considered switching mechanism, its applicability to the problem of interactive computer networks, the RAND reference material, and even the name “packet.” The NPL’s work also convinced Roberts that higher speeds than he had contemplated would be needed to get good throughput, and so he upgraded his plans to 50 kilobit-per-second lines. For ARPANET, the fundamentals of the routing problem had been solved.5

The Networks That Weren’t

As we have seen, not one but two parties beat ARPA to the punch in figuring out packet-switching, a technique that has proved so effective that it is now the basis of virtually all communications. Why, then, was ARPANET the first significant network to actually make use of it? The answer is fundamentally institutional. ARPA had no official mandate to build a communications network, but it did have a large number of pre-existing research sites with computers, a “loose” culture with relatively little oversight of small departments like the IPTO, and piles and piles of money. Taylor’s initial 1966 request for ARPANET came to $1 million, and Roberts continued to spend that much or more every year from 1969 onward to build and operate the network.6 Yet for ARPA as a whole this amount of money was pocket change, and so none of his superiors worried too much about what Roberts was doing with it, so long as it could be vaguely justified as related to national defense.

By contrast, Baran at RAND had no means or authority to actually do anything. His work was pure research and analysis, which might be applied by the military services, if they desired to do so. In 1965, RAND did recommend his system to the Air Force, which agreed that Baran’s design was viable.
But the implementation fell within the purview of the Defense Communications Agency, which had no real understanding of digital communications. Baran convinced his superiors at RAND that it would be better to withdraw the proposal than to allow a botched implementation to sully the reputation of distributed digital communication. Davies, as Superintendent of the NPL, had rather more executive authority than Baran, but a more limited budget than ARPA, and no pre-existing social and technical network of research computer sites. He was able to build a prototype local packet-switching “network” (it had only one node, but many terminals) at NPL in the late 1960s, on a modest budget of £120,000 over three years.7 ARPANET spent roughly half that on annual operational and maintenance costs alone at each of its many network sites, excluding the initial investment in hardware and software.8 The organization that would have had the power to build a large-scale British packet-switching network was the Post Office, which operated the country’s telecommunications networks in addition to its traditional postal system. Davies managed to interest a few influential Post Office officials in his ideas for a unified, national digital network, but to change the momentum of such a large system was beyond his power. Licklider, through a combination of luck and planning, had found the perfect hothouse for his intergalactic network to blossom in.

That is not to say that everything besides the packet-switching concept came down to money. Execution matters, too. Moreover, several other important design decisions defined the character of ARPANET. The next one we will consider is how responsibilities would be divided between the host computers sending and receiving a message and the network over which they sent it.

[Previous] [Next]

Further Reading
Janet Abbate, Inventing the Internet (1999)
Katie Hafner and Matthew Lyon, Where Wizards Stay Up Late (1996)
Leonard Kleinrock, “An Early History of the Internet,” IEEE Communications Magazine (August 2010)
Arthur Norberg and Judy O’Neill, Transforming Computer Technology: Information Processing for the Pentagon, 1962-1986 (1996)
M. Mitchell Waldrop, The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal (2001)
