Internet Ascendant, Part 2: Going Private and Going Public

In the summer of 1986, Senator Al Gore, Jr., of Tennessee introduced an amendment to the act of Congress that authorized the budget of the National Science Foundation (NSF). He called for the federal government to study the possibilities for “communications networks for supercomputers at
universities and Federal research facilities.” To explain the purpose of this legislation, Gore called on a striking analogy:

One promising technology is the development of fiber optic systems for voice and data transmission. Eventually we will see a system of fiber optic systems being installed nationwide. America’s highways transport people and materials across the country. Federal freeways connect with state highways which connect in turn with county roads and city streets. To transport data and ideas, we will need a telecommunications highway connecting users coast to coast, state to state, city to city. The study required in this amendment will identify the problems and opportunities the nation will face in establishing that highway.1

In the following years, Gore and his allies would call for the creation of an “information superhighway”, or, more formally, a national information infrastructure (NII). As he intended, Gore’s analogy to the federal highway system summons to mind a central exchange that would bind together
various local and regional networks, letting all American citizens communicate with one another. However, the analogy also misleads – Gore did not propose the creation of a federally-funded and maintained data network. He envisioned that the information superhighway, unlike its concrete and
asphalt namesake, would come into being through the action of market forces, within a regulatory framework that would ensure competition, guarantee open, equal access to any service provider (what would later be known as “net neutrality”), and provide subsidies or other mechanisms to ensure
universal service to the least fortunate members of society, preventing the emergence of a gap between the information rich and information poor.2

Over the following decade, Congress slowly developed a policy response to the growing importance of computer networks to the American research community, to education, and eventually to society as a whole. Congress’ slow march towards an NII policy, however, could not keep up with the rapidly
growing NSFNET, overseen by the neighboring bureaucracy of the executive branch. Despite its reputation for sclerosis, bureaucracy exists precisely because of its capacity, unlike a legislature, to respond to events immediately, without deliberation. And so it happened that, between 1988 and
1993, the NSF crafted the policies that would determine how the Internet became private, and thus went public. It had to deal every year with novel demands and expectations from NSFNET’s users and peer networks. In response, it made decisions on the fly, decisions which rapidly outpaced
Congressional plans for guiding the development of an information superhighway. These decisions rested largely in the hands of a single man – Stephen Wolff.

Acceptable Use

Wolff earned a Ph.D. in electrical engineering at Princeton in 1961 (where he would have been a rough contemporary of Bob Kahn), and began what might have been a comfortable academic career, with a post-doctoral stint at Imperial College, followed by several years teaching at Johns Hopkins. But
then he shifted gears, and took a position at the Ballistic Research Laboratory in Aberdeen, Maryland. He stayed there for most of the 1970s and early 1980s, researching communications and computing systems for the U.S. Army. He introduced Unix into the lab’s offices, and managed Aberdeen’s
connection to the ARPANET.3

In 1986, the NSF recruited him to manage the NSF’s supercomputing backbone – he was a natural fit, given his experience connecting Army supercomputers to ARPANET. He became the principal architect of NSFNET’s evolution from that point until his departure in 1994, when he entered the private
sector as a manager for Cisco Systems. The original intended function of the net that Wolff was hired to manage had been to connect researchers across the U.S. to NSF-funded supercomputing centers. As we saw last time, however, once Wolff and the other network managers saw how much demand the
initial backbone had engendered, they quickly developed a new vision of NSFNET, as a communications grid for the entire American research and post-secondary education community.

However, Wolff did not want the government to be in the business of supplying network services on a permanent basis. In his view, the NSF’s role was to prime the pump, creating the initial demand needed to get a commercial networking services sector off the ground. Once that happened, Wolff felt
it would be improper for a government entity to be in competition with viable for-profit businesses. So he intended to get NSF out of the way by privatizing the network, handing over control of the backbone to unsubsidized private entities and letting the market take over.

This was very much in the spirit of the times. Across the Western world, and across most of the political spectrum, government leaders of the 1980s touted privatization and deregulation as the best means to unleash economic growth and innovation after the relative stagnation of the 1970s. As one
example among many, around the same time that NSFNET was getting off the ground, the FCC knocked down several decades-old constraints on corporations involved in broadcasting. In 1985, it removed the restriction on owning print and broadcast media in the same locality, and two years later it
nullified the fairness doctrine, which had required broadcasters to present multiple views on public-policy debates.

From his post at NSF, Wolff had several levers at hand for accomplishing his goals. The first lay in the interpretation and enforcement of the network’s acceptable use policy (AUP). In accordance with NSF’s mission, the initial policy for the NSFNET backbone, in effect until June 1990, required
all uses of the network to be in support of “scientific research and other scholarly activities.” This is quite restrictive indeed, and would seem to eliminate any possibility of commercial use of the network. But Wolff chose to interpret the policy liberally. Regular mailing list postings
about new product releases from a corporation that sold data processing software – was that not in support of scientific research? What about the decision to allow MCI’s email system to connect to the backbone, at the urging of Vint Cerf, who had left government employ to oversee the development
of MCI Mail? Wolff rationalized this – and other later interconnections to commercial email systems such as CompuServe’s – as in support of research by making it possible for researchers to communicate digitally with a wider range of people whom they might need to contact in the pursuit of their
work. A stretch, perhaps. But Wolff saw that allowing some commercial traffic on the same infrastructure that was used for public NSF traffic would encourage the private investment needed to support academic and educational use on a permanent basis.

Wolff’s strategy of opening the door of NSFNET as far as possible to commercial entities got an assist from Congress in 1992, when Congressman Rick Boucher, who helped oversee NSF as chair of the Science Subcommittee, sponsored an amendment to the NSF charter which authorized any additional uses
of NSFNET that would “tend to increase the overall capabilities of the networks to support such research and education activities.” This was an ex post facto validation of Wolff’s approach to commercial traffic, allowing virtually any activity as long as it produced profits that
encouraged more private investment into NSFNET and its peer networks.  

Dual-Use Networks

Wolff also fostered the commercial development of networking by supporting the regional networks’ reuse of their networking hardware for commercial traffic. As you may recall, the NSF backbone linked together a variety of not-for-profit regional nets, from NYSERNet in New York to Sesquinet in
Texas to BARRNet in northern California. NSF did not directly fund the regional networks, but it did subsidize them indirectly, via the money it provided to labs and universities to offset the costs of their connection to their neighborhood regional net. Several of the regional nets then used
this same subsidized infrastructure to spin off a for-profit commercial enterprise, selling network access to the public over the very same wires used for the research and education purposes sponsored by NSF. Wolff encouraged them to do so, seeing this as yet another way to accelerate the
transition of the nation’s research and education infrastructure to private control.

This, too, accorded neatly with the political spirit of the 1980s, which encouraged private enterprise to profit from public largesse, in the expectation that the public would benefit indirectly through economic growth. One can see parallels with the dual-use regional networks in the 1980
Bayh-Dole Act, which defaulted ownership of patents derived from government-funded research to the organization performing the work, not to the government that paid for it.

The most prominent example of dual-use in action was PSINet, a for-profit company initially founded as Performance Systems International in 1988. It was created by William Schrader and Martin Schoffstall, respectively the co-founder of NYSERNet and one of its vice presidents. Schoffstall, a
former BBN engineer and co-author of the Simple Network Management Protocol (SNMP) for managing the devices on an IP network, was the key technical leader. Schrader, an ambitious Cornell biology major and MBA who had helped his alma mater set up its supercomputing center and get it
connected to NSFNET, provided the business drive. He firmly believed that NYSERNet should be selling service to businesses, not just educational institutions. When the rest of the board disagreed, he quit to found his own company, first contracting with NYSERNet for service, and later raising
enough money to acquire its assets. PSINet thus became one of the earliest commercial internet service providers, while continuing to provide non-profit service to colleges and universities seeking access to the NSFNET backbone.4

Wolff’s final source of leverage for encouraging a commercial Internet lay in his role as manager of the contracts with the Merit-IBM-MCI consortium that operated the backbone. The initial impetus for change in this dimension came not from Wolff, however, but from the backbone operators
themselves.  

A For-Profit Backbone

MCI and its peers in the telecommunications industry had a strong incentive to find or create more demand for computer data communications. They had spent the 1980s upgrading their long-line networks from coaxial cable and microwave – already much higher capacity than the old copper lines – to
fiber optic cables. These cables, which transmitted laser light through glass, had tremendous capacity, limited mainly by the technology in the transmitters and receivers on either end, rather than the cable itself. And that capacity was far from saturated. By the early 1990s, many companies had
deployed OC-48 transmission equipment with 2.5 Gbps of capacity, an almost unimaginable figure a decade earlier. An explosion in data traffic would therefore bring in new revenue at very little marginal cost – almost pure profit.5

The desire to gain expertise in the coming market in data communications helps explain why MCI was willing to sign on to the NSFNET bid proposed by Merit, which massively undercut the competing bids (at $14 million for five years, versus the $40 million and $25 million proposed by their
competitors6), and surely implied a short-term financial loss for MCI and IBM. But by 1989, they hoped to start turning a profit from their investment. The existing backbone was approaching the saturation point, with 500 million packets a month, a 500% year-over-year
increase.7 So, when NSF asked Merit to upgrade the backbone from 1.5 Mbps T1 lines to 45 Mbps T3, they took the opportunity to propose to Wolff a new contractual arrangement.

T3 was a new frontier in networking – no prior experience or equipment existed for digital networks of this bandwidth, and so the companies argued that more private investment would be needed, requiring a restructuring that would allow IBM and Merit to share the new infrastructure with for-profit
commercial traffic – a dual-use backbone. To achieve this, the consortium would form a new non-profit corporation, Advanced Network & Services, Inc. (ANS), which would supply T3 networking services to NSF. A subsidiary called ANS CO+RE Systems would sell the same services at a profit to any
clients willing to pay. Wolff agreed to this, seeing it as just another step in the transition of the network towards commercial control. Moreover, he feared that continuing to block commercial exploitation of the backbone would lead to a bifurcation of the network, with suppliers like ANS doing
an end-run around NSFNET to create their own, separate, commercial Internet.

Up to that point, Wolff’s plan for gradually getting NSF out of the way had no specific target date or planned milestones. A workshop on the topic held at Harvard in March 1990, in which Wolff and many other early Internet leaders participated, considered a variety of options without laying out
any concrete plans.8 It was ANS’ stratagem that triggered the cascade of events that led directly to the full privatization and commercialization of NSFNET.

It began with a backlash. Despite Wolff’s good intentions, IBM and MCI’s ANS maneuver created a great deal of disgruntlement in the networking community. It became a problem exactly because of the for-profit networks attached to the backbone that Wolff had promoted. So far they had gotten along
reasonably well with one another, because they all operated as peers on the same terms. But with ANS, a for-profit company held a de facto monopoly on the backbone at the center of the Internet.9 Moreover, despite Wolff’s efforts to interpret the AUP loosely, ANS chose to interpret it
strictly, and refused to interconnect the non-profit portion of the backbone (which carried NSF traffic) with for-profit networks like PSINet, since that would require a direct mixing of commercial and non-commercial traffic. When this created an uproar, they backpedaled, and came up with a new
policy, allowing interconnection for a fee based on traffic volume.

PSINet would have none of this. In the summer of 1991, they banded together with two other for-profit Internet service providers – UUNET, which had begun by selling commercial access to Usenet before adding Internet service; and the California Education and Research Federation Network, or
CERFNet, operated by General Atomics – to form their own exchange, bypassing the ANS backbone. The Commercial Internet Exchange (CIX) consisted at first of just a single routing center in Washington D.C. which could transfer traffic among the three networks. They agreed to peer at no charge,
regardless of the relative traffic volume, with each network paying the same fee to CIX to operate the router. New routers in Chicago and Silicon Valley soon followed, and other networks looking to avoid ANS’ fees joined as well.

Divestiture

Rick Boucher, the Congressman whom we met above as a supporter of NSF commercialization, nonetheless asked the Office of the Inspector General to investigate the propriety of Wolff’s actions in the ANS affair. It found NSF’s actions precipitous, but not malicious or corrupt.
Nevertheless, Wolff saw that the time had come to divest control of the backbone. With ANS CO+RE and CIX, privatization and commercialization had begun in earnest, but in a way that risked splitting the unitary Internet into multiple disconnected fragments, as CIX and ANS refused to connect with
one another. NSF therefore drafted a plan for a new, privatized network architecture in the summer of 1992, released it for public comment, and finalized it in May of 1993. NSFNET would shut down in the spring of 1995, and its assets would revert to IBM and MCI. The regional networks could
continue to operate, with financial support from the NSF gradually phasing out over a four-year period, but would have to contract with a private ISP for Internet access.

But in a world of many competitive internet access providers, what would replace the backbone? What mechanism would link these opposed private interests into a cohesive whole? Wolff’s answer was inspired by the exchanges already built by cooperatives like CIX – NSF would contract out the creation
of four Network Access Points (NAPs), routing sites where various vendors could exchange traffic. Having four separate contracts would avoid repeating the ANS controversy, by preventing a monopoly on the points of exchange. One NAP would reside at the pre-existing, and cheekily named,
Metropolitan Area Ethernet East (MAE-East) in Vienna, Virginia, operated by Metropolitan Fiber Systems (MFS). MAE-West, operated by Pacific Bell, was established in San Jose, California; Sprint operated another NAP in Pennsauken, New Jersey, and Ameritech one in Chicago. The transition went
smoothly10, and NSF decommissioned the backbone right on schedule, on April 30, 1995.11

The Break-up

Though Gore and others often invoked the “information superhighway” as a metaphor for digital networks, there was never serious consideration in Congress of using the federal highway system as a direct policy model. The federal government paid for the building and maintenance of interstate
highways in order to provide a robust transportation network for the entire country. But in an era when both major parties took deregulation and privatization for granted as good policy, a state-backed system of networks and information services on the French model of Transpac and Minitel was
not up for consideration.12

Instead, the most attractive policy model for Congress as it planned for the future of telecommunication was the long-distance market created by the break-up of the Bell System between 1982 and 1984. In 1974, the Justice Department filed suit against AT&T, its first major suit against the
organization since the 1950s, alleging that it had engaged in anti-competitive behavior in violation of the Sherman Antitrust Act. Specifically, they accused the company of using its market power to exclude various innovative new businesses from the market – mobile radio operators, data
networks, satellite carriers, makers of specialized terminal equipment, and more. The suit thus clearly drew much of its impetus from the disputes, ongoing since the early 1960s (and described in an earlier installment), between AT&T and the likes of MCI and Carterfone.

When it became clear that the Justice Department meant business, and intended to break the power of AT&T, the company at first sought redress from Congress. John de Butts, chairman and CEO since 1972, attempted to push a “Bell bill” – formally the Consumer Communications Reform Act – through
Congress. It would have enshrined into law AT&T’s argument that the benefits of a single, universal telephone network far outweighed any risk of abusive monopoly, risks which in any case the FCC could already effectively check. But the proposal received stiff opposition in the House
Subcommittee on Communications, and never reached a vote on the floor of either Congressional chamber.

In a change of tactics, in 1979 the board replaced the combative de Butts – who had once declared openly to an audience of state telecommunications regulators the heresy that he opposed competition and espoused monopoly – with the more conciliatory Charles Brown. But it was too late by then to
stop the momentum of the antitrust case, and it became increasingly clear to the company’s leadership that they would not prevail. In January 1982, therefore, Brown agreed to a consent decree that would have the presiding judge in the case, Harold Greene, oversee the break-up of the Bell System
into its constituent parts.

The various Bell companies that brought copper to the customer’s premises, which generally operated by state (New Jersey Bell, Indiana Bell, and so forth), were carved up into seven blocks called Regional Bell Operating Companies (RBOCs). Working clockwise around the country, they were NYNEX in the
northeast, Bell Atlantic, Bell South, Southwestern Bell, Pacific Telesis, US West, and Ameritech. All of them remained regulated entities with an effective monopoly over local traffic in their region, but were forbidden from entering other telecom markets.

AT&T itself retained the “long lines” division for long-distance traffic. Unlike local phone service, however, the settlement opened this market to free competition from any entrant willing and able to pay the interconnection fees to transfer calls in and out of the RBOCs. A residential
customer in Indiana would always have Ameritech as their local telephone company, but could sign up for long-distance service with anyone.

However, splitting apart the local and long-distance markets meant forgoing the subsidies that AT&T had long routed to rural telephone subscribers, under-charging them by over-charging wealthy long-distance users. A sudden spike in rural telephone prices across the nation was not politically
tenable, so the deal preserved these transfers via a new organization, the non-profit National Exchange Carrier Association, which collected fees from the long-distance companies and distributed them to the RBOCs.

The new structure worked. Two major competitors entered the market in the 1980s, MCI and Sprint, and cut deeply into AT&T’s market share. Long-distance prices fell rapidly. Though it is arguable how much of this was due to competition per se, as opposed to the advent of ultra-high-bandwidth
fiber optic networks, the arrangement was generally seen as a great success for deregulation and a clear argument for the power of market forces to modernize formerly hidebound industries.

This market structure, created ad hoc by court fiat but evidently highly successful, provided the template from which Congress drew in the mid-1990s to finally resolve the question of what telecom policy for the Internet era would look like.

Second Time Isn’t The Charm

Prior to the main event, there was one brief preliminary. The High Performance Computing Act of 1991 was important tactically, but not strategically. It advanced no new major policy initiatives. Its primary significance lay in providing additional funding and Congressional backing for what Wolff
and the NSF already were doing and intended to keep doing – providing networking services for the research community, subsidizing academic institutions’ connections to NSFNET, and continuing to upgrade the backbone infrastructure.  

Then came the accession of the 104th Congress in January 1995. Republicans took control of both the Senate and the House for the first time in forty years, and they came with an agenda to fight crime, cut taxes, shrink and reform government, and uphold moral righteousness. Gore and his allies had
long touted universal access as a key component of the National Information Infrastructure, but with this shift in power the prospects for a strong universal service component to telecommunications reform diminished from minimal to none. Instead, the main legislative course would consist of
regulatory changes to foster competition in telecommunications and Internet access, with a serving of bowdlerization on the side.

The market conditions looked promising. Circa 1992, the major players in the telecommunications industry were numerous. In the traditional telephone industry there were the seven RBOCs, GTE, and three large long-distance companies – AT&T, MCI, and Sprint – along with many smaller ones. The
new up-and-comers included Internet service providers such as UUNET and PSINet, as well as the IBM/MCI backbone spin-off ANS, and other companies trying to build out local fiber networks, such as Metropolitan Fiber Systems (MFS). BBN, the contractor behind ARPANET, had begun to build its
own small Internet empire, snapping up some of the regional networks that orbited around NSFNET – NEARnet in New England, BARRNet in the Bay Area, and SURAnet in the southeast of the U.S.

To preserve and expand this competitive landscape would be the primary goal of the 1996 Telecommunications Act, the only major rewrite of communications policy since the Communications Act of 1934. It intended to reshape telecommunications law for the digital age. The regulatory regime
established by the original act siloed industries by their physical transmission medium – telephony, broadcast radio and television, cable TV – each in its own box, with its own rules, and generally forbidden to meddle in the others’ business. As we have seen, sometimes regulators even
created silos within silos, segregating the long-distance and local telephone markets. This made less and less sense as media of all types were reduced to fungible digital bits, which could be commingled on the same optical fiber, satellite transmission, or ethernet cable.

The intent of the 1996 Act, shared by Democrats and Republicans alike, was to tear down these barriers, these “Berlin Walls of regulation”, as Gore’s own summary of the act put it.13 A complete itemization of the regulatory changes in this doorstopper of a bill is not possible here,
but a few examples provide a taste of its character. Among other things it:

  • allowed the RBOCs to compete in long-distance telephone markets,
  • lifted restrictions forbidding the same entity from owning both broadcasting and cable services,
  • axed the rules that prevented concentration of radio station ownership.

The risk, though, of simply removing all regulation, opening the floodgates and letting any entity participate in any market, was to recreate AT&T on an even larger scale, a monopolistic megacorp that would dominate all forms of communication and stifle all competitors. Most worrisome of all
was control over the so-called last mile – from the local switching office to the customer’s home or office. Building an inter-urban network connecting the major cities of the U.S. was expensive but not prohibitive; several companies had done so in recent decades, from Sprint to UUNET. To
replicate all the copper or cable to every home in even one urban area was another matter. Local competition in landline communications had scarcely existed since the early wildcat days of the telephone, when tangled skeins of iron wire criss-crossed urban streets. In the case of the Internet,
the concern centered especially on high-speed, direct-to-the-premises data services, later known as broadband. For years, competition had flourished among dial-up Internet access providers, because all the end user required to reach the provider’s computer was access to a dial tone. But this
would not be the case by default for newer services that did not use the dial telephone network.

The legislative solution to this conundrum was to create the concept of the “CLEC” – competitive local exchange carrier. The RBOCs, now referred to as “ILECs” (incumbent local exchange carriers), would be allowed full, unrestricted access to the long-distance market only once they had unbundled
their networks by allowing the CLECs, which would provide their own telecommunications services to homes and businesses, to interconnect with and lease the incumbents’ infrastructure. This would enable competitive ISPs and other new service providers to continue to get access to the local
loop even when dial-up service became obsolete – creating, in effect, a dial tone for broadband. The CLECs, in this model, filled the same role as the long-distance providers in the post-break-up telephone market. Able to freely interconnect at reasonable fees to the existing local phone
networks, they would inject competition into a market previously dominated by the problem of natural monopoly.

Besides the creation of the CLECs, the other major part of the bill that affected the Internet addressed the Republicans’ moral agenda rather than their economic one. Title V, known as the Communications Decency Act, forbade the transmission of indecent or offensive material – depicting or
describing “sexual or excretory activities or organs” – on any part of the Internet accessible to minors. This, in effect, was an extension of the obscenity and indecency rules that governed broadcasting into the world of interactive computing services.

How, then, did this sweeping act fare in achieving its goals? In most dimensions it proved a failure. Easiest to dispose of is the Communications Decency Act, which the Supreme Court struck down quickly (in 1997) as a violation of the First Amendment. Several parts of Title V did survive review,
however, including Section 230, the most important piece of the entire bill for the Internet’s future. It allows websites that host user-created content to exist without the fear of constant lawsuits, and protects the continued existence of everything from giants like Facebook and Twitter to
tiny hobby bulletin boards.

The fate of the efforts to promote competition within the local loop took longer to play out, but proved no more successful than the controls on obscenity. What about the CLECs, given access to the incumbent cable and telephone infrastructure so that they could compete on price and service
offerings? The law required FCC rulemaking to hash out the details of exactly what kind of unbundling had to be offered. The incumbents fought hard in the courts against any such ruling that would open up their lines to competition, repeatedly winning injunctions against the FCC, while threatening
that introducing competitors would halt their imminent plans for bringing fiber to the home.

Then, with the arrival of the Bush Administration and new chairman Michael Powell in 2001, the FCC became actively hostile to the original goals of the Telecommunications Act. Powell believed that the need for alternative broadband access would be satisfied by intermodal competition among cable,
telephone, power-line, cellular, and other wireless networks. No more FCC rules in favor of CLECs would be forthcoming. For a brief time around the year 2000, it was possible to subscribe to third-party high-speed Internet access using the infrastructure of your local telephone or
cable provider. After that, the most central of the Telecom Act’s pro-competitive measures became, in effect, a dead letter. The much-ballyhooed fiber-to-the-home began to reach a significant number of homes only after 2010, and then only with reluctance on the part of the
incumbents.14 As author Fred Goldstein put it, the incumbents had “gained a fig leaf of competition without accepting serious market share losses.”15

During most of the twentieth century, networked industries in the U.S. had sprouted in a burst of entrepreneurial energy and then been fitted into the matrix of a regulatory framework as they grew large and important enough to affect the public interest. Broadcasting and cable television had
followed this pattern. So had trucking and the airlines. But with the CLECs all but dead by the early 2000s, the Communications Decency Act struck down, and other attempts to control the Internet such as the Clipper chip16 stymied, the Internet would follow an opposite course.

Having come to life under the guiding hand of the state, it would now be allowed to develop in an almost entirely laissez-faire fashion. The NAP framework established by the NSF at the hand-off of the backbone would be the last major government intervention in the structure of the
Internet. This was true at both the transport layer (the networks, such as Verizon and AT&T, that transported raw data) and the applications layer (software services, from portals like Yahoo! to search engines like Google to online stores like Amazon). In our last chapter, we will look
at the consequences of this fact, briefly sketching the evolution of the Internet in the U.S. from the mid-1990s onward.


  1. Quoted in Richard Wiggins, “Al Gore and the Creation of the Internet,” 2000.
  2. “Remarks by Vice President Al Gore at National Press Club,” December 21, 1993.
  3. Biographical details on Wolff’s life prior to NSF are scarce – I have recorded all of them that I could find here. Notably I have not been able to find even his date and place of birth.
  4. Schrader and PSINet rode high on the Internet bubble in the late 1990s, acquiring other businesses aggressively, and, most extravagantly, purchasing the naming rights to the football stadium of the NFL’s newest expansion team, the Baltimore Ravens. Schrader tempted fate with a 1997 article
    entitled “Why the Internet Crash Will Never Happen.” Unfortunately for him, it did happen, bringing about his ouster from the company in 2001 and PSINet’s bankruptcy the following year.
  5. To get a sense of how fast the cost of bandwidth was declining – in the mid-1980s, leasing a T1 line from New York to L.A. would cost $60,000 per month. Twenty years later, an OC-3 circuit with 100 times the capacity cost only $5,000, more than a thousand-fold reduction in price per capacity.
    See Fred R. Goldstein, The Great Telecom Meltdown, 95-96. Goldstein states that the 1.544 Mbps T1/DS1 line has 1/84th the capacity of OC-3, rather than 1/100th, a discrepancy I can’t account for. But this has little effect on the overall math.
  6. Office of Inspector General, “Review of NSFNET,” March 23, 1993.
  7. Fraser, “NSFNET: A Partnership for High-Speed Networking, Final Report”, 27.
  8. Brian Kahin, “RFC 1192: Commercialization of the Internet Summary Report,” November 1990.
  9. John Markoff, “Data Network Raises Monopoly Fear,” New York Times, December 19, 1991.
  10. Many other technical details had to be sorted out along the way; see Susan R. Harris and Elise Gerich, “Retiring the NSFNET Backbone Service: Chronicling the End of an Era,” ConneXions, April 1996.
  11. The most problematic part of privatization proved to have nothing to do with the hardware infrastructure of the network, but instead with handing over control over the domain name system (DNS). For most of its history, its management had depended on the judgment of a single man – Jon Postel.
    But businesses investing millions in a commercial internet would not stand for such an ad hoc system. So the government handed control of the domain name system to a contractor, Network Solutions. The NSF had no real mechanism for regulatory oversight of DNS (though they might have
    done better by splitting the control of different top-level domains (TLDs) among different contractors), and Congress failed to step in to create any kind of regulatory regime. Control changed once again in 1998 to the non-profit ICANN (Internet Corporation for Assigned Names and Numbers),
    but the management of DNS still remains a thorny problem.
  12. The only quasi-exception to this focus on fostering competition was a proposal by Senator Daniel Inouye to reserve 20% of Internet traffic for public use: Steve Behrens, “Inouye Bill Would Reserve Capacity on Infohighway,” Current, June 20, 1994. Unsurprisingly, it went nowhere.
  13. Al Gore, “A Short Summary of the Telecommunications Reform Act of 1996”.
  14. Jon Brodkin, “AT&T kills DSL, leaves tens of millions of homes without fiber Internet,” Ars Technica, October 5, 2020.
  15. Goldstein, The Great Telecom Meltdown, 145.
  16. The Clipper chip was a proposed encryption chipset with a built-in government backdoor: encryption keys would be held in escrow so that the government could decrypt any communications secured by the chip.

Further Reading

Janet Abbate, Inventing the Internet (1999)

Karen D. Fraser, “NSFNET: A Partnership for High-Speed Networking, Final Report” (1996)

Shane Greenstein, How the Internet Became Commercial (2015)

Yasha Levine, Surveillance Valley: The Secret Military History of the Internet (2018)

Rajiv Shah and Jay P. Kesan, “The Privatization of the Internet’s Backbone Network,” Journal of Broadcasting & Electronic Media (2007)


The Hobby Computer Culture

[This post is part of “A Bicycle for the Mind.” The complete series can be found here.] From 1975 through early 1977, the use of personal computers remained almost exclusively the province of hobbyists who loved to play with computers and found them inherently fascinating. When BYTE magazine came out with its premier issue in 1975, the cover called computers “the world’s greatest toy.” When Bill Gates wrote about the value of good software in the spring of 1976, he framed his argument in terms of making the computer interesting, not useful: “…software makes the difference between a computer being a fascinating educational tool for years and being an exciting enigma for a few months and then gathering dust in the closet.”[1] Even as late as 1978, an informed observer could still consider interest in personal computers to be exclusive to a self-limiting community of hobbyists. Jim Warren, editor of Dr. Dobb’s Journal of Computer Calisthenics and Orthodontia, predicted a maximum market of one million home computers, expecting them to be somewhat more popular than ham radio, which attracted about 300,000.[2] A survey conducted by BYTE magazine in late 1976 shows that these hobbyists were well-educated (72% had at least a bachelor’s degree), well-off (with a median annual income of $20,000, or $123,000 in 2025 dollars), and overwhelmingly (99%) male. Based on the letters and articles appearing in BYTE in that same centennial year of 1976, it is clear that what interested these hobbyists above all was the computers themselves: which one to buy, how to build it, how to program it, how to expand it and to accessorize it.[3] Discussion of practical software applications appeared infrequently. One intrepid soul went so far as to hypothesize a microcomputer-based accounting program, but he doesn’t seem to have actually written it. When  mention of software appeared it came most often in the form of games. The few with more serious scientific and statistical work in mind for their home computer complained of the excessive discussion of “super space electronic hangman life-war pong.” Star Trek games were especially popular:  In July, D.E. Hipps of Miami advertised a Star Trek BASIC game for sale for $10; in August, Glen Brickley of Florissant, Missouri wrote about demoing his “favorite version of Star Trek” for friends and neighbors; and in August, BYTE published, with pride, “the first version of Star Trek to be printed in full in BYTE” (though the author consistently misspelled “phasers” as “phasors”). Most computer hobbyists were electronic hobbyists first, and the electronics hobby grew up side-by-side with modern science fiction, and shared its fascination with the possibilities of future technology. We can guess that this is what drew them to this rare piece of popular culture that took the future and the “what-ifs” it poses seriously, rather than treating it as a mere backdrop for adventure stories.[4] The June 1976 issue of Interface is one of many examples of the hobbyists’ ongoing fascination with Star Trek. Other than a shared interest in computers—and, apparently, Star Trek—three kinds of organizations brought these men together: local clubs, where they could share expertise in software and hardware and build a sense of belonging and community; magazines like BYTE where they could learn about new products and get project ideas; and retail stores, where they could try out the latest models and shoot the shit with fellow enthusiasts. 
The computer hobbyists were also bound by a force more diffuse than any of these concrete social forms: a shared mythology of the origins of hobby computing that gave broader social and cultural meaning to their community. The Clubs The most famous computer club of all, of course, is the Homebrew Computer Club, headquartered in Silicon Valley, whose story is well documented in several excellent sources, especially Steven Levy’s book, Hackers. Its fame is well-deserved, for its role as the incubator of Apple Computer, if nothing else. But the focus of the historical literature on Homebrew as the computer club has tended to distort the image of American personal computing as a whole. The Homebrew Computer Club had a distinctive political bent, due to the radical left leanings of many of its leading members, including co-founder Fred Moore. In 1959, Moore had gone on hunger strike against the Reserve Officers’ Training Corps (ROTC) program at Berkeley, which had been compulsory for all students since the nineteenth century. He later became a draft resister and published a tract against institutionalized learning, Skool Resistance. Yet even the bulk of Homebrew’s membership stubbornly stuck to technical hobbyist concerns, despite Moore’s efforts to turn their attention to social causes such as aiding the disabled or protesting nuclear weapons. To the extent that personal computing had a politics, it was a politics of independence, not social justice.[5] Cover of the second Homebrew Computer Club newsletter, with sketches of members. Only Fred Moore is labeled, but the man with glasses on the far right is likely Lee Felsenstein. Moreover, excitement about personal computing was not at all a phenomenon confined to the Bay Area. By the summer of 1975, Altair shipments had begun in earnest, and clubs formed across the United States and beyond where enthusiasts could share information and ask for help with their new (or prospective) machines. The movement continued to grow as new companies sprang up and shipped more hobby machines. Over the course of 1976, dozens of clubs advertised their existence or attempted to find a membership through classifieds in BYTE, from the Oregon Computer Club headquartered in Portland (with a membership of forty-nine), to a proposed club in Saint Petersburg, Florida, mooted by one Allen Swan. But, as one might expect, the largest and most successful clubs were concentrated in and around major metropolitan areas with a large pool of existing computer professionals, such as Los Angeles, Chicago, and New York City.[6] The Amateur Computer Group of New Jersey convened for the first time in June 1975, in under the presidency of Sol Libes. Libes, a professor at Union County College, was another of those computer lovers working on their own home computers for years before the arrival of the Altair, who then suddenly found themselves joined by hundreds of like-minded hobbyists once computing became somewhat more accessible. Libe’s club grew to 1,600 members by the early 1980s, had a newsletter and software library, sponsored the annual Trenton Computer Festival, and is likely the only organization from the hobby computer years other than Apple and Microsoft to still survive today.[7] The Chicago Area Computer Hobbyist Exchange attracted several hundred members to its first meeting at Northwestern University in the summer of 1975. 
Like many of the larger clubs, they organized information exchange around “special interest groups” for each brand of computer (Digital Group, IMSAI, Altair, etc.). The club also gave birth to one of the most significant novel software applications to emerge from the personal computer hobby, the bulletin board system—we will have more to say on that later in this series.[8] The most ambitious—one might say hubristic—of the clubs was the Southern California Computer Society (SCCS) of Los Angeles, founded in Don Tarbell’s apartment in June of 1975. Within the year the club could boast of a glossy club magazine(in contrast to the cheap newsletters of most clubs) called Interface, plans to develop a public computer center, and—in answer to the challenge of Micro-Soft BASIC—ideas about distributing their own royalty-free program library, including “’branch’ repositories that would reproduce and distribute on a local basis.”[9] Not content with a regional purview, the leadership also encouraged the incorporation of far-flung club chapters into their organization; in that spirit, they changed their name in early 1977 to the International Computer Society. Several chapters opened in California, and more across the U.S, from Minnesota to Virginia, but interest in SCCS/ICS chapters could be found as far away as Mexico City, Japan, and New Zealand. Across all of these chapters, the group accumulated about 8,000 members.[10] The whole project, however, ran atop a rickety foundation of amateur volunteer work, and fell apart under its own weight. First came the breakdown in the relationship between the club and the publisher of Interface, Bob Jones. Whether frustrated with the club’s failure to deliver articles to fill the magazine (his version), or greedy to make more money as a for-profit enterprise (the club’s version), Jones broke away to create Interface Age, leaving SCCS scrambling to start up its own replacement magazine. Expensive lawsuits flew in both directions. Then came the mismanagement of the club’s group buy program: intended to save members money by pooling their purchases into a large-scale order with volume discounts, it instead lost thousands of members’ dollars to a scammer: “a vendor,” as one wry commenter put it “who never vended” (the malefactor traded under the moniker of “Colonel Winthrop.”)[11] The December 1976 issues of SCCS Interface and Interface Age. Which is authentic, and which the impostor? More lawsuits ensued. Squeezed by money troubles, the club leadership raised dues to $15 annually, and sent out a plea for early renewal and prepayment of multiple years’ dues. The club magazine missed several issues in 1977, then ceased publication in September. The ICS sputtered on into 1978 (Gordon French of Processor Technology announced his candidacy for the club presidency in March), then disappeared from the historical record.[12] Whatever the specific historic accidents that brought down SCCS, the general project—a grand non-profit network that would provide software, group buying programs and other forms of support to its members—was doomed by larger historical forces. Though many clubs survived into the 1980s or beyond, they waned in significance with the maturing of commercial software and the turn of personal computer sellers away from hobbyists and towards the larger and more lucrative consumer and business markets. 
Newer computer products no longer required access to secret lore to figure out what to do with them, and most buyers expected to get any support they did need from a retailer or vendor, not to rely on mutual support networks of other buyers. One-to-one commercial relations between buyer and seller became more common than the many-to-many communal webs of the hobby era. The Retailers The first buyers of Altair could not find it in any shop. Every transaction occurred via a check sent to MITS, sight unseen, in the hopes of receiving a computer in exchange. This way of doing businesses suited the hardcore enthusiast just fine, but anyone with uncertainty about the product—whether they wanted a computer at all, which model was best, how much memory or other accessories they needed—was unlikely to bite. It had disadvantages for the manufacturer, too. Every transaction incurred overhead for payment processing and shipping, and demand was uncertain and unpredictable week to week and month to month. Without any certainty about how many buyers would send in checks next month, they had to scale up manufacturing carefully or risk overcommitting and going bust. Retail computer shops would alleviate the problems of both sides of the market. For buyers, they provided the opportunity to see, touch, and try out various computer models, and get advice from knowledgeable salespeople. For sellers, they offered larger, more predictable orders, improving their cash flow and reducing the overhead of managing direct sales. The very first computer shop appeared around the same time when the clubs began spreading, in the summer of 1975. But they did not open in large numbers until 1976, after the hardcore enthusiasts had primed the pump for further sales to those who had seen or heard about the computers being purchased by their friends or co-workers. The earliest documented computer shop, Dick Heiser’s Computer Store, opened in July 1975 in a 1,000-square-foot store front on Pico Boulevard in West Los Angeles. Heiser had attended the very first SCCS meeting in Don Tarbell’s apartment, and, seeing the level of excitement about Altair, signed up to become the first licensed Altair dealer. Paul Terrell’s Byte Shop followed later in the year in Mountain View, California. In March of 1976, Stan Veit’s Computer Mart opened on Madison Avenue in New York City and Roy Borrill’s Data Domain in Bloomington, Indiana (home to Indiana University). Within a year, stores had sprouted across the United States like spring weeds: five hundred nation-wide by July 1977.[13] Paul Terrell’s Byte Shop at 1063 El Camino Real in Mountain View. Ed Roberts tries to enforce an exclusive license on Altair dealers, based on the car dealership franchise model. But the industry was too fast-moving and MITS too cash- and capital-strapped to make this workable. Hungry new competitors, from IMSAI to Processor Technology, entered the market constantly with new-and-improved models. Many buyers weren’t satisfied with only Altair offerings, MITS couldn’t supply dealers with enough stock to satisfy those who were, and they undercut even their few loyal dealers by continuing to offer direct sales in order to keep as much cash as possible flowing in. Even Dick Heiser, founder of the original Los Angeles Computer Store, broke ties with MITS in late 1977, unable to sustain an Altair-only partnership.[14] Dick Heiser with a customer at The Computer Store in Los Angeles in 1977. 
Not only is the teen here playing a Star Trek game, a picture of the ubiquitous starship Enterprise can be seen hanging in the background. [Photo by George Birch, from Benj Edwards, “Inside Computer Stores of the 1970s and 1980s,” July 13, 2022] Given the number of competing computer makers, retailers ultimately had the stronger position in the relationship. Manufacturers who could satisfy the desires of the stores for reliable delivery of stock and robust service and customer support would thrive, while the others withered.[15] But independent dealers faced competition of their own. Chain stores could extract larger volume discounts from manufacturers and build up regional or even national brand recognition. Byte Shop, for example, expanded to fifty locations by March 1978. The most successful chain was ComputerLand, run by the same Bill Millard who had founded IMSAI. Though he later claimed everything was “clean and appropriate,” Millard clearly extracted money and employee time from the declining IMSAI in order to get his new enterprise off the ground. As the company’s chronicler put it, “There was magic in ComputerLand. Started on just Milliard’s $10,000 personal investment, losing $169,000 in its maiden year, the fledgling company required no venture capital or bank loans to get off the ground.” Some small dealers, such as Veit’s Computer Mart, responded by forming a confederacy of independent dealers under a shared front called “XYZ Corporation” that they could use to buy computers with volume discounts.[16] A ComputerLand ad from the February 1978 issue of BYTE. Note that the store offers many of the services that most people could have only found in a club in 1975 or 1976: assistance with assembly, repair, and programming. The Publishers Just like manufacturers, retailers faced their own cash flow risks: outside the holiday season they might suffer from long dry spells without many sales. The early retailers typically solved this by simply not carrying inventory: they took customer orders until they accumulated a batch of ten or so computers from the same manufacturer, then filled all of the orders at once. But a big boon for their cash flow woes came in the form of publications that sold for much less than a computer, but at a much higher and steadier volume, especially the rapidly growing array of computer magazines.[17] BYTE was both the first of the national computer magazines, and the most successful. Launched in New Hampshire in the late summer of 1975, by 1978 it built up a circulation of 140,000 issues per month. It got a head start by cribbing thousands of addresses from the mailing lists of manufacturers such as Nat Wadsworth’s Connecticut-based SCELBI, one of the proto-companies of the pre-Altair era. But, like so much of the hobby computer culture, BYTE also had direct ancestry in the radio electronics hobby.[18] Conflict among the three principal actors has muddled the story of its origins. Wayne Green, publisher of a radio hobby magazine called 73 in Peterborough, New Hampshire, started printing articles about computers in 1974, and found that they were wildly popular. Virginia Londner Green, his ex-wife, worked at the magazine as a business manager. Carl Helmers, a computer enthusiast in Cambridge, Massachusetts, authored and self-published a newsletter about home computers. 
One of the Greens learned of Helmers’ newsletter, and one or more of the three came up with the idea of combining Helmers’ computer expertise with the infrastructure and know-how from 73 to launch a professional-quality computer hobby magazine.[19] The cover of BYTE‘s September 1976 0.01-centennial issue (i.e., one year anniversary). The phrase “cyber-crud” and the image of a fist on the shirt of the man at center both come from Ted Nelson’s Computer Lib/Dream Machines. Also, these people really liked Star Trek. Within months, for reasons that remain murky, Wayne Green found himself ousted by his ex-wife, who took over publishing of BYTE, with Helmers as editor. Embittered, Green launched a competing magazine, which he wanted to call Kilobyte, but was forced to change to Kilobaud. Thus began a brief period in which Peterborough, with a population of about 4,000, served as a global hub of computer magazine publishing.[20] Another magazine, Personal Computing, spun off from MITS in Albuquerque. Dave Bunnell, hired as a technical writer, had become so fond of running the company newsletter Computer Notes, that he decided to go into publishing on his own. On the West Coast, in addition to the aforementioned Interface Age, there was also Dr. Dobb’s Journal of Computer Calisthenics and Orthodontia—conceived by Stanford lecturer Dennis Allison and computer evangelist Bob Albrecht (Dennis and Bob making “Dobb”), and edited by the hippie-ish Jim Warren, who drifted into computers after being fired from a position teaching math at a Catholic school for holding (widely-publicized) nude parties. Bunnell (right) with Bill Gates. This photo probably dates to sometime in the early 1980s. Computer books also went through a publishing boom. Adam Osborne, born to British parents in Thailand and trained as a chemical engineer, began writing texts for computer companies after losing his job at Shell Oil in California. When Altair arrived, it shook him with the same sense of revelation that so many other computer lovers had experienced. He whipped out a new book, Introduction to Microcomputers, and put it out himself when his previous publishers declined to print it. A highly technical text, full of details on Boolean logic and shift registers, it nonetheless sold 20,000 copies within a year to buyers eager for any information to help them understand and use their new machines.[21] The magazines served several roles. They offered up a cornucopia of content to inform and entertain their readers: industry news, software listings, project ideas, product announcements and reviews, and more. One issue of Interface Age even came with a BASIC implementation inscribed onto a vinyl record, ready to be loaded directly into a computer as if from a cassette reader. The magazines also provided manufacturers with a direct advertising and sales channel to thousands of potential buyers—especially important for smaller makers of computers or computer parts and accessories, whose wares were unlikely to be found in your local store. Finally, they became the primary texts through with the culture of the computer hobbyist was established and promulgated.[22] Each of the magazines had its own distinctive character and personality. BYTE was the magazine for the established hobbyist and tried to cover it all: hardware, software, community news, book reviews, and more. But the hardcore libertarian streak of founding editor Carl Helmers (an avid fan of Ayn Rand) also shone through in the slant of some of its articles. 
Wayne Green’s Kilobaud, with its spartan cover (title and table of contents only), appealed especially to those with an interest in starting a business to make money off of their interest in computers. The short-lived ROM spoke to the humanist hobbyist, offering longer reports and think-pieces. Dr. Dobb’s had an amateur, free-wheeling aesthetic and tone not far removed from an underground newsletter. In keeping with its origins as a vehicle to publish Tiny BASIC (a free alternative to Microsoft BASIC), it focused on software listings. Creative Computing also had a software bent, but as a pre-Altair magazine designed to target users of BASIC in schools and universities, it took a more lighthearted and less technical tone, while Bunnell’s Personal Computing opened its arms to the beginner, with the message that computing was for everyone.[23]

The Mythology of the Microcomputer

Running through many of these early publications can be found a common narrative, a mythology of the microcomputer. To dramatize it: Until recently, darkness lay over the world of computing. Computers, a font of intellectual power, had served the interests only of the elite few. They lay solely in the hands of large corporate and government bureaucracies. Worse yet, even within those organizations, an inner circle of priests mediated access to the machine: the ordinary layperson could not be allowed to approach it. Then came the computer hobbyist. A Prometheus, a Martin Luther, and a Thomas Jefferson all wrapped into one, he ripped the computer and the knowledge of how to use it from the hands of the priests, sharing freedom and power with the masses.

The “priesthood” metaphor came from Ted Nelson’s 1974 book, Computer Lib/Dream Machines, but became a powerful means for the post-Altair hobbyist to define himself against what came before. The imagery came to BYTE magazine in an October 1976 article by Mike Wilbur and David Fylstra:

The movement towards personalized and individualized computing is an important threat to the aura of mystery that has surrounded the computer for its entire history. Until now, computers were understood by only a select few who were revered almost as befitted the status of priesthood.[24]

In this cartoon from Wilbur and Fylstra’s article on the “computer priesthood,” the sinister “HAL” (aka IBM) finds himself chagrined by the spread of hobby computerists.

BYTE editor Carl Helmers made the historical connection with the Enlightenment explicit:

Personal computing as practiced by large numbers of people will help end the concentration of apparent power in the “in” group of programmers and technicians, just as the enlightenment and renaissance in Europe brought about a much wider understanding beginning in the 14th century.[25]

The notion that computing had been jealously guarded by the powerful and kept away from the people can be found as early as June 1975, in the pages of the Homebrew Computer Club newsletter. In the words of club co-founder Fred Moore:

The evidence is overwhelming the people want computers… Why did the Big Companies miss this market? They were busy selling overpriced machines to each other (and the government and military). They don’t want to sell directly to the public.[26]
In the first collected volume of Dr. Dobb’s Journal, editor Jim Warren sounded the same theme of a transition from exclusivity to democracy in more eloquent language:

…I slowly come to believe that the massive information processing power which has traditionally been available only to the rich and powerful in government and large corporations will truly become available to the general public. And, I see that as having a tremendous democratizing potential, for most assuredly, information–ability to organize and process it–is power. …This is a new and different kind of frontier. We are part of the small cadre of frontiersmen who are exploring this new frontier.[27]

Personal Computing editor Dave Bunnell further emphasized the potential for the computer as a political weapon against entrenched bureaucracy:

…personal computers have already proliferated beyond most government regulation. People already have them, just like (pardon the analogy) people already have hand guns. If you have a computer, use it. It is your equalizer. It is a way to organize and fight back against the impersonal institutions and the catch-22 regulations of modern society.[28]

The journalists and social scientists who began to write the first studies of the personal computer in the mid-1980s lapped up this narrative, which provided a heroic framing for the protagonists of their stories. They gave it new life and a much broader audience in books like Silicon Valley Fever (“Until the mid-1970s when the microcomputer burst on the American scene, computers were owned and operated by the establishment–government, big corporations, and other large institutions”) and Fire in the Valley (“Programmers, technicians, and engineers who worked with large computers all had the feeling of being ‘locked out’ of the machine room… there also developed a ‘computer priesthood’… The Altair from MITS breached the machine room door…”)[29]

This way of telling the history of the hobby computer gave deeper meaning to a pursuit that looked frivolous on the surface: paying thousands of dollars for a machine to play Star Trek. And, like most myths, it contained elements of truth. There was a large installed base of batch-processing systems, surrounded by a contingent of programmers denied direct access to the machine. Between the two there did stand a group of technicians whose relation to the computer was not unlike the relation of the pre-Vatican II priest to the Eucharist. But in promoting this myth, the computer hobbyists denied their own parentage, obscuring the time-sharing and minicomputer cultures that had made the hobby computer possible and from which it had borrowed most of its ideas. The Altair was not an ex nihilo response to an oppressive IBM batch-processing culture that had made access to computers impossible. The announcement of Altair had called it the “world’s first minicomputer kit”: it was the fulfillment of the dream of owning your own minicomputer, a type of computer most of its buyers had already used. It could not have been successful if thousands of people hadn’t already gotten hooked on the experience of interacting directly with a time-sharing system or minicomputer.

This self-confident hobby computer culture, however—with its clubs, its local shops, its magazines, and its myths—would soon be subsumed by a larger phenomenon. From this point forward, no longer will nearly every major character in the story of the personal computer have a background in hobby electronics or ham radio.
No longer will nearly all of the computer makers and buyers be computer lovers who found their passion on mainframe, minicomputer, or time-sharing systems. In 1977, the personal computer entered a new phase of growth, led by a new class of businessmen who targeted the mass market.

From ACS to Altair: The Rise of the Hobby Computer

[This post is part of “A Bicycle for the Mind.” The complete series can be found here.] The Early Electronics Hobby A certain pattern of technological development recurred many times in the decades around the turn of the twentieth century: a scattered hobby community, tinkering with a new idea, develops it to the point where those hobbyists can sell it as a product. This sets off a frenzy of small entrepreneurial firms, competing to sell to other hobbyists and early adopters. Finally, a handful of firms grow to the point where they can drive down costs through economies of scale and put their smaller competitors out of business. Bicycles, automobiles, airplanes, and radio broadcasting all developed more or less in this way. The personal computer followed this same pattern; indeed, it marks the very last time that a “high-tech” piece of hardware emerged from this kind of hobby-led development. Since that time, new hardware technology has typically depended on new microchips. That is a capital barrier far too high for hobbyists to surmount; but as we have seen, the computer hobbyists lucked into ready-made microchips created for other reasons, but already suited to their purposes. The hobby culture that created the personal computer was historically continuous with the American radio hobby culture of the early twentieth-century, and, to a surprising degree, the foundations of that culture can be traced back to the efforts of one man: Hugo Gernsback. Gernsback (born Gernsbacher, to well-off German Jewish parents) came to the United States from Luxembourg in 1904 at the age of nineteen, shortly after his father’s death. Already fascinated by electrical equipment, American culture, and the fiction of Jules Verne and H.G. Wells, he started a business, the Electro Importing Company, in Manhattan, that offered both retail and mail-order sales of radios and related equipment. His company catalog evolved into a magazine, Modern Electrics, and Gernsback evolved into a publisher and community builder (he founded the Wireless Association of America in 1909 and the Radio League of America in 1915), a role he relished for the rest of his working life.[1] Gernsback (foreground) giving an over-the-air lecture on the future of radio. From his 1922 book, Radio For All, p. 229. The culture that Gernsback nurtured valued hands-on tinkering and forward-looking futurism, and in fact viewed them as two sides of the same coin. Science fiction (“scientifiction,” as Gernsback called it) writing and practical invention went hand in hand, for both were processes for pulling the future into the present. In a May 1909 article in Modern Electrics, for example, Gernsback opined on the prospects for radio communication with Mars: “If we base transmission between the earth and Mars at the same figure as transmission over the earth, a simple calculation will reveal that we must have the enormous power of 70,000 K. W. to our disposition in order to reach Mars,” and went on to propose a plan for building such a transmitter within the next fifteen or twenty years. As science fiction emerged as its own genre with its own publications in the 1920s (many of them also edited by Gernsback), this kind of speculative article mostly disappeared from the pages of electronic hobby magazines. Gernsback himself occasionally dropped in with an editorial, such as a 1962 piece in Radio-Electronics on computer intelligence, but the median electronic magazine article had a much more practical focus. 
Readers were typically hobbyists looking for new projects to build or service technicians wanting to keep up with the latest hardware and industry trends.[2] Nonetheless, the electronic hobbyists were always on the lookout for the new, for the expanding edge of the possible: from vacuum tubes, to televisions, to transistors, and beyond. It’s no surprise that this same group would develop an early interest in building computers. Nearly everyone who we find building (or trying to build) a personal or home computer prior to 1977 had close ties to the electronic hobby community. The Gernsback story also highlights a common feature of hobby communities of all sorts. A subset of radio enthusiasts, seeing the possibility of making money by fulfilling the needs of their fellow hobbyists, started manufacturing businesses to make new equipment for hobby projects, retail businesses to sell that equipment, or publishing businesses to keep the community informed on new equipment and other hobby news. Many of these enterprises made little or no money (at least at first), and were fueled as much by personal passion as by the profit motive; they were the work of hobby-entrepreneurs. It was this kind of hobby-entrepreneur who would first make personal computers available to the public. The First Personal Computer Hobbyists The first electronic hobbyist to take an interest in building computers, whom we know of, was Stephen Gray. In 1966, he founded the Amateur Computer Society (ACS), an organization that existed mainly to produce a series of quarterly newsletters typed and mimeographed by Gray himself. Gray has little to say about his own biography in the newsletter or in later reflections on the ACS. He reveals that he worked as an editor of the trade magazine Electronics, that he lived in Manhattan and then Darien, Connecticut, that he had been trying to build a computer of his own for several years, and little else. But he clearly knew the radio hobby world. In the fourth, February 1967, number of his newsletter, he floated the idea of a “Standard Amateur Computer Kit” (SACK) that would provide an economical starting point for new hobbyists, writing that,[3] Amateur computer builders are now much like the early radio amateurs. There’s a lot of home-brew equipment, much patchwork, and most commercial stuff is just too expensive. The ACS can help advance the state of the amateur computer art by designing a standard amateur computer, or at least setting up the specs for one. Although the mere idea of a standard computer makes the true blue home-brew types shudder, the fact is that amateur radio would not be where it is today without the kits and the off-the-shelf equipment available.[4] By the Spring of 1967, Gray had found seventy like-minded members through advertisements in trade and hobby publications, most of them in the United States, but a handful in Canada, Europe, and Japan. We know little about the backgrounds or motivations of these men (and they were exclusively men), but when their employment is mentioned, they are found at major computer, electronics, or aerospace firms; at national labs; or at large universities. We can surmise that most worked with or on computers as part of their day job. A few letter writers disclose prior involvement in hobby electronics and radio, and from the many references to attempts to imitate the PDP-8 architecture, we can also guess that many members had some association with DEC minicomputer culture. 
It is speculative but plausible to guess that the 1965 release of the PDP-8 might have instigated Gray’s own home computer project and the later creation of the ACS. Its relatively low price, compact size, and simple design may have catalyzed the notion that home computers lay just out of reach, at least for Gray and his band of like-minded enthusiasts.

Whatever their backgrounds and motivations, the efforts of these amateurs to actually build a computer proved mostly fruitless in these early years. The January 1968 newsletter reported a grand total of two survey respondents who possessed an actual working computer, though respondents as a whole had sunk an average of two years and $650 into their projects ($6,000 in 2024 dollars). The problem of assembling one’s own computer would daunt even the most skilled electronic hobbyist: no microprocessors existed, nor any integrated circuit memory chips, and indeed virtually no chips of any kind, at least at prices a “homebrewer” could afford. Both of the two complete computers reported in the survey were built from hand-wired transistor logic. One was constructed from the parts of an old nuclear power system control computer, PRODAC IV. Jim Sutherland took the PRODAC’s remains home from his work at Westinghouse after its retirement, and re-dubbed it the ECHO IV (for Electronic Computing Home Operator). Though technically a “home” computer, to borrow an existing computer from work was not a path that most would-be home-brewers could follow. This hardly had the makings of a technological revolution. The other complete “computer,” the EL-65 by Hans Ellenberger of Switzerland, on the other hand, was in truth an electronic desktop calculator; it could perform arithmetic ably enough, but could not be programmed.[5]

The Emergence of the Hobby-Entrepreneur

As integrated circuit technology got better and cheaper, the situation for would-be computer builders gradually improved. By 1971, the first, very feeble, home computer kits appeared on the market, the first signs of Gray’s “SACK.” Though neither used a microprocessor, they took advantage of the falling prices of integrated circuits: the CPU of each consisted of dozens of small chips wired together. The first was the National Radio Institute (NRI) 832, the hardware accompaniment to a computer technician course disseminated by the NRI, and priced at about $500. Unsurprisingly, the designer, Lou Frenzel, was a radio hobby enthusiast, and a subscriber to Stephen Gray’s ACS Newsletter. But the NRI 832 is barely recognizable as a functional computer: it had a measly sixteen 8-bit words of read-only memory, configured by mechanical switches (with an additional sixteen bytes of random-access memory available for purchase).[6]

The NRI 832. The switches on the left were used to set the values of the bits in the tiny memory. The banks of lights at the top left and right, showing the binary values of the program counter and accumulator, were the only form of output [vintagecomputer.net].
The $750 Kenbak-1 that appeared the same year was nominally more capable, with 256 bytes of memory, though implemented with shift-register chips (accessible one bit at a time), not random-access memory. Indeed, the entire machine had a serial-processing architecture, processing only one bit at a time through the CPU, and ran at only about 1,000 instructions per second—very slow for an electronic computer. Like the NRI 832, it offered only switches as input and only a small panel of display lights for showing register contents as output. Its creator, John Blankenbaker, was a radio lover from boyhood before enrolling as an electronics technician in the Navy. He began working on computers in the 1950s, starting with the Bureau of Standards SEAC. Intrigued by the possibility of bringing a computer home, he tinkered with spare parts for making his own computer for years, becoming his own private ACS. By 1971 he thought he had a saleable device that could be used for teaching programming, and he formed the eponymous “Kenbak” company to sell it.[7]

Blankenbaker was the first of the amateur computerists to try to bring his passion to market; the first hobby-entrepreneur of the personal computer. He was not the most successful. I found no records of the sales of the NRI 832, but by Blankenbaker’s own testimony, only forty-four Kenbak-1s were sold. Here were home computer kits readily available at a reasonable price, four years before Altair. Why did they fall flat? As we have seen, most members of the Amateur Computer Society had aimed to make a PDP-8 or something like it; this was the most familiar computer of the 1960s and early 1970s, and provided the mental model for what a home computer could and should be. The NRI 832 and Kenbak-1 came nowhere close to the capabilities of a PDP-8, nor were they designed to be extensible or expandable in any way that might allow them to transcend their basic beginnings. These were not machines to stir the imaginative loins of the would-be home computer owner.

Hobby-Entrepreneurship in the Open

These early, halting steps towards a home computer, from Stephen Gray to the Kenbak-1, took place in the shadows, unknown to all but a few, the hidden passion of a handful of enthusiasts exchanging hand-printed newsletters. But several years later, the dream of a home computer burst into the open in a series of stories and advertisements in major hobby magazines. Microprocessors had become widely available. For those hooked on the excitement of interacting one-on-one with a computer, the possibility of owning their own machine felt tantalizingly close. A new group of hobby-entrepreneurs now tried to make their mark by providing computer kits to their fellow enthusiasts, with rather more success than NRI and Kenbak.

The overture came in the fall of 1973, with Don Lancaster’s “TV Typewriter,” featured on the cover of the September issue of Radio-Electronics (a Gernsback publication, though Gernsback himself was, by then, several years dead). Lancaster, like most of the people we have met in this chapter, was an amateur “ham” radio operator and electronics tinkerer. Though he had a day job at Goodyear Aerospace in Phoenix, Arizona, he figured out how to make a few extra bucks from his hobby by publishing projects in magazines and selling pre-built circuit boards for those projects via a Texas hobby firm called Southwest Technical Products (SWTPC).

The 1973 Radio-Electronics TV Typewriter cover.
His TV Typewriter was, of course, not a computer at all, but the excitement it generated certainly derived from its association with computers. One of many obstacles to a useful home computer was the lack of a practical output device: something more useful than the handful of glowing lights that the Kenbak-1 sported, but cheaper and more compact than the then-standard computer input/output device, a bulky teletype terminal. Lancaster’s electronic keyboard, which required about $120 in parts, could hook up to an ordinary television and turn it into a video text terminal, displaying up to sixteen lines of thirty-two characters each. Shift registers continued to be the only cheap form of semiconductor memory, and so that was what Lancaster used for storing the characters to be displayed on screen. Lancaster gave the parts list and schematic for the TV Typewriter away for free, but made money by selling pre-built subassemblies via SWTPC that saved buyers time and effort, and by publishing guidebooks like the TV Typewriter Cookbook.[8]

The next major landmark appeared six months later in a ham radio magazine, QST, named after the three-letter ham code for “calling all stations.” A small ad touted the availability of “THE TOTALLY NEW AND THE VERY FIRST MINI-COMPUTER DESIGNED FOR THE ELECTRONIC/COMPUTER HOBBYIST” with kit prices as low as $440. This was the SCELBI-8H, the first computer kit based around a microprocessor, in this case the Intel 8008. Its creator, Nat Wadsworth, lived in Connecticut, and became enthusiastic about the microprocessor after attending a seminar given by Intel in 1972, as part of his job as an electrical engineer at an electronics firm. Wadsworth was another ham radio enthusiast, and already enough of a personal computing obsessive to have purchased a surplus DEC PDP-8 at a discount for home use (he paid “only” $2,000, about $15,000 in 2024 dollars). Since his employer did not share his belief in the 8008, he looked for another outlet for his enthusiasm, and teamed up with two other engineers to develop what became the SCELBI-8H (for SCientific ELectronic BIological). Their ads drew thousands of responses and hundreds of orders over the following months, though they ended up losing money on every machine sold.[9]

A similar machine appeared several months later, this time as a hobby magazine story, on the cover of the July 1974 issue of Radio-Electronics: “Build the Mark-8 Minicomputer,” ran the headline (notice again the “minicomputer” terminology: a PDP-8 of one’s own remained the dream). The Mark-8 came from Jonathan Titus, a grad student from Virginia, who had built his own 8008-based computer and wanted to share the design with the rest of the hobby. Unlike SCELBI, he did not sell it as a complete machine or even a kit: he expected the Radio-Electronics reader to buy and assemble everything themselves. That is not to say that Titus made no money: he followed a hobby-entrepreneur business model similar to Don Lancaster’s, offering an instructional guidebook for $5, and making some pre-made boards available for sale through a retailer in New Jersey, Techniques, Inc.

The 1974 Mark-8 Radio-Electronics cover.

The SCELBI-8H and Mark-8 looked much more like a “real” minicomputer than the NRI 832 or Kenbak-1. A hobbyist hungry for a PDP-8-like machine of their own could recognize in this generation of machines something edible, at least.
Both used an eight-bit parallel processor, not an antiquated bit-serial architecture, came with one kilobyte of random-access memory, and were designed to support textual input/output devices. Most importantly, both could be extended with additional memory or I/O cards. These were computers you could tinker with, that could become an ongoing hobby project in and of themselves. A ham radio operator and engineering student in Austin, Texas named Terry Ritter spent over a year getting his Mark-8 fully operational with all of the accessories that he wanted, including an oscilloscope display and cassette tape storage.[10]

In the second half of 1974, a community of hundreds of hobbyists like Ritter began to form around 8008-based computers, significantly larger than the tiny cadre of Amateur Computer Society members. In September 1974, Hal Singer began publishing the Mark-8 User Group Newsletter (later renamed the Micro-8 Newsletter) for 8008 enthusiasts out of his office at the Cabrillo High School Computer Center in Lompoc, California. He attracted readers from all across the country: California and New York, yes, but also Iowa, Missouri, and Indiana. Hal Chamberlin started the Computer Hobbyist newsletter two months later. Hobby entrepreneurship expanded around the new machines as well: Robert Suding formed a company in Denver called the Digital Group to sell a packet of upgrade plans for the Mark-8.[11]

The first tender blossoms of a hobby computer community had begun to emerge. Then another computer arrived like a spring thunderstorm, drawing whole gardens of hobbyists up across the country and casting the efforts of the likes of Jonathan Titus and Hal Singer in the shade. It, too, came as a response to the arrival of the Mark-8, by a rival publication in search of a blockbuster cover story of its own.

Altair Arrives

Art Salsberg and Les Solomon, editors at Popular Electronics, were not oblivious to the trends in the hobby, and had been on the lookout for a home computer kit they could put on their cover since the appearance of the TV Typewriter in the fall of 1973. But the July 1974 Mark-8 cover story at rival Radio-Electronics threw a wrench in their plans: they had an 8008-based design of their own lined up, but couldn’t publish something that looked like a copy-cat machine. They needed something better, something to one-up the Mark-8. So, they turned to Ed Roberts. He had nothing concrete, but had pitched Solomon a promise that he could build a computer around the new, more powerful Intel 8080 processor. This pitch became Altair—named, according to legend, by Solomon’s daughter, after the destination of the Enterprise in the Star Trek episode “Amok Time”—and it set the hobby electronics world on fire when it appeared as the January 1975 Popular Electronics cover story.

The famous Popular Electronics Altair cover story.

Altair, it should be clear by now, was continuous with what came before: people had been dreaming of and hacking together home computers for years, and each year the process became easier and more accessible, until by 1974 any electronics hobbyist could order a kit or parts for a basic home computer for around $500. What set the Altair apart, what made it special, was the sheer amount of power it offered for the price, compared to the SCELBI-8H and Mark-8. The Altair’s value proposition poured gasoline onto smoldering embers: it was an accelerant that transformed a slowly expanding hobby community into a rapidly expanding industry.
The Altair’s surprising power derived ultimately from the nerve of MITS founder Ed Roberts. Roberts, like so many of his fellow electronics hobbyists, had developed an early passion for radio technology that was honed into a professional skill by technical training in the U.S. armed forces—the Air Force, in Roberts’ case. He founded Micro Instrumentation and Telemetry Systems (MITS) in Albuquerque with fellow Air Force officer Forrest Mims to sell electronic telemetry modules for model rockets. A crossover hobby-entrepreneur business, this straddled two hobby interests of the founders, but did not prove very profitable. A pivot in 1971 to sell low-cost kits to satiate the booming demand for pocket calculators, on the other hand, proved very successful—until it wasn’t. By 1974 the big semiconductor firms had vertically integrated and driven most of the small calculator makers out of business. For Roberts, the growing hobby interest in home computers offered a chance to save a dying MITS, and he was willing to bet the company on that chance. Though already $300,000 in debt, he secured a loan of $65,000 from a trusting local banker in Albuquerque, in September 1974. With that money, he negotiated a steep volume discount from Intel by offering to buy a large quantity of “ding-and-dent” 8080 processors with cosmetic damage. Though the 8080 listed for $360, MITS got them for $75 each. So, while Wadsworth at SCELBI (and builders assembling their own Mark-8s) were paying $120 for 8008 processors, MITS was paying just over half that for a far better processor.[12]

It is hard to overstate what a substantial leap forward in capabilities the 8080 represented: it ran much faster than the 8008, integrated more capabilities into a single chip (for which the 8008 required several auxiliary chips), could support four times as much memory, and had a much more flexible 40-pin interface (versus the 18 pins on the 8008). The 8080 also kept its program stack in external memory, while the 8008 had a strictly size-limited on-CPU stack, which limited the software that could be written for it. The 8080 represented such a large leap forward that, until 1981, essentially the entire personal and home computer industry ran on the 8080 and two similar designs: the Zilog Z80 (a processor that was software-compatible with the 8080 but ran at higher speeds), and the MOS Technology 6502 (a budget chip with roughly the same capabilities as the 8080).[13]

The release of the Altair kit at a total price of $395 instantly made the 8008-based computers irrelevant. Nat Wadsworth of SCELBI reported that he was “devastated by appearance of Altair,” and “couldn’t understand how it could sell at that price.” Not only was the price right, the Altair also looked more like a minicomputer than anything before it. To be sure, it came standard with a measly 256 bytes of memory and the same “switches and lights” interface as the ancient kits from 1971. It would take quite a lot of additional money and effort to turn it into a fully functional computer system. But it came full of promise, in a real case with an extensible card slot system for adding additional memory and input/output controllers.
It was by far the closest thing to a PDP-8 that had ever existed at a hobbyist price point—just as the Popular Electronics cover claimed: “World’s First Minicomputer Kit to Rival Commercial Models.” It made the dream of the home computer, long cherished by thousands of computer lovers, seem not merely imminent, but immanent: the digital divine made manifest. And this is why the arrival of the MITS Altair, not of the Kenbak-1 or the SCELBI-8H, is remembered as the founding event of the personal computer industry.[14]

All that said, even a tricked-out Altair was hardly useful, in an economic sense. If pocket calculators began as a tool for business people, and then became so cheap that people bought them as a toy, the personal computer began as something so expensive and incapable that only people who enjoyed them as a toy would buy them. Next time, we will look at the first years of the personal computer industry: a time when the hobby computer producers briefly flourished and then wilted, mostly replaced and outcompeted by larger, more “serious” firms. But a time when the culture of the typical computer user remained very much a culture of play.

Appendix: Micral N, The First Useful Microcomputer

There is another machine sometimes cited as the first personal computer: the Micral N. Much like Nat Wadsworth, French engineer François Gernelle was smitten with the possibilities opened up by the Intel 8008 microprocessor, but could not convince his employer, Intertechnique, to use it in their products. So, he joined other Intertechnique defectors to form Réalisation d’Études Électroniques (R2E), and began pursuing some of their erstwhile company’s clients. In December 1972, R2E signed an agreement with one of those clients, the Institut National de la Recherche Agronomique (INRA, a government agronomical research center), to deliver a process control computer for their labs at a fraction of the price of a PDP-8. Gernelle and his coworkers toiled through the winter in a basement in the Paris suburb of Châtenay-Malabry to deliver a finished system in April 1973, based on the 8008 chip and offered at a base price of 8,500 francs, about $2,000 in 1973 dollars (one fifth the going rate for a PDP-8).[15]

The Micral N was a useful computer, not a toy or a plaything. It was not marketed and sold to hobbyists, but to organizations in need of a real-time controller. That is to say, it served the same role in the lab or on the factory floor that minicomputers had served for the previous decade. It can certainly be called a microcomputer by dint of its hardware. But the Altair lineage stands out because it changed how computers were used and by whom; the microprocessor happened to make that economically possible, but it did not automatically make every machine into which it was placed a personal computer.

The Micral N looks very much like the Altair on the outside, but was marketed entirely differently [Rama, CC BY-SA 2.0 FR].

Useful personal computers would come, in time. But the demand that existed for a computer in one’s own home or office in the mid-1970s came from enthusiasts with a desire to tinker and play on a computer, not to get serious business done on one. No one had yet written and published the productivity software that would even make a serious home or office computer conceivable.
Moreover, it was still far too expensive and difficult to assemble a comprehensive office computer system (with a display, ample memory, and external mass storage for saving files) to attract people who didn’t already love working on computers for their own sake. Until these circumstances changed, which would take several years, play reigned unchallenged among home computer users. The Micral N is an interesting piece of history, but it is an instructive contrast with the story of the personal computer, not a part of it.

Britain’s Steam Empire

The British empire of the nineteenth century dominated the world’s oceans and much of its landmass: Canada, southern and northeastern Africa, the Indian subcontinent, and Australia. At its world-straddling Victorian peak, this political and economic machine ran on the power of coal and steam; the same can be said of all the other major powers of the time, from also-ran empires such as France and the Netherlands, to the rising states of Germany and the United States. Two technologies bound the far-flung British empire together, steamships and the telegraph; and the latter, which might seem to represent a new, independent technical paradigm based on electricity, depended on the former. Only steamships, which could adjust course and speed at will regardless of prevailing winds, could effectively lay underwater cable.[1]

A 1901 map of the cable network of the Eastern Telegraph Company (which later became Cable & Wireless) shows the pervasive commercial and imperial power of Victorian London.

Not just an instrument of imperial power, the steamer also created new imperial appetites: the British empire and others would seize new territories just for the sake of provisioning their steamships and protecting the routes they plied. Within this world system under British hegemony, access to coal became a central economic and strategic factor. As the economist Stanley Jevons wrote in his 1865 treatise on The Coal Question:

Day by day it becomes more obvious that the Coal we happily possess in excellent quality and abundance is the Mainspring of Modern Material Civilization. …Coal, in truth, stands not beside but entirely above all other commodities. It is the material energy of the country — the universal aid — the factor in everything we do. With coal almost any feat is possible or easy; without it we are thrown back into the laborious poverty of early times.[2]

Steamboats and the Projection of Power

As the states of Atlantic Europe—Portugal and Spain, then later the Netherlands, England, and France—began to explore and conquer along the coasts of Africa and Asia in the sixteenth and seventeenth centuries, their cannon-armed ships proved one of their major advantages. Though the states of India and Indonesia had access to their own gunpowder weaponry, they did not have the ship-building technology to build stable firing platforms for large cannon broadsides. The mobile fortresses that the Europeans brought with them allowed them to dominate the sea lanes and coasts, wresting control of the Indian Ocean trade from the local powers.[3]

What they could not do, however, was project power inland from the sea. The galleons and later heavily armed ships of the Europeans could not sail upriver. In this era, Europeans could rarely dominate inland states. When it did happen, as in India, it typically required years or decades of warfare and politicking, with the aid of local alliances. The steamboat, however, opened the rivers of Africa and Asia to lightning attacks or shows of force: directly by armed gunboats themselves, or indirectly through armies moving upriver supplied by steam-powered craft. We already know, of course, how Laird used steamboats in his expedition up the Niger in 1832. Although his intent was purely commercial, not belligerent, he had demonstrated that the interior of Africa could be navigated with steam. When combined with quinine to protect European settlers from malaria, the steamboat would help open a new wave of imperial claims on African territory.
But even before Laird’s expedition, the British empire had begun to experiment with the capabilities of riverine steamboats. British imperial policy in Asia still operated under the corporate auspices of the East India Company (EIC), not under the British government, and in 1824 the EIC went to war with Burma over control of territories between the Burmese Empire and British India, in what is now Bangladesh. It so happened that the company had several steamers on hand, built in the dockyards of Calcutta (now Kolkata), and the local commanders put them to work in war service (much as Andrew Jackson had done with Shreve’s Enterprise in 1814).[4] Most impressive was Diana, which penetrated 400 miles up the Irrawaddy to the Burmese imperial capital at Amarapura: “she towed sailing ships into position, transported troops, reconnoitered advance positions, and bombarded Burmese fortifications with her swivel guns and Congreve rockets.”[5] She also captured the Burmese warships, which could not outrun her and whose small cannons on fixed mounts could not effectively put fire on her either.

A depiction of an attack on Burmese fortifications by the British fleet. The steamship Diana is at right.

In the Burmese war, however, steamships had served as the supporting cast. In the First Opium War, the steamship Nemesis took a star turn. The East India Company traditionally made its money by bringing the goods of the East—mainly tea, spices, and cotton cloth—back west to Europe. In the nineteenth century, however, the directors had found an even more profitable way to extract money from their holdings in the subcontinent: by growing poppies and trading the extracted drug even further east, to the opium dens of China. The Qing state, understandably, grew to resent this trade that immiserated its citizens, and so in 1839 the emperor promulgated a ban on the drug. The iron-hulled Nemesis was built and dispatched to China by the EIC with the express purpose of carrying war up China’s rivers. She mounted a powerful main battery of twin swivel-mount 32-pounders and numerous smaller weapons, and with a shallow draft was able to navigate not just up the Pearl River, but into the shallow waterways around Canton (Guangzhou), destroying fortifications and ships and wreaking general havoc. Later Nemesis and several other steamers, towing other battleships, brought British naval power 150 miles up the Yangtze to its junction with the Grand Canal. The threat to this vital economic lifeline brought the Chinese government to terms.[6]

Nemesis and several British boats destroying a fleet of Chinese junks in 1841.

Steamboats continued to serve in imperial wars throughout the nineteenth century. A steam-powered naval force dispatched from Hong Kong helped to break the Indian Rebellion of 1857. Steamers supplied Herbert Kitchener’s 1898 expedition up the Nile to the Sudan, with the dual purpose of avenging the death of Charles “Chinese” Gordon fourteen years earlier and of preventing the French from securing a foothold on the Nile. His steamboat force consisted of a mix of naval gunboats and a civilian ship requisitioned from the ubiquitous Cook & Son tourism and logistics firm.[7] Kitchener could only dispatch such an expedition because of the British power base in Cairo (from whence Britain ruled Egypt through a puppet khedive), and that power base existed for one primary reason: to protect the Suez Canal.
The Geography of Steam: Suez

In 1798, Napoleon’s army of conquest, revolution, and Enlightenment arrived in Egypt with the aim of controlling the Eastern half of the Mediterranean and cutting off Britain’s overland link to India. There they uncovered the remnants of a canal linking the Nile Delta to the Red Sea. Constructed in antiquity and restored several times after, it had fallen into disuse sometime in the medieval period. It’s impossible to know for certain, but when operable, this canal had probably served as a regional waterway connecting the Egyptian heartland around the Nile with the lands around the head of the Red Sea. By the eighteenth century, in an age of global commerce and global empires, however, a nautical connection between the Mediterranean and Red Sea had more far-reaching implications.[8]

A reconstruction of the possible location of the ancient Nile-Suez canal. [Picture by Annie Brocolie / CC BY-SA 2.5]

Napoleon intended to restore the canal, but before any work could commence, France’s forces in Egypt withdrew in the face of a sustained Anglo-Ottoman assault. Though British commercial and imperial interests presented a far stronger case for a canal than any benefits France might have hoped to get from it, the British government fretted about upsetting the balance of power in the Middle East and disrupting their textile industry’s access to Egyptian cotton cloth. They contented themselves instead with a cumbrous overland route to link the Red Sea and the Mediterranean. Meanwhile, a series of French engineers and diplomats, culminating in Ferdinand de Lesseps, pressed for the concession required to build a sea-to-sea Suez Canal, and construction under French engineers finally began in 1861. The route formally opened in November 1869 in a grand celebration that attracted most of the crowned heads of continental Europe.[9]

It was just as well that the project was delayed: it allowed for the substitution, in 1865, of steam dredges for conscripted labor at the work site. Of the hundred million cubic yards of earth excavated for the canal, four-fifths were dug out with iron and steam rather than muscle, generating 10,000 horsepower at the cost of £20,000 of coal per month.[10] Without mechanical aid, the project would have dragged on well into the 1870s, if it were completed at all. Moreover, Napoleon’s precocious belief in the project notwithstanding, the canal’s ultimate fiscal health depended on the existence of ocean-going steamships as well. By sail, depending on the direction of travel and the season, the powerful trade winds on the southern route could make it the faster option, or at least the more efficient one given the tolls on the canal.[11] But for a steamship, the benefits of cutting off thousands of miles from the journey were three-fold: it didn’t just save time, it also saved fuel, which in turn freed more space for cargo. Given the tradeoffs, as historian Max Fletcher wrote, “[a]lmost without exception, the Suez Canal was an all-steamer route.”[12]

The modern Suez Canal, with the Mediterranean Sea on the left and the Red Sea on the right. [Picture by Pierre Markuse / CC BY 2.0]

Ironically, the British, too conservative in their instincts to back the canal project, would nonetheless derive far more obvious benefit from it than the French government or investors, who struggled to make their money back in the early years of the canal. The new canal became the lifeline to the empire in India and beyond.
This new channel for the transit of people and goods was soon complemented by an even more rapid channel for the transmission of intelligence. The first great achievement of the global telegraph age was the transatlantic cable laid in 1866 by Brunel’s Great Eastern, whose cavernous bulk allowed it to lay the entire line from Ireland to Newfoundland in a single piece.[13] This particular connection served mainly commercial interests, but the Great Eastern went on to participate in the laying of a cable from Suez to Aden and on to Bombay in 1870, providing relatively instantaneous electric communication (modulo a few intermediate hops) from London to its most precious imperial possession.[14] The importance of the Suez for quick communications with India in turn led to further aggressive British expansion in 1882: the bombarding of Alexandria and the de facto conquest of an Egypt still nominally loyal to the Sultan in Istanbul.

This was not the only such instance. Steam power opened up new ways for empires to exert their might, but also pulled them to new places sought out only because steam power itself had made them important.

The Geography of Steam: Coaling Stations

In that vein, coaling stations—coastal and island stations for restocking ships with fuel—became an essential component of global empire. In 1839, the British seized the port of Aden (on the gulf of the same name) from the Sultan of Lahej for exactly that purpose, to serve as a coaling station for the steamers operating between the Red Sea and India.[15] Other, pre-existing waystations waxed or waned in importance along with the shift from the geography of sail to that of steam. St. Helena in the Atlantic, governed by the East India Company since the 1650s, could only be of use to ships returning from Asia in the age of sail, due to the prevailing trade winds that pushed outbound ships towards South America. The advent of steam made an expansion of St. Helena’s role possible, but then the opening of Suez diverted traffic away from the South Atlantic altogether. The opening of the Panama Canal similarly eclipsed the Falkland Islands’ position as the gateway to the Pacific.[16]

In the case of shore-bound stations such as Aden, the need to protect the station itself sometimes led to new imperial commitments in its hinterlands, pulling empire onward in the service of steam. Aden’s importance only multiplied with the opening of the Suez Canal, which now made it part of the seven-thousand-mile relay system between Great Britain and India. Aggressive moves by the Ottoman Empire seemed to imperil this lifeline, and so the existence of the station became the justification for Britain to create a protectorate (a collection of vassal states, in effect) over 100,000 square miles of the Arabian Peninsula.[17]

Britain created the 100,000-square-mile Aden protectorate to safeguard its steamship route to India.

Coaling stations acquired local coal where it was available—from North America, South Africa, Bengal, Borneo, or Australia; where it was not, it had to be brought in, ironically, by sailing ships. But although one lump of coal may seem as good as another, it was not, in fact, a single fungible commodity. Each seam varied in the ratio and types of chemical impurities it contained, which affected how the coal burned. Above all, the Royal Navy was hungry for the highest quality coal.
By the 1850s, the British Admiralty had determined that a hard coal from the deeper layers of certain coal measures in South Wales exceeded all others in the qualities required for naval operations: a maximum of energy and a minimum of residues that would dirty engines and black smoke that would give away the position of their ships over the horizon. In 1871 the Navy launched its first all-steam oceangoing warship, the HMS Devastation, which needed, at full bore, 150 tons of this top-notch coal per day, without which it would become “the veriest hulk in the navy.” The coal mines lining a series of north-south valleys along the Bristol Channel, which had previously supplied the local iron industry, thus became part of a global supply chain. The Admiralty demanded access to imported Welsh coal across the globe, in every port where the Navy refueled, even where local supplies could be found.[18]

The dark green area indicates the coal seams of South Wales, where the best steam coal in the world could be found.

The British supply network far exceeded that of any other nation in its breadth and reliability, which gave their navy a global operational capacity that no other fleet could match. When the Russians sent their Baltic fleet to attack Japan in 1905, the British refused it coaling service and pressured the French to do likewise, leaving the ships reliant on sub-par German supplies. The fleet suffered repeated delays and quality shortfalls in its coal before meeting its grim fate in Tsushima Strait. Aleksey Novikov-Priboi, a sailor on one of the Russian ships, later wrote that “coal had developed into an idol, to which we sacrificed strength, health, and comfort. We thought only in terms of coal, which had become a sort of black veil hiding all else, as if the business of the squadron had not been to fight, but simply to get to Japan.”[19] Even the rising naval power of the United States, stoked by the dreams of Alfred Mahan, could scarcely operate outside its home waters without British sufferance. The proud Great White Fleet of the United States that circumnavigated the globe to show the flag found itself repeatedly humbled by the failures of its supply network, reliant on British colliers or left begging for low-quality local supplies.[20]

But if British steam power on the oceans still outshone that of the U.S. even beyond the turn of the twentieth century, on land it was another matter, as we shall see next time.

America’s Steam Empire

[Apologies for the long delay on this one; a combination of writer’s block and a house move slowed me down this summer. Hopefully the next installment will follow more rapidly!]

Railroads and Continental Power

The Victorian Era saw the age of steam at its flood tide. Steam-powered ships could decide the fate of world affairs, a fact that shaped empires around the demands of steam, and that made Britain the peerless power of the age. But steam created or extended commercial and cultural networks as well as military and political ones. Faster communication and transportation allowed imperial centers to more easily project power, but it also allowed goods and ideas to flow more easily along the same links. Arguably, it was more often commercial than imperial interests that drove the building of steamships, the sinking of cables and the laying of rail, although in many cases the two interests were so entangled that they can hardly be separated: the primary attraction of an empire, after all (other than prestige), lay in the material advantages to be extracted from the conquered territories.

The growth of the rail system in the United States provides a case study in this entanglement. While British commercial and imperial power derived from its command of the oceans, America drew strength from the continental scale of its dominions. Steamboats had gone some way to making the vast interior more accessible, and played a supporting role in the wars that wrested control of the continent from the Native American nations and Mexico. A steam-powered fleet raided Mexican ports and helped seize a coastal base for the Army at Vera Cruz in 1847, but the Army then had to march hundreds of miles overland to capture Mexico City, supplied by pack mules. Likewise, steamboats delivered troops and supplied firepower in the numerous Indian Wars of the nineteenth century, when a nearby navigable waterway existed.[1] But more often than not, the Army relied on literal horsepower.

A steamboat on the Missouri River.

The technology that did bind the continent once and for all by steam power was the railroad. The early development of rails in the U.S. recapitulated the British story, on a smaller scale and in a compressed timeframe: horse-drawn mine rails led to small local horse-drawn freight networks, which were followed in turn by intercity lines carrying a mix of passengers and freight, which then finally, gradually adopted steam locomotives as their exclusive source of rail traction. All the pieces were thus in place for a rail boom in the U.S. in the 1830s, roughly contemporaneous with the explosion of railways in Britain.[2]

The American merchant class threw their money at rail projects, drawn to the new technology by avarice and driven towards it by fear. The Erie Canal was the chief symbol and author of that fear. Completed in 1825, it threatened to drain all the wealth of the West into New York City via the Great Lakes. Other leading mercantile cities on the seaboard—such as Philadelphia, Baltimore, and Charleston—risked being bypassed and left behind without a gateway to the growing population and commerce of the west. Their states reacted with grand projects to compete with New York’s.[3]

Cutting a canal of their own was one option, of course, but without an existing watercourse going in the right general direction, a feature which some cities like Baltimore entirely lacked, this would prove very difficult. The Appalachians, moreover, presented a daunting obstacle to an all-water route to the west.
Tunnels could bore through high ground, locks and inclines could lift boats over it, but all at a formidable cost. And even with horse traction (which remained common throughout the 1830s), rail wagons could travel faster than a towed canal boat. So, by 1830, several railways (such as the Baltimore & Ohio, or B&O, intended to link the city to the river of that name, though it would take over two decades to do so) began to stretch westward.

This twentieth-century relief map showing the route of the Baltimore and Ohio Railroad gives a sense of the daunting geography that had to be dealt with. [George P. Grimsley, “The Baltimore & Ohio Railroad,” XVI International Geological Congress (Washington: 1933)]

Some cities that had already launched canal companies switched over to rail as events in Britain made the practicability of the technology clear. Pennsylvania, despite having already invested heavily in canals, abandoned a plan to connect the Delaware and Susquehanna by canal based on intelligence from England. William Strickland, a disciple of Henry Latrobe who visited England in 1825 to learn about the latest developments in transportation, advised the government that railroads were the future, so the state instead backed an eighty-two-mile railroad from Philadelphia to Columbia.[4] In the early years, American rail technology depended heavily on engineers like Strickland who had traveled to Britain to learn about locomotive and railroad design. The first major rail lines in Massachusetts, New Jersey, Pennsylvania, and Maryland all imitated the techniques used to construct the Liverpool and Manchester line in England.[5] To the extent that these early railways were steam-powered, they also relied mostly on locomotives imported from Britain or modeled on British exemplars. Many early American locomotives either came straight from the workshop of George and Robert Stephenson in Newcastle, or copied the design of the Stephensons’ Samson or Planet locomotives.[6]

Old Ironsides, the first locomotive built by Philadelphia manufacturer Matthias Baldwin. It is a near-exact copy of the Stephensons’ Planet.

Three factors gradually shunted American railroad technology off onto a different track from that of its British forebears: the presence of the Appalachians, the relative dearth of capital and labor west of the Atlantic, and the abundance there of cheap land and timber. The dominant railway pattern in Britain consisted of heavily graded routes made as flat and straight as possible, with gentle curves that both kept the locomotive and wagons secure on the tracks and minimized the cost of land acquisition. They were built to last, with bridges and viaducts constructed of sturdy stone and iron.[7] The same kind of construction could be found in the early railways on the eastern seaboard: the Thomas Viaduct, for example, on the B&O line, spanned (and still spans) the Patapsco River on arches of solid masonry.

The Thomas Viaduct, typical of the British style in early American railroad design.

But American builders could not afford to take the same approach as they moved westward, crossing the high mountains and vast distances required to reach the small towns of the Ohio valley and other points west. The United States for the most part still embodied the Jeffersonian ideal of a rural, agrarian society, and especially so in the west, where only 7% or so of the population lived in towns.
Larger cities with a wealthy merchant class, a robust banking system, and capital to spare existed only on the coasts.[8] A scrappier approach would be needed to make railways work in this context. Cheap construction trumped all other factors. In the early years, builders frequently resorted to flimsy “strap-iron” rails, consisting of a thin veneer of iron nailed to a wooden rail. They avoided expensive tunnelling or levelling operations to cross hills or mountains in favor of steeper gradients and tighter curves: by 1850, the U.S. had dug only eleven miles of railway tunnels compared to eighty in Britain, despite having several times Britain’s total track mileage by that point, much of which crossed mountainous terrain. As rails moved westward, American rail builders figured out how to construct bridges of timber trusses, a material readily available in the heavily wooded Ohio Valley, rather than iron or heavy stone construction like the Thomas Viaduct.[9]

A wooden trestle bridge over the Genesee River in New York, more typical of the fully developed style of American railroad building.

The steep grades and sharp curves of American railways required changes to locomotive design: more powerful engines to haul loads up steeper slopes, and swiveling wheels for navigating turns without derailing. In 1832, John B. Jervis, chief engineer for New York’s Mohawk and Hudson Railroad, devised a four-wheeled truck for the front of his locomotive, which could rotate independently of the main carriage, allowing the locomotive to turn through much tighter angles. Other builders quickly copied the idea. Matthias Baldwin of Philadelphia, who went on to become the most prolific builder of American locomotives, had modeled his first (1831) locomotive on the Stephenson Planet. By 1834, however, he had developed a new design that incorporated Jervis’ bogie, a design that he would sell by the dozen over the next decade.[10] A few years later, a competing Philadelphia locomotive builder, Joseph Harrison Jr., developed the equalizing beam to distribute the weight of the vehicle evenly over multiple axles. This opened the way to locomotives with four or more driving wheels, providing the power needed to ascend mountain grades.[11]

Baldwin’s 1834 Lancaster. Note that the front four wheels can swivel independently of the rear drive wheels.

Iron Rivers

One of the defining processes of modern times has been the decoupling of humanity from the cycles and contours of the natural world, contours and cycles that shaped its existence for millennia. Steam power, as we have seen before, abetted this process by providing a free-floating source of mechanical power, using energy “cheated” from nature by drawing down reserves of carbonaceous matter stored up for eons underground. The course of rivers and streams, which had guided human settlement since humans began settling, provides a case in point. A river provides a source of drinking water and a natural sewer, but also a highway for travel and trade. Since before recorded history, people had moved bulk goods (such as food, fodder, fuel, timber, and ore) mainly by water. The steamship allowed people to exploit such waterways more intensively, but then rail lines appeared and extended existing watersheds, acting as new tributaries.
Finally, the main-line railroads that emerged by mid-century created artificial iron rivers, entirely independent of water, draining goods from their catchment area out to a major commercial hub where they might find a buyer.[12] As these rails reached westward in the United States, they also drained the life out of the steamboating trade, which faded to a shadow of its former self. Trains ran several times faster, followed the straightest course possible from town to town, and—unaffected by drought, flood, or freeze—operated year-round in virtually any weather.[13] Efficient, reliable, and immune from the whims and cycles of nature, they were modernity incarnate. As Mark Twain reflected in 1883, on revisiting St. Louis for the first time in decades: …the change of changes was on the ‘levee.’ …Half a dozen sound-asleep steamboats where I used to see a solid mile of wide-awake ones! This was melancholy, this was woeful. The absence of the pervading and jocund steamboatman from the billiard-saloon was explained. He was absent because he is no more. His occupation is gone, his power has passed away, he is absorbed into the common herd, he grinds at the mill, a shorn Samson and inconspicuous.[14] By the 1880s major riverfront cities such as Cincinnati and Louisville, cities that owed their existence to the Ohio river trade, cities molded by the millennia-old pattern of waterborne commerce, spurned the natural highway that lay at their feet. They shipped out some 95% of their goods—from cotton and tobacco to ham and potatoes—by rail.[15] The steamboat had clearly lost out. But in the long run, none of the also-ran cities of the eastern seaboard—such as Baltimore, Philadelphia and Charleston—gained much on their peers from their investments in the railroad, either. New York continued to dominate them all. Instead, the biggest winner of the dawning American rail age emerged at the junction of the new iron rivers of the Midwest; a vast new metropolis was rising from the mudflats of the Lake Michigan shoreline on the back of the railroad. Player With Railroads Rivers and harbors had given life to many a great metropolis over the millennia; Chicago was the first to be quickened by rails. Not that water had nothing to do with it: Chicago’s small river ran close to the watershed of the Illinois River, giving it huge potential as a water link that could connect shipping flows on the Mississippi River system to the Great Lakes (and thus, via the Erie Canal, New York, the commercial nexus of the entire country). In the 1830s, Chicago was still a muddy little trading entrepot, its hinterlands recently wrested from the Potawatomi Indians, but a speculative real estate bubble took off on the assumption that it would explode in importance once a canal was built to connect the two water systems.[16] That bubble collapsed with the crash of 1837, and the hoped-for canal did not finally appear until April 1848, with the help of the state and federal government.[17] By that time, the first of the railroads that would soon overshadow the canal in economic and cultural importance had already begun construction. The Galena and Chicago Union was overseen by Chicago bigwigs, but funded mainly by farmers along the proposed route, who opened their pockets in the (justified) belief that a railroad would drive up the value of their crops and their lands. 
By the start of the Civil War, the Galena and Chicago formed just one part of a vascular system of rails fanning out from Chicago across Illinois and southern Wisconsin to various points on the Mississippi—Galena to the northwest, Rock Island west, and Quincy southwest—that brought farm produce from the hinterlands into the city and returned with manufactured goods—like the new, Chicago-made, McCormick Reaper.

Chicago’s railroads circa 1866. The lines fanning out to the west (such as the Chicago & North Western and Chicago, Rock Island & Pacific) connected Chicago to the natural resources of the Midwest. The trunk lines along Lake Michigan (Pittsburgh, Fort Wayne and Chicago; Lake Shore & Michigan Southern) connected it to the markets of the East. [David Buisseret, Historic Illinois from the Air (Chicago: University of Chicago Press, 1990), p.135]

These lines formed the first of two different “railsheds” that served Chicago. The other, owned and operated mostly by eastern capital, consisted of a series of parallel trunk lines that formed an arterial connection to the cities of the east, especially New York. The base of Lake Michigan—a barrier, rather than a highway, from the point of view of the railroads—served as a choke point that brought both of these rail systems to Chicago. Competition among the various entering lines (and, in the ice-free months, with lake traffic for bulk goods) kept rates low and furthered Chicago’s advantages. The western rail system gathered in the products of the plains and prairies of the West—grain, livestock, and timber—while the eastern system disgorged them en masse to hungry markets. The city in between served as middleman, market maker, processor, storehouse, and more: “Hog Butcher for the World, / Tool Maker, Stacker of Wheat, / Player with Railroads and the Nation’s Freight Handler.”[18]

Chicago’s rival as gateway to the West, St. Louis, had long served as the concentration point for goods flowing from the territory north and west of it along the Mississippi and Missouri rivers, the former stomping grounds of Lewis and Clark. But as the Chicago railroads reached the Mississippi, they siphoned that traffic off to the east, starving St. Louis of commercial sustenance.

An 1870s rendition of the Union Stock Yards of Chicago, where the livestock of the plains became meat. The area is now an industrial park.

The rivermen fought a brief rear guard action in the mid-1850s: they tried to block the railroads from spreading further west by having the Chicago and Rock Island bridge across the Mississippi declared a hazard to navigation in 1857. Future president Abraham Lincoln traveled to Chicago to spearhead the case for the defense, and secured a hung jury, which was, practically speaking, a victory for the railroad interests.[19] Towns like Omaha, Nebraska, which might have naturally oriented their trade downriver to Missouri, now looked east. As one correspondent reported circa 1870, “Omaha eats Chicago groceries, wears Chicago dry goods, builds with Chicago lumber, and reads with Chicago newspapers. The ancient store boxes in the cellar have ‘St. Louis’ stenciled on them; those on the pavement, ‘Chicago.’”[20]

St. Louis was not the only party to suffer from the westward expansion of the railroads, however, and its fate was far from the bleakest.

Annihilating Distance

In the Mexican-American war of 1846-1848, the United States had acquired vast new territories in the West, including Alta (upper) California, on the Pacific coast.
Then, shortly thereafter, James Marshall found flecks of gold in the waters of the sawmill he had established in the hills east of Sutter’s Fort, site of the future city of Sacramento. Word of wealth running in the streams drew the desperate, foolish, and cunning to the new territory by the hundreds of thousands. For those on the Atlantic seaboard, the fastest route to instant riches required two steamer legs in the Gulf of Mexico and the Pacific, bridged by a short but difficult crossing of the malarial isthmus of Panama; this could be done in a month or two if the steaming schedules lined up favorably. The sea journey clear round the southern tip of South America and back took two or three times as long, but avoided the risks of tropical disease. The direct landward journey offered the worst of both worlds: it took just as much time as the Cape Horn route with the added risk of death by illness or injury, along with the nagging fear of Indian attack. Only those who could not afford sea passage chose to go this way.[21]

As California’s population boomed and Pacific trade began to expand, any American with a lick of avarice could see that great profit would be derived from a safer and more reliable means of reaching the Pacific, that transcontinental rail links would provide the best such means, and that—treaties and other promises notwithstanding—the natives living along the way would have to be pushed aside in the name of progress.

The Kansas-Nebraska Act only left the rump of Oklahoma as ‘unorganized territory’ not (yet) claimed for the use of white settlers.

That work began in earnest in the mid-1850s. The Kansas-Nebraska Act of 1854, best known for its calamitous escalation of the rising tensions over slavery that would soon engender Civil War, originated in the desire of rail promoters like Senator Stephen Douglas of Illinois to open a route to the west. Douglas preferred a route through Nebraska along the flatlands of the Platte River valley, but that was treaty-bound Indian Territory, designated for tribes such as the Kickapoo, Delaware, Shawnee, and others. No investors would touch a railroad company that did not pass through securely white-controlled land, and so the Indian Territory would have to give way to new American territories, Kansas and Nebraska. Those previously living there could either decamp to parts still further west or be herded into the last remnant of the Indian Territory in Oklahoma. Anyone paying attention could foresee that neither refuge would stay a refuge for long.[22]

The transcontinental railroad exhibited the typical American style of building, with wooden trestle bridges such as this one. This locomotive has four drive wheels, to provide more hauling power.

The machinations involved in planning and funding the transcontinental route were extensive enough to fill entire books. The Civil War provided the crucial impetus to end the talking and start the building, because the federal government no longer needed to take Southern opinion into account in its planning. As Douglas had advocated, the route began at the junction of the Platte with the Missouri River at Omaha and stretched west across plains and mountains to Sacramento, the epicenter of the Gold Rush. Despite a handful of raids that damaged equipment or killed small parties of workers or soldiers, the Cheyenne, Sioux and other tribes that lived in the area could do little to impede the coming of the iron road, which could count on the protection of the U.S. Army.
In addition to providing military cover, the government made the whole enterprise worthwhile for the railroad companies (the Central Pacific and Union Pacific) by allotting them generous land grants along the right-of-way which they could sell to farmers or borrow against directly.[23] Railroads of the Western U.S.  in 1880 [John K. Wright, ed., Atlas of the Historical Geography of the United States (Washington: Carnegie Institution, 1932),] The presence of the new rail route, along with numerous other lines that sprouted up across the West (often with land grants of their own), then accelerated further dispossession. They brought western lands into easy reach of eastern or immigrant settlers, and made those lands attractive by providing a way for those settlers to get their farm produce to market. The railroads also brought destruction to the keystone resource on which the livelihood of the equestrian tribes of the Great Plains depended. For decades, trains had carried domesticated livestock to urban slaughterhouses; the new lines across the Great Plains now made it profitable for white hunters to slaughter the bison herds of the plains in situ and then send their robes east by rail.[24] Railroad Time North America became a continent bound by steam; to the detriment of some but the great good fortune of others. By 1880, a rail traveler in Omaha could reach not just Sacramento, but also Los Angeles, Butte, Denver, Santa Fe, and El Paso. By 1890, the white population spreading along these rails had so completely covered the West that a distinct frontier of settlement ceased to exist.[25] Nothing better symbolizes the transformation of the United States into a railroad continent (not to mention the general power of steam to supplant natural cycles with those convenient to human economic activity) than the dawn of railroad time. In the early 1880s, the country’s railroad companies exercised their power to change the reckoning of time across the entire continent, and, for the most part, their change stuck. Traditionally, localities would set their clock to the local solar noon: the time when the sun stood highest in the sky. But this would not do for rail networks that spanned many stations; trains, unlike any earlier form of travel, could be scheduled to the minute, and they needed a standard time to schedule against. So, each rail company began keeping their own rail time (synched to the city where they were headquartered) which they used across all of their stations: in April 1883, forty-nine distinct railroad times existed in the United States. [26] In that same month, William F. Allen, a railroad engineer, put forth a proposal to a convention of railroad managers to standardize the entire U.S. rail system on a series of hour-wide time zones. This would satisfy various pressures: from scientists for a system of time they could use to align measurements across the country and the globe, from state governments for more uniform time standards, and from travelers for easier-to-understand timetables. Britain had already adopted Greenwich Mean Time as their national time for similar reasons (and by a similar process – it had begun as a country-wide railway time in 1847 before being adopted by the government in 1880). The companies duly implemented the system in November 1883, and by March of the next year, most of the major cities in the U.S. 
had adjusted their clocks to conform to the new railroad time system.[27]   As shown in this map from 1884, the railroad time system does not correspond exactly to the modern U.S. time zones (adopted by federal law in 1918), but it is recognizably similar. We have, by now, wandered a good way down the stream, exploring the consequences of the steamboat and locomotive, the most romantic and striking symbols of the age of steam. At this point we must make our way back up to the central channel of our story, resuming the story of the development of the technology of the steam engine, the prime mover itself.

High-Pressure, Part I: The Western Steamboat

The next act of the steamboat lay in the west, on the waters of the Mississippi basin. The settler population of this vast region—Mark Twain wrote that “the area of its drainage-basin is as great as the combined areas of England, Wales, Scotland, Ireland, France, Spain, Portugal, Germany, Austria, Italy, and Turkey”—was already growing rapidly in the early 1800s, and inexpensive transport to and from its interior represented a tremendous economic opportunity.[1]

Robert Livingston scored another of his political coups in 1811, when he secured monopoly rights for operating steamboats in the New Orleans Territory. (It did not hurt his cause that he himself had negotiated the Louisiana Purchase, nor that his brother Edward was New Orleans’ most prominent lawyer.) The Fulton-Livingston partnership built a workshop in Pittsburgh to build steamboats for the Mississippi trade. Pittsburgh’s central position at the confluence of the Monongahela and Allegheny made it a key commercial hub in the trans-Appalachian interior and a major boat-building center. Manufactures made there could be distributed up and down the rivers far more easily than those coming over the mountains from the coast, and so factories for making cloth, hats, nails, and other goods began to sprout up there as well.[2] The confluence of river-based commerce, boat-building and workshop know-how made Pittsburgh the natural wellspring for western steamboating.

Figure 1: The Fulton-Livingston New Orleans. Note the shape of the hull, which resembles that of a typical ocean-going boat. From Pittsburgh, the Fulton-Livingston boats could ride downstream to New Orleans without touching the ocean.

The New Orleans, the first boat launched by the partners, went into regular service from New Orleans to Natchez (about 175 miles to the north) in 1812, but their designs—upscaled versions of their Hudson River boats—fared poorly in the shallow, turbulent waters of the Mississippi. They also suffered sheer bad luck: the New Orleans grounded fatally in 1814, the aptly-named Vesuvius burnt to the waterline in 1816 and had to be rebuilt. The conquest of the Mississippi by steam power would fall to other men, and to a new technology: high-pressure steam.

Strong Steam

A typical Boulton & Watt condensing engine was designed to operate with steam below the pressure of the atmosphere (about fifteen pounds per square inch (psi)). But the possibility of creating much higher pressures by heating steam well above the boiling point was known for well over a century. The use of so-called “strong steam” dated back at least to Denis Papin’s steam digester from the 1670s. It had even been used to do work, in pumping engines based on Thomas Savery’s design from the early 1700s, which used steam pressure to push water up a pipe. But engine-builders did not use it widely in piston engines until well into the nineteenth century. Part of the reason was the suppressive influence of the great James Watt. Watt knew that expanding high-pressure steam could drive a piston, and laid out plans for high-pressure engines as early as 1769, in a letter to a friend:

I intend in many cases to employ the expansive force of steam to press on the piston, or whatever is used instead of one, in the same manner as the weight of the atmosphere is now employed in common fire-engines.
In some cases I intend to use both the condenser and this force of steam, so that the powers of these engines will as much exceed those pressed only by the air, as the expansive power of the steam is greater than the weight of the atmosphere. In other cases, when plenty of cold water cannot be had, I intend to work the engines by the force of steam only, and to discharge it into the air by proper outlets after it has done its office.[3]

But he continued to rely on the vacuum created by his condenser, and never built an engine worked “by the force of steam only.” He went out of his way to ensure that no one else did either, deprecating the use of strong steam at every opportunity. There was one obvious reason why: high-pressure steam was dangerous. The problem was not the working machinery of the engine but the boiler, which was apt to explode, spewing shrapnel and superheated steam that could kill anyone nearby. Papin had added a safety valve to his digester for exactly this reason. Savery steam pumps were also notorious for their explosive tendencies. Some have imputed a baser motive for Watt’s intransigence: a desire to protect his own business from high-pressure competition. In truth, though, high-pressure boilers did remain dangerous, and would kill many people throughout the nineteenth century.

Unfortunately, the best material for building a strong boiler was the most difficult from which to actually construct one. By the beginning of the nineteenth century copper, lead, wrought iron, and cast iron had all been tried as boiler materials, in various shapes and combinations. Copper and lead were soft; cast iron was hard, but brittle. Wrought iron clearly stood out as the toughest and most resilient option, but it could only be made in ingots or bars, which the prospective boilermaker would then have to flatten and form into small plates, many of which would have to be joined to make a complete boiler.

Advances in two fields in the decades around 1800 resolved the difficulties of wrought iron. The first was metallurgical. In the late eighteenth century, Henry Cort invented the “puddling” process of melting and stirring iron to oxidize out the carbon, producing larger quantities of wrought iron that could be rolled out into plates of up to about five feet long and a foot wide.[4] These larger plates still had to be riveted together, a tedious and error-prone process that produced leaky joints. Everything from rope fibers to oatmeal was tried as a caulking material.

To make reliable, steam-tight joints required advances in machine tooling. This was a cutting-edge field at the time (pun intended). For example, for most of history craftsmen cut or filed screws by hand. The resulting lack of consistency meant that many of the uses of screws that we take for granted were unknown: one could not cut 100 nuts and 100 bolts, for example, and then expect to thread any pair of them together. Only in the last quarter of the eighteenth century did inventors craft sufficiently precise screw-cutting lathes to make it possible to repeatedly produce screws with the same length and pitch. Careful use of tooling similarly made it possible to bore holes of consistent sizes in wrought iron plates, and then manufacture consistently-sized rivets to fit into them, without the need to hand-fit rivets to holes.[5] One could name a few outstanding early contributors to the improvement of machine tooling in the first decades of the nineteenth century: Arthur Woolf in Cornwall, or John Hall at the U.S.
Harper’s Ferry Armory. But the steady development of improvements in boilers and other steam engine parts also involved the collective action of thousands of handcraft workers. Accustomed to building liquor stills, clocks, or scientific instruments, they gradually developed the techniques and rules of thumb needed for precision metalworking for large machines.[6] These changes did not impress Watt, and he stood by his anti-high-pressure position until his death in 1819. Two men would lead the way in rebelling against his strictures. The first appeared in the United States, far from Watt’s zone of influence, and paved the way for the conquest of the Western waters.

Oliver Evans

Oliver Evans was born in Delaware in 1755. He first honed his mechanical skills as an apprentice wheelwright. Around 1783, he began constructing a flour mill with his brothers on Red Clay Creek in northern Delaware. Hezekiah Niles, a boy of six, lived nearby. Niles would become the editor of the most famous magazine in America, from which post he later had occasion to recount that “[m]y earliest recollections pointed him out to me as a person, in the language of the day, that ‘would never be worth any thing, because he was always spending his time on some contrivance or another…’”[7]

Two great “contrivances” dominated Evans’ adult life. The challenges of the mill work at Red Clay Creek led to his first great idea: an automated flour mill. He eliminated most of the human labor from the mill by linking together the grain-processing steps with a series of water-powered machines (the most famous and delightfully named being the “hopper boy”). Though fascinating in its own right, for the purposes of our story the automated mill only matters in so far as it generated the wealth which allowed him to invest in his second great idea: an engine driven by high-pressure steam.

Figure 2: Evans’ automated flour mill.

In 1795, Evans published an account of his automatic mill entitled The Young Mill-Wright and Miller’s Guide. Something of his personality can be gleaned from the title of his 1805 sequel on the steam engine: The Abortion of the Young Steam Engineer’s Guide. A bill to extend the patent on his automatic flour mill failed to pass Congress in 1805, and so he published his Abortion as a dramatic swoon, a loud declaration that, in response to this rebuff, he would be taking his ball and going home:

His [i.e., Evans’] plans have thus proved abortive, all his fair prospects are blasted, and he must suppress a strong propensity for making new and useful inventions and improvements; although, as he believes, they might soon have been worth the labour of one hundred thousand men.[8]

Of course, despite these dour mutterings, he failed entirely to suppress his “strong propensity”; in fact he was in the very midst of launching new steam engine ventures at this time. Like so many other early steam inventors, Evans’ interest in steam began with a dream of a self-propelled carriage.
The first tangible evidence that we have of his interest in steam power comes from patents he filed in 1787 which included mention of a “steam-carriage, so constructed to move by the power of steam and the pressure of the atmosphere, for the purpose of conveying burdens without the aid of animal force.” The mention of “the pressure of the atmosphere” is interesting—he may have still been thinking of a low-pressure Watt-style engine at this point.[9]

By 1802, however, Evans had a true high-pressure engine of about five horsepower operating at his workshop at Ninth and Market in Philadelphia. He had established himself in that city in 1792, the better to promote his milling inventions and millwright services. He attracted crowds to his shop with his demonstration of the engine at work: driving a screw mill to pulverize plaster, or cutting slabs of marble with a saw. Bands of iron held reinforcing wooden slats against the outside of the boiler, like the rim of a cartwheel or the hoops of a barrel. This curious hallmark testified to Evans’ background as a millwright and wheelwright.[10]

The boiler, of course, had to be as strong as possible to contain the superheated steam, and Evans’ later designs made improvements in this area. Rather than the “wagon” boiler favored by Watt (shaped like a Conestoga wagon or a stereotypical construction worker’s lunchbox), he used a cylinder. A spherical boiler being infeasible to make or use, this shape distributed the force of the steam pressure as evenly as practicable over the surface. In fact, Evans’ boiler consisted of two cylinders in an elongated donut shape, because rather than placing the furnace below the boiler, he placed it inside, to maximize the surface area of water exposed to the hot air. By the time of the Steam Engineer’s Guide, he no longer used copper braced with wood; he now recommended the “best” (i.e. wrought) iron “rolled in large sheets and strongly riveted together. …As cast iron is liable to crack with the heat, it is not to be trusted immediately in contact with the fire.”[11]

Figure 3: Evans’ 1812 design, which he called the Columbian Engine to honor the young United States on the outbreak of the War of 1812. Note the flue carrying heat through the center of the boiler, the riveted wrought iron plates of the boiler, and the dainty proportions of the cylinder, in comparison to that of a Newcomen or Watt engine. Pictured in the corner is the Orukter Amphibolos.

Evans was convinced of the superiority of his high-pressure design because of a rule of thumb that he had gleaned from the article “Steam” in the American edition of the Encyclopaedia Britannica: “…whatever the present temperature, an increase of 30 degrees doubles the elasticity and the bulk of water vapor.”[12] From this Evans concluded that heating steam to twice the boiling point (from 210 degrees to 420) would increase its elastic force by 128 times (since a 210 degree increase in temperature would make seven doublings). This massive increase in power would require only twice the fuel (to double the heat of the steam). None of this was correct, but it would not be the first or last time that faulty science would produce useful technology.[13] Nonetheless, the high-pressure engine did have very real advantages. Because the power generated by an engine was proportional to the area of the piston times the pressure exerted on that piston, for any given horsepower a high-pressure engine could be made much smaller than its low-pressure equivalent.
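A rough sketch of the arithmetic behind these claims, using the figures just quoted (this traces Evans’ borrowed rule of thumb, not the real behavior of steam): if the elastic force doubles with every 30-degree rise, then going from 210 to 420 degrees gives

$$\frac{420 - 210}{30} = 7 \text{ doublings}, \qquad 2^{7} = 128.$$

And since power scales roughly as pressure times piston area, $P \propto p \cdot A$, an engine working at many times atmospheric pressure can deliver the same power from a far smaller cylinder; that, rather than the imagined 128-fold gain in force, was the real advantage of the design.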
A high-pressure engine also did not require a condenser: it could vent the spent steam directly into the atmosphere. These factors made Evans’ engines smaller, lighter, simpler, and less expensive to build. A non-condensing high-pressure engine of twenty-four horsepower weighed half a ton and had a cylinder nine inches across. A traditional Boulton & Watt style engine of the same power had a cylinder three times as wide and weighed four times as much overall.[14] Such advantages in size and weight would count doubly for an engine used in a vehicle, i.e. an engine that had to haul itself around.

In 1804 Evans sold an engine that was intended to drive a New Orleans steamboat, but it ended up in a sawmill instead. This event could serve as a metaphor for his relationship to steam transportation. He declared in his Steam Engineer’s Guide that:

The navigation of the river Mississippi, by steam engines, on the principles here laid down, has for many years been a favourite object with the author and among the fondest wishes of his heart. He has used many endeavours to produce a conviction of its practicability, and never had a doubt of the sufficiency of the power.[15]

But steam navigation never got much more than his fondest wishes. Unlike a Fitch or a Rumsey, the desire to make a steamboat did not dominate his dreams and waking hours alike. By 1805, he was a well-established man of middle years. If he had ever possessed the Tookish spirit required for riverboat adventures, he had since lost it. He had already given up on the idea of a steam carriage, after failing to sell the Lancaster Turnpike Company on the idea in 1801. His most grandiosely named project, the Orukter Amphibolos, may briefly have run on wheels en route to serve as a steam dredge in the Philadelphia harbor. If it functioned at all, though, it was by no means a practical vehicle, and it had no sequel. Evans’ attention had shifted to industrial power, where the clearest financial opportunity lay—an opportunity that could be seized without leaving Philadelphia.

Despite Evans’ calculations (erroneous, as we have said), a non-condensing high-pressure engine was somewhat less fuel-efficient than an equivalent Watt engine, not more. But because of its size and simplicity, it could be built at half the cost, and transported more cheaply, too. In time, therefore, the Evans-style engine became very popular as a mill or factory engine in the capital- and transportation-poor (but fuel-rich) trans-Appalachian United States.[16] In 1806, Evans began construction on his “Mars Works” in Philadelphia, to serve the market for engines and other equipment. Evans engines sprouted up at sawmills, flour mills, paper factories, and other industrial enterprises across the West. Then, in 1811, he organized the Pittsburgh Steam Engine Company, operated by his twenty-three-year-old son George, to reduce transportation costs for engines to be erected west of the Alleghenies.[17] It was around that nexus of Pittsburgh that Evans’ inventions would find the people with the passion to put them to work, at last, on the rivers.

The Rise of the Western Steamboat

The mature Mississippi paddle steamer differed from its Eastern antecedents in two main respects. First, in its overall shape and layout: a roughly rectangular hull with a shallow draft, layer cake decks, and machinery above the water, not under it. This design was better adapted to an environment where snags and shallows presented a much greater hazard than waves and high winds.
Second, in the use of a high-pressure engine, or engines, with a cylinder mounted horizontally along the deck. Many historical accounts attribute both of these essential developments to a keelboatman named Henry Miller Shreve. Economic historian Louis Hunter effectively demolished this legend in the 1940s, but more recent writers (for example Shreve’s 1984 biographer, Edith McCall) have continued to perpetuate it. In fact, no one can say with certainty where most of these features came from because no one bothered to document their introduction. As Hunter wrote:

From the appearance of the first crude steam vessels on the western waters to the emergence of the fully evolved river steamboat a generation later, we know astonishingly little of the actual course of technological events and we can follow what took place only in its broad outlines. The development of the western steamboat proceeded largely outside the framework of the patent system and in a haze of anonymity.[18]

Some documents came to light in the 1990s, however, that have burned away some of the “haze” with respect to the introduction of high-pressure engines.[19] The papers of Daniel French reveal that the key events happened in a now-obscure place called Brownsville (originally known as Redstone), about forty miles up the Monongahela from that vital center of western commerce, Pittsburgh. Brownsville was the point where anyone heading west on the main trail over the Alleghenies—which later became part of the National Road—would first reach navigable waters in the Mississippi basin.

Henry Shreve grew up not far from this spot. Born in 1785 to a father who had served as a Colonel in the Revolutionary War, he was raised on a farm near Brownsville on land leased from Washington: one of the general’s many western land-development schemes.[20] Henry fell in love with the river life, and by his early twenties had established himself with his own keelboat operating out of Pittsburgh. He made his early fortune off the fur trade boom in St. Louis, which took off after Lewis and Clark returned with reports of widespread beaver activity on the Missouri River.[21]

In the fall of 1812, a newcomer named Daniel French arrived in Shreve’s neighborhood—a newcomer who already had experience building steam watercraft, powered by engines based on the designs of Oliver Evans. French was born in Connecticut in 1770, and started planning to build steamboats in his early 20s, perhaps inspired by the work of Samuel Morey, who operated upstream of him on the Connecticut River. But, discouraged from his plans by the local authorities, French turned his inventive energies elsewhere for a time. He met and worked with Evans in Washington, D.C., to lobby Congress to extend the length of patent grants, but did not return to steamboats until Fulton’s 1807 triumph re-energized him. At this point he adopted Evans’ high-pressure engine idea, but added his own innovation, an oscillating cylinder that pivoted on trunnions as the engine worked. This allowed the piston shaft to be attached to the stern wheel with a simple (and light) crank, without any flywheel or gearing. The small size of the high-pressure cylinder made it feasible to put the cylinder in motion. In 1810, a steam ferry he designed, for a route from Jersey City to Manhattan, successfully crossed and recrossed the North (Hudson) River at about six miles per hour.
Nonetheless, Fulton, who still held a New York state monopoly, got the contract from the ferry operators.[22] French moved to Philadelphia and tried again, constructing the steam ferry Rebecca to carry passengers across the Delaware. She evidently did not produce great profits, because a frustrated French moved west again in the fall of 1812, to establish a steam-engine-building business at Brownsville.[23] His experience with building high-pressure steamboats—simple, relatively low-cost, and powerful—had arrived at the place that would benefit most from those advantages, a place, moreover, where the Fulton-Livingston interests held no legal monopoly.

News about the lucrative profits of the New Orleans on the Natchez run had begun to trickle back up the rivers. This was sufficient to convince the Brownsville notables—Shreve among them—to put up $11,000 to form the Monongahela and Ohio Steam Boat Company in 1813, with French as their engineer. French had their first boat, Enterprise, ready by the spring of 1814. Her exact characteristics are not documented, but based on the fragmentary evidence, she seems in effect to have been a motorized keelboat: 60-80’ long, about 30 tons, and equipped with a twenty-horsepower engine. The power train matched that of French’s 1810 steam ferry, trunnions and all.[24]

The Enterprise spent the summer trading along the Ohio between Pittsburgh and Louisville. Then, in December, she headed south with a load of supplies to aid in the defense of New Orleans. For this important voyage into waters mostly unknown to the Brownsville circle, they called on the experienced keelboatman, Henry Shreve. Andrew Jackson had declared martial law, and kept Shreve and the Enterprise on military duty in New Orleans. With Jackson’s aid, Shreve dodged the legal snares laid for him by the Fulton-Livingston group to protect their New Orleans monopoly. Then in May, after the armistice, he brought the Enterprise on a 2,000-mile ascent back to Brownsville, the first steamboat ever to make such a journey.

Shreve became an instant celebrity. He had contributed to a stunning defeat for the British at New Orleans, and carried out an unprecedented voyage. Moreover, he had confounded the monopolists: their attempt to assert exclusive rights over the commons of the river was deeply unpopular west of the Appalachians. Shreve capitalized on his new-found fame to raise money for his own steamboat company in Wheeling, Virginia. The Ohio at Wheeling ran much deeper than the Monongahela at Brownsville, and Shreve would put this depth to use: he had ambitions to put a French engine into a far larger boat than the Enterprise.

Spurring French to scale up his design was probably Shreve’s largest contribution to the evolution of the western steamboat. French dared not try to repeat his oscillating cylinder trick on the larger cylinder that would drive Shreve’s 100-horsepower, 400-ton two-decker. Instead, he fixed the cylinder horizontally to the hull, and then attached the piston rod to a connecting rod, or “pitman,” that drove the crankshaft of the stern paddle wheel. He thus transferred the oscillating motion from the piston to the pitman, while keeping the overall design simple and relatively low cost.[25] Shreve called his steamer Washington, after his father’s (and his own) hero. Her maiden voyage in 1817, however, was far from heroic.
Evans would have assured French that the high-pressure engine carried little risk: as he wrote in the Steam Engineer’s Guide, “we know how to construct [boilers] with a proportionate strength, to enable us to work with perfect safety.”[26] Yet on her first trip down the Ohio, with twenty-one passengers aboard, the Washington’s boiler exploded, killing seven passengers and three crew. The blast threw Shreve himself into the river, but he did not suffer serious harm.[27] Ironically, the only steamboat built by the Evans family, the Constitution (née Oliver Evans), suffered a similar fate in the same year, exploding and killing eleven on board.

Despite Evans’ confidence in their safety, boiler accidents continued to bedevil steamboats for decades. Though the total number killed was not enormous—about 1500 dead across all Western rivers up to 1848—each event provided an exceptionally grisly spectacle. Consider this lurid account of the explosion of the Constitution:

One man had been completely submerged in the boiling liquid which inundated the cabin, and in his removal to the deck, the skin had separated from the entire surface of his body. The unfortunate wretch was literally boiled alive, yet although his flesh parted from his bones, and his agonies were most intense, he survived and retained all his consciousness for several hours. Another passenger was found lying aft of the wheel with an arm and a leg blown off, and as no surgical aid could be rendered him, death from loss of blood soon ended his sufferings. Miss C. Butler, of Massachusetts, was so badly scalded, that, after lingering in unspeakable agony for three hours, death came to her relief.[28]

In response to continued public outcry for an end to such horrors, Congress eventually stepped in, passing acts to improve steamboat safety in 1838 and 1852.

Meanwhile, Shreve was not deterred by the setback. The Washington itself did not suffer grievous damage, so he corrected a fault in the safety valves and tried again. Passengers were understandably reluctant for an encore performance, but after the Washington made national news in 1817 with a freight passage upriver from New Orleans in just twenty-five days, the public quickly forgot and forgave. A few days later, a judge in New Orleans refused to consider a suit by the Fulton-Livingston interests against Shreve, effectively nullifying their monopoly.[29] Now all comers knew that steamboats could ply the Mississippi successfully, and without risk of any legal action. The age of the western steamboat opened in earnest.

By 1820, sixty-nine steamboats could be found on western rivers, and 187 a decade after that.[30] Builders took a variety of approaches to powering these boats: low-pressure engines, engines with vertical cylinders, engines with rocking beams or flywheels to drive the paddles.
Not until the 1830s did a dominant pattern take hold, but when it did, it was that of the Evans/French/Shreve lineage, as found on the Washington: a high-pressure engine with a horizontal cylinder driving the wheel through an oscillating connecting rod.[31]

Figure 4: A Tennessee river steamboat from the 1860s. The distinctive features include a flat-bottomed hull with very little freeboard, a superstructure to hold passengers and crew, and twin smokestacks. The western steamboat had achieved this basic form by the 1830s and maintained it into the twentieth century.

The Legacy of the Western Steamboat

The Western steamboat was a product of environmental factors that favored the adoption of a shallow-drafted boat with a relatively inefficient but simple and powerful engine: fast, shallow rivers; abundant wood for fuel along the shores of those rivers; and the geographic configuration of the United States after the Louisiana Purchase, with a high ridge of mountains separating the coast from a massive navigable inland watershed. But, Escher-like, the steamboat then looped back around to reshape the environment from which it had emerged. Just as steam-powered factories had, steam transport flattened out the cycles of nature, bulldozing the hills and valleys of time and space. Before the Washington’s journey, the shallow grade that distinguished upstream from downstream dominated the life of any traveler or trader on the Mississippi. Now goods and people could move easily upriver, in defiance of the dictates of gravity.[32] By the 1840s, steamboats were navigating well inland on other rivers of the West as well: up the Tombigbee, for example, over 200 miles inland to Columbus, Mississippi.[33]

What steamboats alone could not do to turn the western waters into turnpike roads, Shreve and others would impose on them through brute force. Steamboats frequently sank or took major damage from snags or “sawyers”: partially submerged tree limbs or trunks that obstructed the waterways. In some places, vast masses of driftwood choked the entire river. Beyond Natchitoches, the Red River was obstructed for miles by an astonishing tangle of such logs known as the Great Raft.[34]

Figure 5: A portrait of Shreve of unknown date, likely the 1840s. The scene outside the window reveals one of his snagboats, a frequently used device in nineteenth century portraits of inventors.

Not only commerce was at stake in clearing the waterways of such obstructions; steamboats would be vital to any future war in the West. As early as 1814, Andrew Jackson had put Shreve’s Enterprise to good use, ferrying supplies and troops around the Mississippi delta region.[35] With the encouragement of the Monroe administration, therefore, Congress stepped in with a bill in 1824 to fund the Army’s corps of engineers to improve the western rivers.
Shreve was named superintendent of this effort, and secured federal funds to build snagboats such as the Heliopolis, twin-hulled behemoths designed to drive a snag between its hulls and then winch it up onto the middle deck and saw it down to size. Heliopolis and its sister ships successfully cleared large stretches of the Ohio and Mississippi.[36] In 1833, Shreve embarked on the last great venture of his life: an assault on the Great Raft itself. It took six years and a flotilla of rafts, keelboats and steamboats to complete the job, including a new snagboat, Eradicator, built specially for the task.[37] The clearing of waterways, technical advancements in steamboat design, and other improvements (such as the establishment of fuel depots, so that time was not wasted stopping to gather wood), combined to drive travel times along the rivers down rapidly. In 1819, the James Ross completed the New Orleans to Louisville passage in sixteen-and-a-half days. In 1824 the President covered the same distance in ten-and-a-half days, and in 1833 the Tuscorora clocked a run of seven days, six hours. These ever-decreasing record times translated directly into ever-decreasing shipping rates. Early steamboats charged upstream rates equivalent to those levied by their keelboat competitors: about five dollars per hundred pounds carried from New Orleans to Louisville. By the early 1830s this had dropped to an average of about sixty cents per 100 pounds, and by the 1840s as low as fifteen cents.[38] By decreasing the cost of river trade, the steamboat cemented the economic preeminence of New Orleans. Cotton, sugar, and other agricultural goods (much of it produced by slave labor) flowed downriver to the port, then out to the wider world; manufactured goods and luxuries like coffee arrived from the ocean trade and were carried upriver; and human traffic, bought and sold at the massive New Orleans slave market, flowed in both directions.[39] In 1820 a steamboat arrived in New Orleans about every other day. By 1840 the city averaged over four arrivals a day; by 1850, nearly eight.[40] The population of the city burgeoned to over 100,000 by 1840, making it the third-largest in the country. Chicago, its big-shouldered days still ahead of it, remained a frontier outpost by comparison, with only 5,000 residents. Figure 6: A Currier & Ives lithograph of the New Orleans levee. This represents a scene from the late nineteenth century, way past the prime of New Orleans’ economic dominance, but still shows a port bustling with steamboats. But both New Orleans and the steamboat soon lost their dominance over the western economy. As Mark Twain wrote: Mississippi steamboating was born about 1812; at the end of thirty years, it had grown to mighty proportions; and in less than thirty more, it was dead! A strangely short life for so majestic a creature.[41] Several forces connived in the murder of the Mississippi steamboat, but a close cousin lurked among the conspirators: another form of transportation enabled by the harnessing of high-pressure steam. The story of the locomotive takes us back to Britain, and the dawn of the nineteenth century.

The Era of Fragmentation, Part 1: Load Factor

By the early 1980s, the roots of what we know now as the Internet had been established – its basic protocols designed and battle-tested in real use – but it remained a closed system almost entirely under the control of a single entity, the U.S. Department of Defense. Soon that would change, as it expanded to academic computer science departments across the U.S. with CSNET. It would continue to grow from there within academia, before finally opening to general commercial use in the 1990s. But that the Internet would become central to the coming digital world, the much touted “information society,” was by no means obvious circa 1980. Even for those who had heard of it, it remained little more than a very promising academic experiment. The rest of the world did not stand still, waiting with bated breath for its arrival. Instead, many different visions for bringing online services to the masses competed for money and attention.

Personal Computing

By about 1975, advances in semiconductor manufacturing had made possible a new kind of computer. A few years prior, engineers had figured out how to pack the core processing logic of a computer onto a single microchip – a microprocessor. Companies such as Intel began to offer high-speed short-term memory on chips as well, to replace the magnetic core memory of previous generations of computers. This brought the most central and expensive parts of the computer under the sway of Moore’s Law, which, in turn, drove the unit price of chip-based computing and memory relentlessly downward for decades to come. By the middle of the decade, this process had already brought the price of these components low enough that a reasonably comfortable middle-class American might consider buying and building a computer of his or her own. Such machines were called microcomputers (or, sometimes, personal computers).

The claim to the title of the first personal computer has been fiercely contested, with some looking back as far as Wes Clark’s LINC or the Lincoln Labs TX-0, which, after all, were wielded interactively by a single user at a time. Putting aside strict questions of precedence, any claimant to significance based on historical causality must concede to one obvious champion. No other machine had the catalytic effect that the MITS Altair 8800 had, in bringing about the explosion of microcomputing in the late 1970s.

The Altair 8800, atop optional 8-inch floppy disk unit

The Altair fell into the electronic hobbyist community like a seed crystal. It convinced hobbyists that it was possible for a person to build and own their own computer at a reasonable price, and they coalesced into communities to discuss their new machines, like the Homebrew Computer Club in Menlo Park. Those hobbyist cells then launched the much wider wave of commercial microcomputing based on mass-produced machines that required no hardware skills to bring to life, such as the Apple II and Radio Shack TRS-80. By 1984, 8% of U.S. households had their own computer, a total of some seven million machines1. Meanwhile, businesses were acquiring their own fleets of personal computers at the rate of hundreds of thousands per year, mostly the IBM 5150 and its clones2.
At the higher end of the price range for single-user computers, a growing market had also appeared for workstations from the likes of Silicon Graphics and Sun Microsystems – beefier computers equipped as standard with high-end graphical displays and networking hardware, intended for use by scientists, engineers and other technical specialists. None of these machines would be invited to play in the rarefied world of ARPANET. Yet many of their users wanted access to the promised fusion of computers and communications that academic theorists had been talking up in the popular press since Taylor and Licklider’s 1968 “Computer As a Communication Device,” and even before. As far back as 1966, computer scientist John McCarthy had promised in Scientific American that “[n]o stretching of the demonstrated technology is required to envision computer consoles installed in every home and connected to public-utility computers through the telephone system.” The range of services such a system could offer, he averred, would be impossible to enumerate, but he put forth a few examples: “Everyone will have better access to the Library of Congress than the librarian himself now has. …Full reports on current events, whether baseball scores, the smog index in Los Angeles or the minutes of the 178th meeting of the Korean Truce Commission, will be available for the asking. Income tax returns will be automatically prepared on the basis of continuous, cumulative annual records of income, deductions, contributions and expenses.”

Articles in the popular press described the possibilities for electronic mail, digital games, services of all kinds from legal and medical advice to online shopping. But how, practically, would all these imaginings take shape? Many answers were in the offing. In hindsight, this era bears the aspect of a broken mirror. All of the services and concepts that would characterize the commercial internet of the 1990s – and then some – were manifest in the 1980s, but in fragments, scattered piecemeal across dozens of different systems. With a few exceptions3, these systems did not interconnect; each stood isolated from the others, a “walled garden,” in later terminology. Users on one system had no way to communicate or interact with those on another, and the quest to attract more users was thus for the most part a zero-sum game. In this installment, we’ll consider one set of participants in this new digital land grab, time-sharing companies looking to diversify into a new market with attractive characteristics.

Load Factor

In 1892, Samuel Insull, a protégé of Thomas Edison, headed west to lead a new branch of Edison’s electrical empire, the Chicago Edison Company. There he consolidated many of the core principles of modern utility management, among them the concept of the load factor – the average load on the electrical system divided by its highest load. The higher the load factor the better, because any deviation below 1/1 represents waste – expensive capital capacity that’s needed to handle the peak of demand, but left idle in the troughs. Insull therefore set out to fill in the troughs in the demand curve by developing new classes of customers that would use electricity at different times of day (or even in different seasons), even if it meant offering them discounted rates. In the early years of electrical power, the primary demand came from domestic lighting, with most demand in the evening. So Insull promoted its use for industrial machinery to increase daytime use.
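To make the ratio concrete with purely illustrative numbers (these are not Insull’s actual figures): a plant that peaks at 10,000 kilowatts in the evening but averages only 4,000 kilowatts over the day has

$$\text{load factor} = \frac{\text{average load}}{\text{peak load}} = \frac{4{,}000}{10{,}000} = 0.4,$$

meaning 60 percent of its costly generating capacity sits idle on average. Every new daytime customer raises the average load without raising the peak, pushing the ratio back toward 1.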
This still left dips in the morning and evening rush, so he convinced the Chicago streetcar systems to convert to electrical traction. And so Insull maximized the value of his capital investments, even though it often meant offering lower prices.[^hughes]

Insull in 1926, when he was pictured on the cover of Time magazine.

[^hughes]: Thomas P. Hughes, Networks of Power (1983), 216-225.

The same principles still applied to capital investments in computers nearly a century later, and it was exactly the desirability of a balanced load factor and the incentive for offering lower off-peak prices that made possible two new online services for microcomputers that launched nearly simultaneously in the summer of 1979: CompuServe and The Source.

CompuServe

In 1969, the newly-formed Golden United Life Insurance company of Columbus, Ohio created a subsidiary called the Compu-Serv Network. The founder of Golden United wanted it to be a cutting-edge, high-tech company with computerized records, and so he had hired a young computer science grad named John Goltz to lead the effort. Goltz, however, was gulled by a DEC salesman into buying a PDP-10, an expensive machine with far more computer power than Golden United currently needed. The idea behind Compu-Serv was to turn that error into an opportunity, by selling the excess computer power to paying customers who would dial into the Compu-Serv PDP-10 via a remote terminal. In the late 1960s this time-sharing model for selling computer service was spreading rapidly, and Golden United wanted to get its own cut of the action. In the 1970s the time-sharing subsidiary spun off to operate independently, re-branded itself as CompuServe, and built its own packet-switching network in order to be able to offer affordable, nationwide access to its computer centers in Columbus.

A national market not only gave the company access to more potential customers, it also extended the demand curve for computer time by spreading it across four time zones. Nonetheless, there was still a large gulf of time between the end of business hours in California and the start of business on the East Coast, not to mention the weekends. CompuServe CEO Jeff Wilkins saw an opportunity in the growing fleet of home computers, many of whose owners whiled away their evening and weekend hours on their electronic hobby. What if they were offered access to email, message boards, and games on CompuServe computers, at discounted rates for evening and weekend access ($5 an hour, versus $12 during the work day4)?

So Wilkins launched a trial of a service he called MicroNET (intentionally held at arm’s length from the main CompuServe brand), and after a slow start it gradually proved a resounding success. Because of CompuServe’s national data network, most users only had to dial a local number to reach MicroNET, and thus avoided long-distance telephone charges, despite the fact that the actual computers they were connecting to resided in Ohio. His experiment having proved itself, Wilkins dropped the MicroNET name and folded the service under the CompuServe brand. Soon the company began to offer services tailored to the needs of microcomputer users, such as games and other software available for sale on-line.

But by far the most popular services were the communications platforms. For long-lived public content and discussions there were the forums, ranging across every topic from literature to medicine, from woodworking to pop music.
Forums were generally left to their own devices by CompuServe, being administered and moderated by ordinary users who took on the role of “sysops” for each forum. The other main communications platform was the “CB Simulator”, coded up over a weekend by Sandy Trevor, a CompuServe executive. Named after citizens band (CB) radio, a popular hobby at the time, it allowed users to have text-based chats in real time in dedicated channels, a similar model to the ‘talk’ programs offered on many time-sharing systems. Many dedicated users would hang out for hours on CB Simulator, shooting the breeze, making friends, or even finding lovers.

The Source

Hot on the heels of MicroNET – launching just eight days later in July of 1979 – came another on-line service for microcomputers that arrived at essentially the same place as Jeff Wilkins, despite starting from a very different angle. William (Bill) Von Meister, a son of German immigrants whose father had helped establish zeppelin service between Germany and the U.S., was a serial entrepreneur. He no sooner got some new enterprise off the ground than he lost interest, or was forced out by disgruntled financial backers. He could not have been more different from the steady Wilkins. As of the mid-1970s, his greatest successes to date were in electronic communications – Telepost, a service which sent messages across the country electronically to the switching center nearest its recipient, and then covered the last mile via next-day mail; and TDX, which used computers to optimize the routing of telephone calls, reducing the cost of long-distance telephone service within large businesses.

Von Meister, having predictably lost interest in TDX, found a new enthusiasm in the late 1970s: Infocast, which he planned to launch in McLean, Virginia. In effect, it was an extension of the Telepost concept, except instead of using mail for the last-mile delivery, he would use the FM radio sideband (basically the same mechanism that’s used to transmit station identification, artist, and song title to the screens of modern radios) to deliver digital data to computer terminals. In particular, he planned to target highly distributed businesses with lots of locations that needed regular information updates from their central office, such as banks, insurance companies, and grocery stores.

Bill Von Meister

But what Von Meister really wanted to build was a national network to deliver data into homes, to terminals by the millions, not thousands. Convincing a business to spend $1000 on a special FM receiver and terminal was one thing; to ask the same of consumers was quite another matter. So Von Meister went casting about for another means to deliver news, weather, and other information into homes; and he found it, in the hundreds of thousands of microcomputers that were sprouting like mushrooms in American offices and dens, in homes ready-equipped with telephone connections. He partnered with Jack Taub, a deep-pocketed and well-connected businessman who loved the concept and wanted to invest. Taub and Von Meister initially called the new service CompuCom, a mix of truncation and compounding typical for a computer company of the day, but later settled on a much more abstract and visionary name – The Source.

The main problem they faced was a lack of any technical infrastructure with which to deliver this vision.
To get it they partnered with two companies that had, collectively, the same resources as CompuServe – time-shared computers and a national data communications network, both of which sat mostly idle on evenings and weekends. Dialcom, headquartered across the Potomac in Silver Spring, Maryland, provided the computing muscle. Like CompuServe, it had begun in 1970 as a time-sharing service,5 though by the end of the decade it offered many other digital services. Telenet, the packet-switched network spun off by Bolt, Beranek and Newman earlier in the decade, provided the communications infrastructure. By paying discounted rates to Dialcom and Telenet for off-peak service, Taub and Von Meister were able to offer access to The Source for $2.75 an hour on nights and weekends, after an initial $100 membership fee.6

Other than the pricing structure, the biggest difference between The Source and CompuServe was how they expected people to use their systems. The early services that CompuServe offered, such as email, the forums, CB, and the software exchange, generally assumed that users would form their own communities and build their own superstructures atop a basic hardware and software foundation, much like corporate users of time-sharing systems. Taub and Von Meister, however, had no cultural background in time-sharing. Their business plan centered around providing large amounts of information for the upscale, professional consumer: a New York Times database, United Press International news wires, stock information from Dow Jones, airline pricing, local restaurant guides, wine lists. Perhaps the single most telling detail was that Source users were welcomed by a menu of service options on log-in, CompuServe users by a command line.

In keeping with the personality differences between Wilkins and Von Meister, the launch of The Source was as grandiose as MicroNET’s was subtle, including a guest appearance by Isaac Asimov to announce the arrival of science fiction become science fact. Likewise in keeping with Von Meister’s personality and his past, his tenure at The Source would not be lengthy. The company immediately ran into financial difficulties due to his massive overspending. Taub and his brother had a large enough ownership share to oust Von Meister, and they did just that in October of 1979, just a few months after the launch party.

The Decline of Time-Sharing

The last company to enter the microcomputing market due to the logic of load factor was General Electric Information Services (GEIS), a division of the electrical engineering giant. Founded in the mid-1960s, when GE was still trying to compete in the computer manufacturing business, GEIS was conceived as a way to try to outflank IBM’s dominant position in computer sales. Why buy from them, GE pitched, when you can rent from us? The effort made little dent in IBM’s market share, but made enough money to receive continued investment into the 1980s, by which point GEIS owned a worldwide data network and two major computing centers, one of them in Cleveland, Ohio, and the other in Europe.

In 1984, someone at GEIS noticed the growth of The Source and CompuServe (the latter had, by that time, over 100,000 users), and saw a way to put their computing centers to work in off-peak hours. To build their own consumer offering they recruited a CompuServe veteran, Bill Louden.
Louden, disgruntled with managers from the corporate sales side who began muscling in on the increasingly lucrative consumer business, had jumped ship with a group of fellow defectors to try to build their own online service in Atlanta, called Georgia OnLine. They tried to turn the lack of access to a national data network into a virtue, by offering services tailored for the local market, such as an events guide and classified ads, but the company went bust, so Louden was very receptive to the offer from GEIS.

Louden called the new service GEnie, a backronym for General Electric Network for Information Exchange. It offered all of the services that The Source and CompuServe had by now made table stakes in the market – a chat application (CB simulator), bulletin boards, news, weather, and sports information.

GEnie was the last personal computing service born out of the time-sharing industry and the logic of the load factor. By the mid-1980s, the entire economic balance of power had begun to shift. As small computers proliferated in the millions, offering digital services to the mass market became a more and more enticing business in its own right, rather than simply a way to leverage existing capital. In the early days, The Source and CompuServe were tiny, with only a few thousand subscribers each in 1980. A decade later, millions of subscribers paid monthly for on-line services in the U.S. – with CompuServe at the forefront of the market, having absorbed its erstwhile rival, The Source. The same process also made time-sharing less attractive to businesses – why pay all the telecommunications costs and overhead of accessing a remote computer owned by someone else, when it was becoming so easy to equip your own office with powerful machines? Not until fiber optics drove the unit cost of communications into the ground would this logic reverse direction again.

Time-sharing companies were not the only route to the consumer market, however. Rather than starting with mainframe computers and looking for places to put them to work, others started from the appliance that millions already had in their homes, and looked for ways to connect it to a computer.

High Pressure, Part 2: The First Steam Railway

Railways long predate the steam locomotive. Trackways with grooves to keep a wheeled cart on a fixed path date back to antiquity (such as the Diolkos, which could carry a naval vessel across the Isthmus of Corinth on a wheeled truck). The earliest evidence for carts running atop wooden rails, though, comes from the mining districts of sixteenth-century Europe. Agricola describes a kind of primitive railway used by German miners in his 1556 treatise De Re Metallica. He reports that the miners ran trucks called Hunds (“dogs,” supposedly because of the barking noise they made while in motion) over two parallel wooden planks. A metal pin protruding down from the truck into the gap between the planks kept it from rolling off the track.[1] This system allowed a laborer to carry far more material out of the mine in a single trip than they could by carrying it themselves.

British Railways

Wooden railways called “waggon ways” are first attested in the coal-mining areas of Britain around 1600. These differed in two important ways from earlier mining carts: first, they ran outside the mine, carrying coal a short distance (perhaps a mile or two) to the nearest high-quality road or navigable waterway from which it could be brought to market. Second, they were drawn by horses, at least on the uphill courses—on some eighteenth-century waggon ways, the horse actually caught a ride downhill, standing on a flat carriage behind the cart. Flanged wheels to keep the wagon on the track were also probably introduced around this time. Both wheels and rails were still constructed of wood, however, which limited the load the wagons could carry.[2]

By the middle of the eighteenth century, waggon ways crisscrossed the mining districts of northern England, especially around the coalfields, creating a substantial trade in birch wheels and rails of beech or ash from the South. They were called by many different names, such as “gangways,” “plateways,” “tramways,” or “tramroads.” Colliers invested sophisticated engineering into their design, using bridges, causeways, and tunnels to create a smooth grade from the pithead to the point of embarkation (such as the Tyne or the Severn rivers).[3] Most were no more than a mile or two long, but some ran as far as ten miles. They were smooth enough that a single horse could haul several times on rails what it could on an ordinary eighteenth-century road: the figures given by various sources for the load of a horse-drawn rail carriage range from two to ten tons, likely depending on the grade of the railway and the material composition of the rails and wheels.[4]

The Little Eaton Gangway, a railway built in the 1790s that, incredibly, continued to operate until 1908, when this photo was taken. It carried coal five miles down to the Derby Canal.

This close-up of the Little Eaton Gangway shows clearly the design of the railbed, with L-shaped rails to hold the wagon on the track, and stone blocks underneath to which they were nailed. The Penydarren railway, discussed below, had the same design.

This may seem prologue enough, but two further milestones in the development of railways still intervened before the steam locomotive came into the picture. Around the late 1760s, the Darbys of Coalbrookdale step into our history once more.
They are reputed to have been the first to introduce durable cast iron plates to strengthen the rails that they used to carry materials among their various Shropshire properties.[5] Later the Darbys and others introduced fully cast-iron rails, doing away with wood altogether. With this change in material the railways of England (already intimately linked with coal mining) now became fully enmeshed in the cycle of the triumvirate—coal, iron, and steam—well before they became steam-powered.

Then, in 1799, came the first public horse-drawn railway. Up to this time, all railways served the needs of a single owner (though some required an easement across neighboring properties), typically a mining concern. But the Surrey Iron Railway, which ran from Croydon (south of London) up to the Thames at Wandsworth, was open to any paying cargo, much like a turnpike road or a canal. Among the backers of the Surrey Iron Railway was a Midlands colliery owner, William James, who will have an important part to play later in our story.[6]

So, although we think of them now as two components of a single technological system, the locomotive and the railway did not start out that way. Instead, the locomotive appeared on the scene as an alternative way of hauling freight over an already familiar and well-established transportation medium.

Trevithick

Richard Trevithick was the first Englishman to attempt this substitution. He was born in 1771, in the heart of the copper-mining region of Cornwall. His birthplace, the village of Illogan, sat beneath the weathered hill of Carn Brea, said to be the ancient dwelling place of a giant.[7] But the only giants still found upon the landscape of eighteenth-century Cornwall breathed steam. They sheltered in the stone engine houses that still dot the countryside today, and raised water from the bottom of the mine, allowing the proprietors to delve ever deeper into the earth.

Trevithick’s father was a mine “captain,” a high-status position with the responsibilities of a general manager and some of the same cachet among the mining community as a sea captain would have in a nautical community. This included the privilege of an honorific title: he was “Captain Trevithick” to his neighbors. The elder Trevithick’s work included serving as mine engineer and assayer, and he would have been familiar with all the technical workings of the mine, from the digging equipment to the pumping engine. The younger Trevithick must have learned well from his father. At fifteen, he was employed by his father at Dolcoath, the most lucrative copper mine of the region.
By age 21, he had grown into something of a giant himself—standing a burly six feet two, with pastimes said to include hurling sledgehammers over buildings—and the miners of Cornwall already consulted him for his expertise on steam engines.[8]

A portrait of Trevithick painted in 1816, when he was 45. He gestures to the Andes of Peru in the background, where Trevithick intended, at the time, to make his fortune in silver mining.

By the 1790s, Boulton and Watt were about as popular in Cornwall as Fulton and Livingston were in the American West, and for the same reason: they were seen as grasping monopolists who kept the miners of Cornwall, who depended on effective pumps for their livelihood, in thrall to the Watt patent. Fifteen years earlier, Watt’s efficient engines had appeared as a lifeline to copper mines suffering under competition from the prodigious Parys Mountain in Anglesey, whose ample ores could be cheaply mined directly from the surface.[9] But as the mines continued to struggle, Boulton and Watt began to take shares in mines in lieu of payment, and set up a headquarters at Cusgarne, right in the copper district, to oversee their investments. One of their most skilled mechanics, William Murdoch, moved to Cornwall and acted as their local agent. To the copper miners, Boulton and Watt began to look like meddlers as well as leeches. By the 1790s, Anglesey had run out of easy-to-reach ore, and the fortunes of the Cornwall copper mines began to look up. With their mutual enemy gone, the grudging partnership between the Cornish miners and Boulton and Watt soured rapidly.

An 1831 engraving of Dolcoath copper mine, in Cornwall.

Trevithick, a hot-headed young man, took up the banner of revolution against the Boulton and Watt regime in 1792, fighting a series of legal battles on behalf of the competing engine design of Edward Bull.
By 1796 every battle had been lost—Bull and Trevithick’s attempt to defy the Watt patent had failed, and there seemed to be nothing for the Cornwall interests to do but wait for the expiration of its term, in 1800.[10] But Trevithick found another way forward: strong steam. More than any other element, the separate condenser distinguished Watt’s patent engine from its predecessors. By shedding the condenser and operating well above atmospheric pressure instead, Trevithick could avoid claims of infringement. Concerned that releasing uncondensed steam would waste all the power of the engine, he consulted Cornwall’s resident mathematician, Davies Giddy. Giddy reassured him that he would waste a fixed amount of power equal to the weight of the atmosphere, and would gain some compensation in return by saving the power required to work an air pump and lift water into the condenser.[11] As in the U.S., then, the socioeconomic environment pushed steam engine users on the periphery toward high pressure, though in this case it was the presence of a rival patent rather than an absence of capital resources.

Trevithick saw an immediate application for high-pressure steam as a replacement for the horse whim, an animal-powered lift which worked alongside the pumping engine in many Cornish mines, usually in the same vertical shaft, to raise ore and dross from below. A few whims had been installed with Watt engines, but Trevithick’s “puffers” (so called for the visible puff of exhaust steam they released) cost less to build and transport. The compact high-pressure engine also fit much more comfortably in the engine house alongside the pumping engine than a second Watt behemoth would.

An 1806 Trevithick stationary steam engine, minus the flywheel it would have had at the time to maintain a steady motion. Note how the exhaust flue comes out of the middle of the cylindrical boiler, the same return-flue design used by Evans to extract additional heat from the hot gases of the furnace.

Trevithick’s engines thus began replacing horse whims in engine houses across Cornwall in the early 1800s.[12] The Watt interests were not happy: much later in life Trevithick claimed that Watt (probably referring in this case to the belligerent James Watt, Jr., the inventor’s son) “said to an eminent scientific character still living that I deserved hanging for bringing into use the high pressure,” presumably because of the danger of explosion.[13] One of Trevithick’s boilers, installed to drain the foundation for a corn mill in Greenwich, did in fact explode in 1803 when left unattended, and the Watts did not miss the opportunity to get in their “I told you sos” in the press.[14] In future engines Trevithick would include two safety valves, plus a plug soldered with lead as a final safety measure: if the water level fell too low, the heat would melt the solder and blow out the plug, relieving excess pressure. But Trevithick’s interest had by this time already wandered from staid industrial applications to the more romantic dream of a steam carriage.

Steam Carriage

As we have seen already several times in this story, many inventors and philosophers had dreamed the same dream, dating back well over a century. To realize how readily available the idea of a steam carriage was, we must remember that steam power’s job, in a sense, had always been to replace either horse- or water-power, and that carriages were the most ubiquitous piece of horse-powered machinery around in early modern Europe.
The first person we know of to successfully build a steam carriage (if we construe success loosely) was a French army officer named Nicolas-Joseph Cugnot. More specifically, he built a steam fardier, a cart for pulling cannon. It was a curious-looking tricycle with the boiler hanging off the front like an elephantine proboscis. Cugnot carried out some trial runs of his vehicle in 1769, but with no way to refill the boiler while in use, it had to stop every fifteen minutes to let the boiler cool, refill it, and work up steam once more. This was a curiosity without real practical value.[15]

Cugnot’s Fardier à Vapeur, preserved at the Musée des Arts et Métiers in Paris.

Trevithick probably never heard of Cugnot, but he certainly knew William Murdoch, Watt’s representative in Cornwall. Murdoch began experimenting with high-pressure steam carriages in the 1780s, and built a three-wheeled carriage that (like Cugnot’s cart) survives today in a museum. Unlike Cugnot’s vehicle, however, Murdoch’s surviving machine is a model, no more than a foot tall. Lacking the backing of his employers, who disliked strong steam and found the carriage concept unpromising if not ridiculous, Murdoch’s tinkerings did not even get as far as Cugnot’s. There is no evidence that he ever built a full-sized carriage.[16]

Murdoch’s model steam carriage.

It’s unclear why Trevithick decided to build a steam-powered vehicle—he may have been trying to develop a portable engine that could be moved between work sites under its own power. It is possible that Trevithick got the idea for a steam carriage from Murdoch, but, as we have seen, the idea was commonplace. In the execution of that idea, Trevithick went far beyond his predecessor. He began work on his steam carriage in late 1800, with the help of his cousin Andrew Vivian and several other local craftsmen. He already had in hand his high-pressure engine design, with a very favorable power-to-weight ratio compared to a Watt engine. A small and light engine was advantageous in a steamboat, but it was crucial in a land vehicle that had to rest on wheels and fit on narrow roads. He used the same return-flue boiler design as Oliver Evans had; given the distance and timing, they almost certainly arrived at this idea independently.

Many wise men of the time doubted that a self-driving wheel was even possible, arguing that it would simply spin in place without an animal with traction to pull it. Trevithick therefore felt it necessary to first disprove this theory (in an experiment probably devised by Giddy) by sitting in a chaise with his compatriots, and moving the vehicle by turning the wheels with their hands.[17] In December 1801 they went for their first steam-powered ride.
What exactly the first carriage looked like is unknown, but it was likely a simple wheeled platform with engine and boiler mounted atop it and a crude lever for steering. Years later one “old Stephen Williams” (not so old at the time) would recall:

I was a cooper by trade, and when Captain Dick [Trevithick] was making his first steam-carriage I used to go every day into John Tyack’s blacksmiths’ shop at the Weith, close by here, where they were putting it together. …In the year of 1801, upon Christmas-eve, coming on evening, Captain Dick got up steam, out in the high road… we jumped up as many as could; may be seven or eight of us. ‘Twas a stiffish hill going from the Weith up to Cambourne Beacon, but she went off like a little bird.[18]

Within days, this first carriage quite literally crashed and burned (though the burning was apparently caused by leaving the carriage unattended with the firebox lit, not by the crash itself).[19] Nonetheless, Trevithick formed a partnership with his cousin Vivian to develop both the high-pressure engine and its use in carriages, and they went to London to seek a patent and additional backers and advisers, including such scientific luminaries as Humphry Davy and Count Rumford. They had a second carriage built, this one designed as a true passenger vehicle with a compartment to accommodate eight. Giddy nicknamed it “Trevithick’s Dragon.” It worked better than the first attempt, running a good eight miles per hour on level ground, but the ride was rough. For some decades, steel spring suspensions had been standard on carriages, but the direct geared linkage between the drive wheels and the engine on Trevithick’s carriage did not allow them to move independently.[20] The steering mechanism also worked poorly. In one early trial Trevithick tore the rail from a garden wall, and Vivian’s relative Captain Joseph Vivian (actually a sea captain) reported after a drive that he “thought he was more likely to suffer shipwreck on the steam-carriage than on board his vessel…”[21] It offered no obvious advantages over a horse carriage to offset the loss of comfort and control, not to mention the risk of fire and explosion. The Dragon attracted some curious onlookers, but no investors.

Steam Railway

If steam-powered vehicles on water found success first in the U.S. because alternative modes of inland transportation were lacking, steam-powered vehicles on land found success first in Britain because the transportation medium to support them already existed. The railways offered the perfect solution for the problems of Trevithick’s steam carriage: a road without cobbles or ruts to jounce on, a road that steered the carriage for you, and a road with no passengers to annoy or endanger. But Trevithick was not positioned to see it, because Cornwall did not have railways of any kind (its first, the Portreath Tramroad, was not constructed until 1812). It would take a new connection to link the engine born out of the struggle with Watt over the mines of Cornwall to the rails created to solve the problems of northern coalfields.

On business in Bristol in 1803, Trevithick made that connection, when he met a Welsh ironmaster named Samuel Homfray, who provided him with fresh capital in exchange for a share in his patent, and solicited his aid in building steam engines for his ironworks, called Penydarren. It happened that Homfray also had part ownership of a railway, and the opportunity thus arose to marry high-pressure steam to rails.
For Homfray this was also an opportunity to show up a rival. He and several other ironmasters had invested in a canal to carry their wares down to the port at Cardiff, but the controlling partner, Richard Crawshay, demanded exclusive privileges over the waterway. Homfray and several of the other partners exploited a loophole to bypass Crawshay. At the time, any public thoroughfare (on land or water) required an act of Parliament to approve its construction. The act approving the Cardiff canal also allowed for the construction of railways within four miles of the canal. The intent of this was to allow for feeder lines. Rails, at the time, were a strictly secondary transportation system. They provided “last-mile” service from mining centers to a navigable waterway. A boom in canal building that began in the later eighteenth century extended and interconnected those waterways, which offered far lower transportation costs than any form of land transportation. If a horse could pull several times the weight on a railway that it could on an ordinary road, it could pull several times more again when hitched to a canal barge.[22] (The plummeting transportation costs brought about by the ability to float cargo to the coast from nearly any town in England by horse-drawn barge account for the lack of British interest in riverine steamboats.) So the goal was almost always to get goods to water as quickly as possible.

The trick that Homfray and his allies pulled was to build a railway as a primary transportation link in its own right, paralleling the canal for over nine miles, rather than connecting directly to it, and thereby neutering Crawshay’s privileges.[23] It was on this railway that Homfray (or perhaps Trevithick; which partner initiated the idea is unknown) proposed to replace horse power with steam power. Crawshay found the concept laughable. Like many of his contemporaries, he believed that the smooth wheels would find no purchase on smooth rails, and would simply spin in place. The ironmasters placed a not-so-friendly wager of 500 guineas over whether Trevithick could build a locomotive to haul ten tons of iron the length of the railway. On February 21st, 1804, Crawshay lost. As Trevithick reported to Giddy:

Yesterday we proceeded on our journey with the engine; we carry’d ten tons of Iron, five waggons, and 70 Men riding on them the whole of the journey. Its above 9 miles which we perform’d in 4 hours & 5 Mints, but we had to cut down som trees and remove some Large rocks out of road. The engine, while working, went nearly 5 miles pr hour; …We shall continue to work on the road, and shall take forty tons the next journey. The publick untill now call’d mee a schemeing fellow but now their tone is much alter’d.[24]

We should not picture the Penydarren engine in the mind’s eye as the iconic, fully-developed steam locomotive of the mid-19th century. The railbed itself looked very different than what we might imagine: the cast-iron rails were outward-facing Ls, whose vertical stroke kept the wheels from leaving the track. Nails driven into two parallel rows of stone blocks held the rails in place. This arrangement avoided having perpendicular rail ties (or sleepers, as the British call them) that could trip up the horses, who walked between the rails as they pulled their cargo. Trevithick’s locomotive resembled a stationary engine jury-rigged to a wheeled platform.
A crosshead and large gears carried power from the cylinder down to the left-hand wheels only (the right side received no power), and a flywheel kept the vehicle from lurching each time the piston reached the dead center position. Trevithick’s goal was to show off the versatility of high-pressure steam, not to launch a railroad revolution.

A replica showing what the Penydarren locomotive may have looked like. Note the fixed gearing system for delivering power to the two wheels in the foreground, the flywheel in the background, and the L-shaped rails. Notice also how much it resembles Trevithick’s stationary steam engine, with additional mechanisms to transmit power to the wheels.

The Penydarren locomotive performed several more trial runs; on at least one, the rails cracked under the engine’s weight: a portent of a major technical obstacle yet to be overcome before steam railways could find lasting success. Trevithick then seems to have removed the engine and put it to work running a hammer in the ironworks; what became of the rest of the vehicle is unknown.[25]

Many other endeavors captured Trevithick’s attention in the following years: among them stationary engines at Penydarren and elsewhere, steam dredging experiments, and a scheme to use a steam tug to drag a fireship into the midst of Napoleon’s putative invasion fleet at Boulogne (as we have seen, Robert Fulton was at this time trying to sell the British government on his “torpedoes” to serve the same purpose). In 1808, he made one last stab at steam locomotion, a demonstration vehicle called the Catch-me-who-can that ran over a temporary circular track in London. Again, rail breakage proved a problem. Trevithick hoped to earn some money from paying riders and to attract the interest of investors, but he failed on both counts.[26]

The reasons for the lack of interest are clear. Trevithick’s locomotives were neither much faster nor obviously cheaper than a team of horses, and they came with a host of new, unsolved technical problems. Twenty more years would elapse before rails would begin to seriously challenge canals as major transport arteries for Britain, not mere peripheral capillaries. To make that happen would require improvements in locomotives, better rails, and a new way of thinking about the comparative economics of transportation.

Trevithick himself had twenty-five more years of restless, peripatetic life ahead of him, much of it spent on fruitless mining ventures in South and Central America. In an irresistible historical coincidence, in 1827, at the end of a financially ruinous trip to Costa Rica, he crossed paths with another English engineer named Robert Stephenson. Stephenson gave the downtrodden older man fifty pounds to help him get home. After a spate of mostly failed or abortive projects, Trevithick died in 1833. The one item of real wealth remaining to him, a gold watch brought back from South America, went to defray his funeral expenses.[27] Young Stephenson, however, returned to much brighter prospects in England. He and his father would soon redeem the promise hinted at by the trials at Penydarren.

ARPANET, Part 2: The Packet

By the end of 1966, Robert Taylor had set in motion a project to interlink the many computers funded by ARPA, a project inspired by the “intergalactic network” vision of J.C.R. Licklider. Taylor put the responsibility for executing that project into the capable hands of Larry Roberts. Over the following year, Roberts made several crucial decisions which would reverberate through the technical architecture and culture of ARPANET and its successors, in some cases for decades to come. The first of these in importance, though not in chronology, was to determine the mechanism by which messages would be routed from one computer to another.

The Problem

If computer A wants to send a message to computer B, how does the message find its way from the one to the other? In theory, one could allow any node in a communications network to communicate with any other node by linking every such pair with its own dedicated cable. To communicate with B, A would simply send a message over the outgoing cable that connects to B. Such a network is termed fully-connected. At any significant size, however, this approach quickly becomes impractical, since the number of connections necessary increases with the square of the number of nodes.1 Instead, some means is needed for routing a message, upon arrival at some intermediate node, on toward its final destination.

As of the early 1960s, two basic approaches to this problem were known. The first was store-and-forward message switching. This was the approach used by the telegraph system. When a message arrived at an intermediate location, it was temporarily stored there (typically in the form of paper tape) until it could be re-transmitted out to its destination, or to another switching center closer to that destination.

Then the telephone appeared, and a new approach was required. A multiple-minute delay for each utterance in a telephone call to be transcribed and routed to its destination would result in an experience rather like trying to converse with someone on Mars. Instead the telephone system used circuit switching. The caller began each telephone call by sending a special message indicating whom they were trying to reach. At first this was done by speaking to a human operator, later by dialing a number which was processed by automatic switching equipment. The operator or equipment established a dedicated electric circuit between caller and callee. In the case of a long-distance call, this might take several hops through intermediate switching centers. Once this circuit was completed, the actual telephone call could begin, and that circuit was held open until one party or the other terminated the call by hanging up.

The data links that would be used in ARPANET to connect time-shared computers partook of qualities of both the telegraph and the telephone. On the one hand, data messages came in discrete bursts, like the telegraph, unlike the continuous conversation of a telephone. But these messages could come in a variety of sizes for a variety of purposes, from console commands only a few characters long to large data files being transferred from one computer to another. If the latter suffered some delays in arriving at their destination, no one would particularly mind. But remote interactivity required very fast response times, rather like a telephone call.

One important difference between computer data networks and both the telephone and the telegraph was the error-sensitivity of machine-processed data.
A single character in a telegram changed or lost in transmission, or a fragment of a word dropped in a telephone conversation, were matters unlikely to seriously impair human-to-human communication. But if noise on the line flipped a single bit from 0 to 1 in a command to a remote computer, that could entirely change the meaning of that command. Therefore every message would have to be checked for errors, and re-transmitted if any were found. Such repetition would be very costly for large messages, which would be all the more likely to be disrupted by errors, since they took longer to transmit. A solution to these problems was arrived at independently on two different occasions in the 1960s, but the later instance was the first to come to the attention of Larry Roberts and ARPA.

The Encounter

In the fall of 1967, Roberts arrived in Gatlinburg, Tennessee, hard by the forested peaks of the Great Smoky Mountains, to deliver a paper on ARPA’s networking plans. Almost a year into his stint at the Information Processing Techniques Office (IPTO), many areas of the network design were still hazy, among them the solution to the routing problem. Other than a vague mention of blocks and block size, the only reference to it in Roberts’ paper is in a brief and rather noncommittal passage at the very end: “It has proven necessary to hold a line which is being used intermittently to obtain the one-tenth to one second response time required for interactive work. This is very wasteful of the line and unless faster dial up times become available, message switching and concentration will be very important to network participants.”2 Evidently, Roberts had still not entirely decided whether to abandon the approach he had used in 1965 with Tom Marill, that is to say, connecting computers over the circuit-switched telephone network via an auto-dialer.

Coincidentally, however, someone else was attending the same symposium with a much better thought-out idea of how to solve the problem of routing in data networks. Roger Scantlebury had crossed the Atlantic, from the British National Physical Laboratory (NPL), to present his own paper. Scantlebury took Roberts aside after hearing his talk, and told him all about something called packet-switching. It was a technique his supervisor at the NPL, Donald Davies, had developed. Davies’ story and achievements are not generally well-known in the U.S., although in the fall of 1967, Davies’ group at the NPL was at least a year ahead of ARPA in its thinking.

Davies, like many early pioneers of electronic computing, had trained as a physicist. He graduated from Imperial College, London in 1943, when he was only 19 years old, and was immediately drafted into the “Tube Alloys” program – Britain’s code name for its nuclear weapons project. There he was responsible for supervising a group of human computers, using mechanical and electric calculators to crank out numerical solutions to problems in nuclear fission.3 After the war, he learned from the mathematician John Womersley about a project he was supervising out at the NPL, to build an electronic computer that would perform the same kinds of calculations at vastly greater speed. The computer, designed by Alan Turing, was called ACE, for “automatic computing engine.” Davies was sold, and got himself hired at NPL as quickly as he could. After contributing to the detailed design and construction of the ACE machine, he remained heavily involved in computing as a research leader at NPL.
He happened in 1965 to be in the United States for a professional meeting in that capacity, and used the occasion to visit several major time-sharing sites to see what all the buzz was about. In the British computing community, time-sharing in the American sense of sharing a computer interactively among multiple users was unknown. Instead, time-sharing meant splitting a computer’s workload across multiple batch-processing programs (to allow, for example, one program to proceed while another was blocked reading from a tape).4 Davies’ travels took him to Project MAC at MIT, RAND Corporation’s JOSS Project in California, and the Dartmouth Time-Sharing System in New Hampshire. On the way home one of his colleagues suggested they hold a seminar on time-sharing to inform the British computing community about the new techniques that they had learned about in the U.S. Davies agreed, and played host to a number of major figures in American computing, among them Fernando Corbató (creator of the Compatible Time-Sharing System at MIT), and Larry Roberts himself.

During the seminar (or perhaps immediately after), Davies was struck with the notion that the time-sharing philosophy could be applied to the links between computers, as well as to the computers themselves. Time-sharing computers gave each user a small time slice of the processor before switching to the next, giving each user the illusion of an interactive computer at their fingertips. Likewise, by slicing up each message into standard-sized pieces, which Davies called “packets,” a single communications channel could be shared by multiple computers or multiple users of a single computer. Moreover, this would address all the aspects of data communication that were poorly served by telephone- or telegraph-style switching. A user engaged interactively at a terminal, sending short commands and receiving short responses, would not have their single-packet messages blocked behind a large file transfer, since that transfer would be broken into many packets. And any corruption in such large messages would only affect a single packet, which could easily be re-transmitted to complete the message.

Davies wrote up his ideas in an unpublished 1966 paper, entitled “Proposal for a Digital Communication Network.” The most advanced telephone networks were then on the verge of computerizing their switching systems, and Davies proposed building packet-switching into that next-generation telephone network, thereby creating a single wide-band communications network that could serve a wide variety of uses, from ordinary telephone calls to remote computer access. By this time Davies had been promoted to Superintendent of NPL, and he formed a data communications group under Scantlebury to flesh out his design and build a working demonstration.

Over the year leading up to the Gatlinburg conference, Scantlebury’s team had thus worked out details of how to build a packet-switching network. The failure of a switching node could be dealt with by adaptive routing with multiple paths to the destination, and the failure of an individual packet by re-transmission. Simulation and analysis indicated an optimal packet size of around 1000 bytes – much smaller, and the loss of bandwidth from the header metadata required on each packet became too costly; much larger, and the response times for interactive users would be impaired too often by large messages.
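The core of the idea can be sketched in a few lines of modern code. This is purely illustrative – the 20-byte header, the field names, and the Python framing are assumptions for the example, not the NPL team’s actual format – but it shows both halves of the trade-off the group analyzed: smaller packets waste more of the line on headers, while larger ones make interactive traffic wait longer behind bulk transfers.

```python
HEADER_BYTES = 20  # hypothetical per-packet overhead: addresses, sequence number, checksum

def packetize(message: bytes, payload_size: int):
    """Split a message into fixed-size payloads, each tagged with a sequence number."""
    return [
        {"seq": i, "payload": message[off:off + payload_size]}
        for i, off in enumerate(range(0, len(message), payload_size))
    ]

def reassemble(packets):
    """The receiver restores the original message, even if packets arrive out of order."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

message = b"x" * 100_000  # a large file transfer sharing the line with interactive users
for payload in (100, 1_000, 10_000):
    pkts = packetize(message, payload)
    overhead = HEADER_BYTES * len(pkts) / (len(message) + HEADER_BYTES * len(pkts))
    print(f"{payload:>6}-byte packets: {len(pkts):>5} packets, "
          f"{overhead:.1%} of line capacity spent on headers")

assert reassemble(packetize(message, 1_000)) == message
```

Only the corrupted packet needs re-transmission when noise flips a bit, and a one-packet console command never waits behind more than a single packet of someone else’s file transfer.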
The paper delivered by Scantlebury contained details such as a packet layout format and an analysis of the effect of packet size on network delay. Meanwhile, Davies’ and Scantlebury’s literature search turned up a series of detailed research papers by an American who had come up with roughly the same idea several years earlier. Paul Baran, an electrical engineer at RAND Corporation, had not been thinking at all about the needs of time-sharing computer users, however. RAND was a Defense Department-sponsored think tank in Santa Monica, California, created in the aftermath of World War II to carry out long-range planning and analysis of strategic problems in advance of direct military needs.[^sdc] Baran’s goal was to ward off nuclear war by building a highly robust military communications net, which could survive even a major nuclear attack. Such a network would make a Soviet preemptive strike less attractive, since it would be very hard to knock out America’s ability to respond by hitting a few key nerve centers. To that end, Baran proposed a system that would break messages into what he called message blocks, which could be independently routed across a highly-redundant mesh of communications nodes, only to be reassembled at their final destination.

[^sdc]: System Development Corporation (SDC), the primary software contractor to the SAGE system and the site of one of the first networking experiments, as discussed in the last segment, had been spun off from RAND.

ARPA had access to Baran’s voluminous RAND reports, but disconnected as they were from the context of interactive computing, their relevance to ARPANET was not obvious. Roberts and Taylor seem never to have taken notice of them. Instead, in one chance encounter, Scantlebury had provided everything to Roberts on a platter: a well-considered switching mechanism, its applicability to the problem of interactive computer networks, the RAND reference material, and even the name “packet.” The NPL’s work also convinced Roberts that higher speeds would be needed than he had contemplated to get good throughput, and so he upgraded his plans to 50 kilobits-per-second lines. For ARPANET, the fundamentals of the routing problem had been solved.5

The Networks That Weren’t

As we have seen, not one but two parties beat ARPA to the punch on figuring out packet-switching, a technique that has proved so effective that it is now the basis of effectively all communications. Why, then, was ARPANET the first significant network to actually make use of it? The answer is fundamentally institutional. ARPA had no official mandate to build a communications network, but it did have a large number of pre-existing research sites with computers, a “loose” culture with relatively little oversight of small departments like the IPTO, and piles and piles of money. Taylor’s initial 1966 request for ARPANET came to $1 million, and Roberts continued to spend that much or more in every year from 1969 onward to build and operate the network.6 Yet for ARPA as a whole this amount of money was pocket change, and so none of his superiors worried too much about what Roberts was doing with it, so long as it could be vaguely justified as related to national defense.

By contrast, Baran at RAND had no means or authority to actually do anything. His work was pure research and analysis, which might be applied by the military services, if they desired to do so. In 1965, RAND did recommend his system to the Air Force, which agreed that Baran’s design was viable.
But the implementation fell within the purview of the Defense Communications Agency, which had no real understanding of digital communications. Baran convinced his superiors at RAND that it would be better to withdraw the proposal than allow a botched implementation to sully the reputation of distributed digital communication.

Davies, as Superintendent of the NPL, had rather more executive authority than Baran, but a more limited budget than ARPA, and no pre-existing social and technical network of research computer sites. He was able to build a prototype local packet-switching “network” (it had only one node, but many terminals) at NPL in the late 1960s, with a modest budget of £120,000 over three years.7 ARPANET spent roughly half that on annual operational and maintenance costs alone at each of its many network sites, excluding the initial investment in hardware and software.8 The organization that would have had the power to build a large-scale British packet-switching network was the Post Office, which operated the country’s telecommunications networks in addition to its traditional postal system. Davies managed to interest a few influential Post Office officials in his ideas for a unified, national digital network, but to change the momentum of such a large system was beyond his power. Licklider, through a combination of luck and planning, had found the perfect hothouse for his intergalactic network to blossom in.

That is not to say that everything except for the packet-switching concept was a mere matter of money. Execution matters, too. Moreover, several other important design decisions defined the character of ARPANET. The next we will consider is how responsibilities would be divided between the host computers sending and receiving a message, versus the network over which they sent it.

Further Reading

Janet Abbate, Inventing the Internet (1999)

Katie Hafner and Matthew Lyon, Where Wizards Stay Up Late (1996)

Leonard Kleinrock, “An Early History of the Internet,” IEEE Communications Magazine (August 2010)

Arthur Norberg and Julie O’Neill, Transforming Computer Technology: Information Processing for the Pentagon, 1962-1986 (1996)

M. Mitchell Waldrop, The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal (2001)

Steam Revolution: The Turbine

Incandescent electric light did not immediately snuff out all of its rivals: the gas industry fought back with its own incandescent mantle (which used the heat of the gas to induce a glow in another material) and the arc lighting manufacturers with a glass-enclosed arc bulb.[1] Nonetheless, incandescent lighting grew at an astonishing pace: the U.S. alone had an estimated 250,000 such lights in use by 1885, three million by 1890, and 18 million by the turn of the century.[2] Edison’s electric light company expanded rapidly across the U.S. and into Europe, and its success encouraged the creation of many competitors. An organizational division gradually emerged between manufacturing companies that built equipment and supply companies that used it to generate and deliver power to customers. A few large competitors came to dominate the former industry: Westinghouse Electric and General Electric (formed from the merger of Edison’s company with Thomson-Houston) in the U.S., and the Allgemeine Elektricitäts-Gesellschaft (AEG) and Siemens in Germany. In a sign of its gradual relative decline, Britain produced only a few smaller firms, such as Charles Parsons’ C. A. Parsons and Company—of whom more later.

In accordance with Edison’s early imaginings, manufacturers and suppliers expanded beyond lighting to general-purpose electrical power, especially electric motors and electric traction (trains, subways, and street cars). These new fields opened up new markets for users: electric motors, for example, enabled small-scale manufacturers who lacked the capital for a steam engine or water wheel to consider mechanization, while releasing large-scale factories from the design constraints of mechanical power transmission. They also provided electrical supply companies with a daytime user base to balance the nighttime lighting load.

The demands of this growing electric power industry pushed steam engine design to its limits. Dynamos typically rotated hundreds of times a minute, several times the speed of a typical steam engine drive shaft. Engineers overcame this with belt systems, but these gave up energy to friction. Faster engines that could drive a dynamo directly required new high-speed valve control machinery, new cooling and lubrication systems to withstand the additional friction, and higher steam pressures more typical of marine engines than factories. That, in turn, required new boiler designs like the Babcock and Wilcox, which could operate safely at pressures well over 100 psi.[3]

A high-speed steam engine (made by the British firm Willans) directly driving a dynamo (the silver cylinder at left). From W. Norris and Ben. H. Morgan, High Speed Steam Engines, 2nd edition (London: P.S. King & Son, 1902), 13.

But the requirement that ultimately did in the steam engine was not for speed, but for size. As the electric supply companies evolved into large-scale utilities, providing power and light to whole urban centers and then beyond, they demanded more and more output from their power houses. Even Edison’s Pearl Street station, a tiny installation when looking back from the perspective of the turn of the century, required multiple engines to supply it. By 1903, the Westminster Electric Supply Corporation, which supplied only a part of London’s power, required forty-nine Willans engines in three stations to provide about 9 megawatts of power (an average of about 250 horsepower an engine). But demand continued to grow, and engines grew in response.
Perhaps the largest steam engines ever built were the 12,000 horsepower giants designed by Edwin Reynolds and installed in 1901 for the Manhattan Elevated Railway Company and in 1904 for the Interborough Rapid Transit (IRT) subway company. Each of these engines actually consisted of two compound engines grafted together, each with its own high- and low-pressure cylinder, set at right angles to give eight separate impulses per rotation to the spinning alternator (an alternating current dynamo). The combined unit, engine and alternator, weighed 720 tons. But the elevated railway required eight of these monsters, and the IRT expected to need eleven to meet its power needs. The IRT’s power house, with a Renaissance Revival façade designed by famed architect Stanford White, filled a city block near the Hudson River (where it still stands today).[4]

The inside of the IRT power house, with five engines installed. Each engine consists of two towers, with a disc-shaped dynamo between them. From Scientific American, October 29th, 1904.

How much farther the reciprocating steam engine might have been coaxed to grow is hard to say with certainty, because even as the IRT powerhouse was going up in Manhattan, it was being overtaken by a new power technology based on whirling rotors instead of cycling pistons: the steam turbine. This great advancement in steam power borrowed from developments that had been brewing for decades in its most long-standing rival, water power.

Niagara

The signature electrical project of the turn of the twentieth century was the Niagara Falls Power Company. The immense scale of its works, its ambitions to distribute power over dozens of miles, its variety of prospective customers, and its adoption of alternating current: all signaled that the era of local, Pearl Street-style direct-current electric light plants was drawing to a close. The tremendous power latent in Niagara’s roaring cataract as it dropped from the level of Lake Erie to that of Lake Ontario was obvious to any observer—engineers estimated its potential horsepower in the millions—the problem was how to capture it, and where to direct it. By the late nineteenth century, several mills had moved to draw off some of its power locally. But Niagara had power enough for thousands of factories, and it could hardly serve them if each had to dig its own canals, tunnels, and wheel pits to draw off the small fraction of the waterfall that it required. New York State law, moreover, forbade development in the immediate vicinity of the falls to protect its scenic beauty. The solution ultimately decided on was to supply power to users from a small number of large-scale power plants, and the largest nearby pool of potential users lay in Buffalo, about twenty miles away.[5]

The Niagara project originated in the 1886 designs of New York State engineer Thomas Evershed for a canal and tunnel lined with hundreds of wheel pits to supply power to an equal number of local factories. But the plan took a different direction in 1889 after securing the backing of a group of New York financiers, headed once again by J.P. Morgan. The Morgan group consulted a wide variety of experts in North America and Europe before settling on an electric power system as the best alternative, despite the unproven nature of long-distance electric power transmission.
This proved a good bet: by 1893, Westinghouse had proved in California that it could deliver high-voltage alternating current over dozens of miles, convincing the Niagara company to adopt the same model.[6] Cover of the July 22, 1899 issue of Scientific American with multiple views of the first Niagara Falls Power Company power house and its five-thousand-horsepower turbine-driven generators. By 1904, the company had completed canals, vertical shafts for the fall of water, two powerhouses with a total capacity of 110,000 horsepower, and a mile-long discharge tunnel. They supplied power to local industrial plants, the city of Buffalo, and a wide swath of New York State and Ontario.[7] The most important feature of the power plant for our story, however, were the Westinghouse generators driven by water turbines, each with a capacity of 5,000 horsepower each. As Terry Reynolds, a historian of the waterwheel, put it, this was “more than ten times [the capacity] of the most powerful vertical wheel ever built.”[8] Water turbines had made possible the exploitation of water power on a previously inconceivable scale; appropriately so, for they originated from a hunger on the European continent for a power that could match British steam. Water Turbines The exact point at which a water wheel becomes a turbine is somewhat arbitrary; a turbine is simply a kind of water wheel that has reached a degree of efficiency and power that earlier designs could not approach. But the distinction most often drawn is in terms of relative motion: the water in a traditional wheel pushes the vane along with the same speed and direction as its own flow (like a person pushing a box along the floor). A turbine, on the other hand, creates “motion of the water relative to the buckets or floats of the wheel” in order to extract additional energy: that is to say, it uses the kinetic energy of the water as well as its weight or pressure. That can occur through either impulse (pressing water against the turning vanes), or reaction (shooting water out from them to cause them to turn) but very often includes a combination of both.[9] The exact origins of the horizontal water wheel are unknown, but they had been used in Europe since at least the late Middle Ages. They offered by far the simplest way to drive a millstone, since it could be attached directly to the wheel without any gearing, and remained in wide use in poorer regions of the continent well into the modern period. For centuries, the manufacturers and engineers of Western Europe focused their attention on the more powerful and efficient vertical water wheel, and this type constitutes most of our written record of water technology. Going back to the Renaissance, however, descriptions and drawings can be found of horizontal wheels with curved vanes intended to capture more of the flow of water, and it was the application of rigorous engineering to this general idea that led to the modern turbine. The turbine was in this sense the revenge of the horizontal water wheel, transforming the most low-tech type of water wheel into the most sophisticated. All of the early development of the water turbine occurred in France, which could draw on a deep well of hydraulic theory but could not so easily access coal and iron to make steam as could their British neighbors. 
Bernard Forest de Belidor, an eighteenth-century French engineer, recorded in his 1737 treatise on hydraulic engineering the existence of some especially ingenious horizontal wheels, used to grind flour at Bascale on the Garonne. They had curved blades fitted inside a surrounding barrel and angled like the blades of a windmill, such that “the water that pushes it works it with the force of its weight composed with the circular motion given to it by the barrel…”[10] Nothing much came of this observation for another century, but Belidor had identified what we could call a proto-turbine, where water not only pushed on the vanes but also glided down through them like the breeze on the arms of a windmill, capturing more of its energy. The horizontal mill wheels observed on the Garonne by Belidor. From Belidor, Architecture hydraulique vol. 1, part 2, Plan 5. In the meantime, theorists came to an important insight. Jean-Charles de Borda, another French engineer (there will be a lot of them in this part of the story), was only a small child in a spa town just north of the Pyrenees when Belidor was writing about water wheels. He studied mathematics and wrote mathematical treatises, became an engineer for the Army and then the Navy, undertook several scientific voyages, fought in the American Revolutionary War, and headed the commission that established the standard length of the meter. In the midst of all this he found some time in 1767 to write up a study on hydraulics for the French Academy of Sciences, in which he articulated the principle that, to extract the most power from a water wheel, the water should enter the machine without shock and leave it without velocity. Lazare Carnot, father of Sadi, restated this principle some fifteen years later, in a treatise that reached a wider audience than de Borda’s paper.[11] Though it is obviously impossible for the water to literally leave the wheel without velocity (for after all without velocity it would never leave), it was through striving for this imaginary ideal that engineers developed the modern, highly efficient water turbine. First came Jean-Victor Poncelet (from now on, if I mention someone, just assume they are French), another military engineer who had accompanied Napoleon’s Grande Armée into Russia in 1812, where he ended up a prisoner of war for two years. After returning home to Metz he became the professor of mechanics at the local military engineering academy. While there he turned his mind to vertical water wheels, and a long-standing tradeoff in their design: undershot wheels, in which the water passed under the wheel, were cheaper to construct but not very efficient, while overshot wheels, where the water came to the top of the wheel and fell on its vanes or buckets, had the opposite attributes. Poncelet combined the virtues of both by applying the principle of de Borda and Carnot. The traditional undershot waterwheel had a maximum theoretical efficiency of 50%, because the ideal wheel turned at half the speed of the water current, allowing the water to leave the vanes of the wheel behind with half of its initial velocity. The appearance of cheap sheet iron had made it possible to substitute metal vanes for wooden, and iron vanes could easily be bent in a curve. 
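That 50 percent ceiling follows from a short momentum argument, and it is easy to check numerically. The sketch below is the standard textbook idealization of a flat-vaned undershot wheel (my own illustration, not a period calculation): water of speed v pushes a vane retreating at speed u; the slower the vane, the harder the push, but the less work done per second.

```python
# Idealized flat-vane undershot wheel (textbook model, not a period calculation).
# Water arrives at speed v; the vane retreats at speed u. The water is assumed to
# leave with the vane's own speed, so the wheel absorbs momentum at a rate ~ (v - u)
# and delivers power ~ (v - u) * u. Efficiency = power out / kinetic-energy flux in.

v = 1.0  # stream speed (arbitrary units)
best = max(
    (((v - u) * u) / (0.5 * v**2), u)            # (efficiency, vane speed)
    for u in (i / 1000 * v for i in range(1001))
)
print(f"peak efficiency {best[0]:.2f} at vane speed {best[1] / v:.2f} of stream speed")
# -> peak efficiency 0.50 at vane speed 0.50 of stream speed
```

In this idealized model the missing half is split evenly between the kinetic energy the departing water still carries and the energy dissipated in the shock of its impact on the flat vane.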
By curving the vanes of the wheel just so towards the incoming water, Poncelet found that it would run up the cupped vane, expending all of its velocity, and then fall out of the bottom of the wheel.[12] He published his idea in 1825 to immediate acclaim: “no other paper on water-wheels… had proved so interesting and commanded such attention.”[13] The Poncelet water wheel. Poncelet’s advance hinted at the possibility of a new water-powered industrial future for France. His wheel design soon became a common sight in a France eager to develop its industrial might, and richer in falling water than in reserves of coal. It inspired the Société d’Encouragement pour l’Industrie Nationale, an organization founded in 1801 to push France to be more industrially competitive with Britain, to offer a prize of 6,000 francs to anyone who “would apply on a large scale, in a satisfactory manner, in factories and manufacturing works, the water turbines or wheels with curved blades of Belidor.” The revenge of the horizontal wheel was at hand.[14] Benoît Fourneyron, an engineer at a water-powered ironworks in the hilly country near the Swiss border, claimed the prize in 1833. Even before the announcement of the prize, he had, in fact, already undertaken a deep study of hydraulic theory, reading up on Borda and his successors. He had devised and tested an improved “Belidor-style” wheel, applying the curved metal vanes of Poncelet to a horizontal wheel situated in a barrel-shaped pit, which we can fairly call the first modern water turbine. He went on to install over a hundred of these turbines around Europe, but his signal achievement was the 1837 spinning mill amid the hills of the Black Forest in Baden, which took in a head of water falling over 350 feet and generated sixty horsepower at 80% efficiency. The spinning rotor of the turbine responsible for this power was a mere foot across and weighed only forty pounds. A traditional wheel could neither take on such a head of water nor derive so much power, so efficiently, from such a compact machine.[15] The Fourneyron turbine. The inflowing water, from the reservoir A drives the rotor before emptying from its radial exterior into the basin D. From Eugène Armengaud, Traité théorique et pratique des moteurs hydrauliques et a vapeur, nouvelle edition (Paris: Armengaud, 1858), 279. Steam Turbines The water turbine was thus a far smaller and more efficient machine than its ancestor, the traditional water wheel. Its basic form had existed since at least the time of Belidor, but to achieve an efficient, high-speed design like Fourneyron’s required a body of engineers deeply educated in mathematical physics and a surrounding material culture capable of realizing those mathematical ideas in precisely machined metal. It also required a social context in which there existed demand for more power than traditional sources could ever provide: in this case, a France racing to catch up with rapidly industrializing Britain. The same relation held between the steam turbine and the reciprocating steam engine: the former could be much more compact and efficient, but put much higher demands on the precision of its design and construction. It was no great leap to imagine that steam could drive a turbine in the same way that water did: through the reaction against or impulse from moving steam. 
One could even look to some centuries-old antecedents for inspiration: the steam-jet reaction propulsion of Heron’s of Alexandria’s whirling “engine” (mentioned much earlier in this history), or a woodcut in Giovanni Branca’s seventeenth-century Le Machine, which showed the impulse of a steam jet driving a horizontal paddlewheel.   But it is one thing to make a demonstration or draw a picture, and another to make a useful power source. A steam turbine presented a far harder problem than a water turbine, because steam was so much less dense than liquid water. Simply transplanting steam into a water turbine design would be like blowing on a pinwheel: it would spin, but generate little power.[16] The difficulty was clear even in the eighteenth century: when confronted in 1784 with reports of a potential rival steam engine driven by the reaction created by a jet of steam, James Watt calculated that, given the low relative density of steam, the jet would have to shoot from the ends of the rotor at 1,300 feet per second, and thus “without god makes it possible for things to move 1000 feet [per second] it can not do much harm.” As historian of steam Henry Dickinson epitomized Watt’s argument, “[t]he analysis of the problem is masterly and the conclusion irrefutable.”[17] Even when future generations of metal working made the speeds required appear more feasible, one could get nowhere with traditional “cut and try” techniques with ordinary physical tools; the problem demanded careful analysis with the precision tools offered by mathematics and physics.[18] Dozens of inventors took a crack at the problem, nonetheless, including another famed steam engine designer, Richard Trevithick. None found success. Though Fourneyron had built an effective water turbine in the 1830s, the first practical steam turbines did not appear until the 1880s: a time when metallurgy and machine tools had achieved new heights (with mass-produced steels of various grades and qualities available) and a time when even the steam engine was beginning to struggle to sate modern society’s demand for power. It first appeared in two places more or less at once: Sweden and Britain. Gustaf de Laval burst from his middle-class background in the Swedish provinces into the engineering school at Uppsala with few friends but many grandiose dreams: he was the protagonist in his own heroic tale of Swedish national greatness, the engineering genius who would propel Sweden into the first rank of great nations. He lived simultaneously in grand style and constant penury, borrowing from his visions for an ever more prosperous tomorrow to live beyond his means of today. In the 1870s, while working a day job at a glassworks, he developed two inventions based on centrifugal force generated by a rapidly spinning wheel. The first, a bottle-making machine, flopped, but the second, a cream separator, became the basis for a successful business that let him leave his day job behind.[19] Then, in 1882 he patented a turbine powered by a jet of steam directed at a spinning wheel. De Laval claimed that his inspiration came from seeing a nozzle used for sandblasting at the glassworks come loose and whip around, unleashing its powerful jet into the air; it is also not hard to see some continuity in his interest in high-speed rotation. 
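Watt’s objection, quoted above, is easy to appreciate with modern numbers. The back-of-envelope below is emphatically not Watt’s own arithmetic, only an illustration of the constraint he identified: the kinetic energy a jet carries scales with its density, and steam near atmospheric pressure is roughly 1,600 times less dense than liquid water, so a steam jet must move enormously faster to deliver comparable power.

```python
# Back-of-envelope: kinetic power carried by a jet = 1/2 * density * area * velocity^3.
# Illustrative modern figures only, not a reconstruction of Watt's calculation.

def jet_power_watts(density_kg_m3, area_m2, velocity_m_s):
    return 0.5 * density_kg_m3 * area_m2 * velocity_m_s**3

nozzle_area = 1e-4      # one square centimetre
steam_density = 0.6     # kg/m^3, saturated steam near atmospheric pressure
water_density = 1000.0  # kg/m^3
v = 400.0               # m/s, roughly the 1,300 feet per second Watt quoted
hp = 745.7              # watts per horsepower

print(f"steam jet at {v:.0f} m/s: {jet_power_watts(steam_density, nozzle_area, v) / hp:.1f} hp")
print(f"water jet at {v:.0f} m/s: {jet_power_watts(water_density, nozzle_area, v) / hp:.0f} hp")
# Even at Watt's "impossible" speed, a square-centimetre steam jet carries only a few
# horsepower of kinetic energy; the same jet of water would carry thousands.
```

That disparity is why the practical steam turbines described below had to run at such ferocious speeds to extract useful work from so thin a fluid.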
De Laval used his whirling turbines to power his whirling cream separators, and then acquired an electric light company, giving himself another internal customer for turbine power.[20] Though superficially similar to de Branca’s old illustration, de Laval’s machine was far more sophisticated. As Watt had calculated a century earlier, the low density of steam demanded high rotational speeds (otherwise the steam would escape from the machine having given up very little energy to the wheel) and thus a very high-velocity jet: de Laval’s steel rotor spun at tens of thousands of rotations per minute in an enclosed housing. A few years later he invented an hourglass-shaped nozzle to propel the steam jet to supersonic speeds, a shape that is still used in rocket engines for the same purpose today. Despite the more advanced metallurgy of the late-nineteenth century, however, de Laval still ran up against its limits: he could not run his turbine at the most efficient possible speed without burning out his bearings and reduction gear, and so his turbines didn’t fully capture their potential efficiency advantage over a reciprocating engine.[21] Cutaway view of a de Laval turbine, from William Ripper, Heat Engines (London: Longmans, Green, 1909), 234. Meanwhile, the British engineer Charles Parsons came up with a rather different approach to extracting energy from the steam, which didn’t require such rapid rotation. Whereas De Laval strove up from the middle class, Parsons came from the highest gentry. Son of the third Earl of Rosse, he grew up in a castle in Ireland, with grounds that included a lake and a sixty-foot-long telescope constructed to his father’s specifications. He studied at home under, Robert Ball, who later became the Astronomer Royal of Ireland, then went on to graduate from Cambridge University in 1877 as eleventh wrangler—the eleventh best in his class on the mathematics exams.[22] Despite his noble birth, Parsons appeared determined to find his own way in the world. He apprenticed himself at Elswick Works, a manufacturer of heavy construction and mining equipment and military ordnance in Newcastle on Tyne. He spent a couple years with a partner in Leeds trying to develop rocket-powered torpedoes before taking up as a junior partner at another heavy engineering concern, Clarke Chapman in Gateshead (back on the River Tyne).[23] His new bosses directed Parsons away from torpedoes toward the rapidly growing field of electric lighting. He turned to the turbine concept in search of a high-speed rotor that could match the high rotational speeds of a dynamo. Parsons came up with a different solution for the density problem than Laval’s. Rather than try to extract as much power as possible from the steam jet with one extremely fast rotor, he would send the steam through a series of rotors arranged horizontally. They would then not have to spin so quickly (though Parson’s first prototype still ran at 18,000 rotations per minute), and each could extract a bit of energy from the steam as it flowed through the turbine, dropping in pressure. This design extended the two-or three- stages of pressure reduction in a multi-cylinder steam engine into a continuous flow across a dozen or more rotors. Parsons’ approach created some new challenges (keeping the long, rapidly spinning shaft from bowing too far in one direction or the other, for example) but ultimately most future steam turbines would copy this elongated form.[24] Parson’s original prototype turbine and dynamo, with the top removed. 
Steam entered at the center and exited from both ends, which eliminated the need to deal with “end thrust,” a force pushing on one end of the turbine. From Dickinson, A Short History of the Steam Engine, plate vii.

The Rise of Turbines

Parsons soon founded his own firm to exploit the turbine. Because a turbine has far less inherent friction than the piston of a traditional engine, and because none of its parts has to touch both hot and cold steam, it had the potential to be much more efficient than a reciprocating engine, but early turbines did not start out that way. So his early customers were those who cared mainly about the smaller size of turbines: shipbuilders looking to put in electric lighting without adding too much weight or using too much space in the hull. In other applications reciprocating engines still won out.[25] Further refinements, however, allowed turbines to start to supplant reciprocating engines in electrical systems more generally: more efficient blade designs, the addition of a regulator to ensure that steam entered the turbine only at full pressure, the superheating of steam at one end and the condensing of it at the other to maximize the fall in temperature across the entire engine. Turbo-generators—electrical dynamos driven by turbines—began to find buyers in the 1890s. By 1896, Parsons could boast that a two-hundred-horsepower turbine his firm constructed for a Scottish electric power station ran at 98% of its ideal efficiency, and Westinghouse had begun to develop turbines under license in the United States.[26]

Cutaway view of a fully developed Parsons-style turbine. Steam enters at left (A) and passes through the rotors to the right. From Ripper, Heat Engines, 241.

At the same time, Parsons was pushing for the construction of ships with turbine powerplants, starting with the prototype Turbinia, which drove nine propellers with three turbines and achieved a top speed of nearly forty miles per hour. Suitably impressed, the British Admiralty ordered turbine-powered destroyers (starting with Viper in 1897), but the real turning point came in 1906 with the completion of the first turbine-driven battleship (Dreadnought) and transatlantic steamers (Lusitania and Mauretania), all supplied with Parsons powerplants.[27]

HMS Dreadnought was remarkable not only for her armament and armor, but also for her speed of 21 knots (24 miles per hour), made possible by Parsons turbines.

The very first steam turbines had demonstrated their advantage over traditional engines in size; a further decade-and-a-half of development allowed them to realize their potential advantages in efficiency; and now these massive vessels made clear their third advantage: the ability to scale to enormous power outputs. As we saw, the monster steam engines at the subway power house in New York could generate 12,000 horsepower, but the turbines aboard Lusitania churned out half again as much, and that was far from the limit of what was possible. In 1915, the Interborough Rapid Transit Company, facing ever-growing demand for power with the addition of a third (express) track to its elevated lines, installed three 40,000 horsepower turbines for electrical generation, rendering Reynolds’ monster engines of a decade earlier obsolete. By the 1920s, 40,000 horsepower turbines were being built in the U.S. that burned half as much coal per unit of electricity generated as the most efficient reciprocating engines.[28] Parsons lived to see the triumph of his creation.
He spent his last years cruising the world, and preferred to spend the time between stops talking shop with the crew and engineers rather than lounging with other wealthy passengers. He died in 1931, at age 76, in the Caribbean while aboard ship on the (turbine-powered of course) Duchess of Richmond.[29] Meanwhile, power usage shifted towards electricity, made widely available by the growth of steam and water turbines and the development of long-distance power transmission, not by traditional steam engines. Niagara was just a foretaste of the large-scale water power projects made feasible by the newly found capacity to transmit that power wherever it was needed: the Hoover Dam and Tennessee Valley Authority in the U.S., the Rhine power dams in Europe, and later projects intended to spur the modernization of poorer countries, from the Aswan Dam on the Nile and the Gezhouba Dam on the Yangtze. In regions with easy access to coal, however, steam turbines provided the majority of all electric power until far in the twentieth century. Cheap electricity transformed industry after industry. By 1920, manufacturing consumed half of the electricity produced in the U.S., mainly through dedicated electric motors at each tool, eliminating the need for the construction and maintenance of a large, heavy steam engine and for bulky and friction-heavy shafts and belts to transmit power through the factory. The capital barriers to starting a new manufacturing plant thus dropped substantially along with the recurring cost of paying for power, and the way was opened to completely rethink how manufacturing plants were built and operated. Factories became cleaner, safer, and more pleasant to work in, and the ability to organize machines according to the most efficient work process rather than the mechanical constraints of power delivery produced huge dividends in productivity.[30] A typical pre-electricity factory power distribution system, based on line shafts and belts (in this case driving power looms). All the machines in the factory have to be organized around the driveshafts. [Z22, CC BY-SA 3.0] The 1910 Ford Highland Park plant represents a hybrid stage on the way to full electrification of every machine; the plant still had overhead line shafts (here for milling engine blocks), but each area was driven by a local electric motor, allowing for a much more flexible arrangement of machinery. By that time, the heyday of the piston-driven steam engine was over. For large-scale installations, it could no longer compete with turbines (whether powered by liquid water or steam). At the same time, feisty new competitors, diesel and gasoline engines, were gnawing away at its share of the lower horsepower market. The warning shot fired by the air engine had finally caught up to steam. It could not outrun thermodynamics, and the incredibly energy-dense new fuel source that had come bubbling up out of the ground: rock oil, or petroleum.

Internet Ascendant, Part 1: Exponential Growth

In 1990, John Quarterman, a networking consultant and UNIX expert, published a comprehensive survey of the state of computer networks. In a brief section on the potential future for computing, he predicted the appearance of a single global network for “electronic mail, conferencing, file transfer, and remote login, just as there is now one worldwide telephone network and one worldwide postal system.” But he did not assign any special significance to the Internet in this process. Instead, he assumed that the worldwide net would “almost certainly be run by government PTTs”, except in the United States, “where it will be run by the regional Bell Operating Companies and the long-distance carriers.” It will be the purpose of this post to explain how, in a sudden eruption of exponential growth, the Internet so rudely upset these perfectly natural assumptions.

Passing the Torch

The first crucial event in the creation of the modern Internet came in the early 1980s, when the Defense Communications Agency (DCA) decided to split ARPANET in two. The DCA had taken control of the network in 1975. By that time, it was clear that it made little sense for the ARPA Information Processing Techniques Office (IPTO), a blue-sky research organization, to be involved in running a network that was being used for participants’ daily communications, not for research about communication. ARPA tried and failed to hand off the network to private control by AT&T. The DCA, responsible for the military’s communication systems, seemed the next best choice.

For the first several years of this new arrangement, ARPANET prospered under a regime of benign neglect. By the early 1980s, however, the Department of Defense’s aging data communications infrastructure desperately needed an upgrade. The intended replacement, AUTODIN II, which DCA had contracted with Western Union to construct, was foundering. So DCA’s leaders appointed Colonel Heidi Heiden to come up with an alternative. He proposed to use the packet-switching technology that DCA already had in hand, in the form of ARPANET, as the basis for the new defense data network.

But there was an obvious problem with sending military data over ARPANET – it was rife with long-haired academics, including some who were actively hostile to any kind of computer security or secrecy, such as Richard Stallman and his fellow hackers at the MIT Artificial Intelligence Lab. Heiden’s solution was to bifurcate the network. He would leave the academic researchers funded by ARPA on ARPANET, while splitting the computers used at national defense sites off onto a newly formed network called MILNET. This act of mitosis had two important consequences. First, by decoupling the militarized and non-militarized parts of the network, it was the initial step toward transferring the Internet to civilian, and eventually private, control. Second, it provided the proving ground for the seminal technology of the Internet, the TCP/IP protocol, which had first been conceived half a decade before. DCA required all the ARPANET nodes to switch over to TCP/IP from the legacy protocol by the start of 1983. Few networks used TCP/IP at that point, but it would now link the two networks of the proto-Internet, allowing message traffic to flow between research sites and defense sites when necessary. To further ensure the long-term viability of TCP/IP for military data networks, Heiden also established a $20 million fund to pay computer manufacturers to write TCP/IP software for their systems (1).
This first step in the gradual transfer of the Internet from the military to private control provides as good an opportunity as any to bid farewell to ARPA and the IPTO. Its funding and influence, under the leadership of J.C.R. Licklider, Ivan Sutherland, and Robert Taylor, had produced, directly or indirectly, almost all of the early developments in interactive computing and networking. The establishment of the TCP/IP standard in the mid-1970s, however, proved to be the last time it played a central role in the history of computing (2).

The Vietnam War provided the decisive catalyst for this loss of influence. Most research scientists had embraced the Cold War defense-sponsored research regime as part of a righteous cause to defend democracy. But many who came of age in the 1950s and 1960s lost faith in the military and its aims due to the quagmire in Vietnam. That included Taylor himself, who quit IPTO in 1969, taking his ideas and his connections to Xerox PARC. Likewise, the Democrat-controlled Congress, concerned about the corrupting influence of military money on basic scientific research, passed amendments requiring defense money to be directed to military applications. ARPA reflected this change in funding culture in 1972 by renaming itself DARPA, the Defense Advanced Research Projects Agency.

And so the torch passed to the civilian National Science Foundation (NSF). By 1980, the roughly $20 million that the NSF disbursed for computer science research accounted for about half of federal spending in that field in the U.S. (3). Much of that funding would soon be directed toward a new national computing network, the NSFNET.

NSFNET

In the early 1980s, Larry Smarr, a physicist at the University of Illinois, visited the Max Planck Institute in Munich, which hosted a Cray supercomputer that it made readily available to European researchers. Frustrated at the lack of equivalent resources for scientists in the U.S., he proposed that the NSF fund a series of supercomputing centers across the country (4). The organization responded to Smarr and other researchers with similar complaints by creating the Office of Advanced Scientific Computing in 1984, which went on to fund five such centers with a combined five-year budget of $42 million. They stretched from Cornell in the northeast of the country to San Diego in the southwest. In between, Smarr’s own university (Illinois) received its own center, the National Center for Supercomputing Applications (NCSA).

But these centers alone would only do so much to improve access to computer power in the U.S. Using the computers would still be difficult for users not local to any of the five sites, likely requiring a semester or summer fellowship to fund a long-term visit. And so NSF decided to also build a computer network. History was repeating itself – making it possible to share powerful computing resources with the research community was exactly what Taylor had in mind when he pushed for the creation of ARPANET back in the late 1960s. The NSF would provide a backbone that would span the continent by linking the core supercomputer sites, then regional nets would connect to those sites to bring access to other universities and academic labs. Here NSF could take advantage of the support for the Internet protocols that Heiden had seeded, by delegating the responsibility for creating those regional networks to local academic communities.
Initially, the NSF delegated the setup and operation of the network to the NCSA at the University of Illinois, the source of the original proposal for a national supercomputer program. The NCSA, in turn, leased the same type of 56 kilobit-per-second lines that ARPANET had used since 1969, and began operating the network in 1986. But traffic quickly flooded those connections (5). Again mirroring the history of ARPANET, it quickly became obvious that the primary function of the net would be communications among those with network access, not the sharing of computer hardware among scientists. One can certainly excuse the founders of ARPANET for not knowing that this would happen, but how could the same pattern repeat itself almost two decades later? One possibility is that it’s much easier to justify a seven-figure grant to support the use of eight figures’ worth of computing power than to justify dedicating the same sums to the apparently frivolous purpose of letting people send email to one another. This is not to say that there was willful deception on the part of the NSF, but that just as the anthropic principle posits that the physical constants of the universe are what they are because otherwise we couldn’t exist to observe them, so no publicly-funded computer network could have existed for me to write about without a somewhat spurious justification.

Now convinced that the network itself was at least as valuable as the supercomputers that had justified its existence, NSF called on outside help to upgrade the backbone with 1.5 megabit-per-second T1 lines (6). Merit Network, Inc., won the contract, in conjunction with MCI and IBM, securing $58 million in NSF funding over an initial five-year grant to build and operate the network. MCI provided the communications infrastructure, IBM the computing hardware and software for the routers. Merit, a non-profit that ran a computer network linking the University of Michigan campuses (7), brought experience operating an academic computer network, and gave the whole partnership a collegiate veneer that made it more palatable to NSF and the academics who used NSFNET. Nonetheless, the transfer of operations from NCSA to Merit was a clear first step towards privatization.

Traffic flowed through Merit’s backbone from almost a dozen regional networks, from the New York State Education and Research Network (NYSERNet), interconnected at Cornell in Ithaca, to the California Education and Research Federation Network (CERFNet – no relation to Vint Cerf), which interconnected at San Diego. Each of these regional networks also internetted with countless local campus networks, as Unix machines appeared by the hundreds in college labs and faculty offices. This federated network of networks became the seed crystal of the modern Internet. ARPANET had connected only well-funded computer researchers at elite academic sites, but by 1990 almost anyone in post-secondary education in the U.S. – faculty or student – could get online. There, via packets bouncing from node to node – across their local Ethernet, up into the regional net, then leaping vast distances at light speed via the NSFNET backbone – they could exchange email or pontificate on Usenet with their counterparts across the country. With far more academic sites now reachable via NSFNET than ARPANET, the DCA decommissioned that now-outmoded network in 1990, fully removing the Department of Defense from involvement in civilian networking.
Takeoff

Throughout this entire period, the number of computers on NSFNET and its affiliated networks – which we may now call the Internet (8) – was roughly doubling each year: 28,000 in December 1987, 56,000 in October 1988, 159,000 in October 1989, and so on. It would continue to do so well into the mid-1990s, at which point the rate slowed only slightly (9). The number of networks on the Internet grew at a similar rate – from 170 in July of 1988 to 3500 in the fall of 1991. The academic community being an international one, many of those networks were overseas, starting with connections to France and Canada in 1988. By 1995, the Internet was accessible from nearly 100 countries, from Algeria to Vietnam (10). Though it’s much easier to count the number of machines and networks than the number of actual users, reasonable estimates put that latter figure at 10-20 million by the end of 1994 (11).

Any historical explanation for this tremendous growth is challenging to defend in the absence of detailed data about who was using the Internet for what, at what time. A handful of anecdotes can hardly suffice to account for the 350,000 computers added to the Internet between January 1991 and January 1992, or the 600,000 in the year after that, or the 1.1 million in the year after that. Yet I will dare to venture onto this epistemically shaky ground, and assert that three overlapping waves of users account for the explosion of the Internet, each with their own reasons for joining, but all drawn by the inexorable logic of Metcalfe’s Law, which indicates that the value (and thus the attractive force) of a network increases with the square of its number of participants.

First came the academic users. The NSF had intentionally spread computing to as many universities as possible. Now every academic wanted to be on board, because that’s where the other academics were. To be unreachable by Internet email, to be unable to see and participate in the latest discussions on Usenet, was to risk missing an important conference announcement, a chance to find a mentor, cutting-edge pre-publication research, and more. Under this pressure to be part of the online academic conversation, universities quickly joined the regional networks that could connect them to the NSFNET backbone. NEARNET, for example, which covered the six states of the New England region, grew to over 200 members by the early 1990s. At the same time, access began to trickle down from faculty and graduate students to the much larger undergraduate population. By 1993, roughly 70% of the freshman class at Harvard had .edu email accounts. By that time the Internet had also become physically ubiquitous at Harvard and its peer institutions, which went to considerable expense to wire Ethernet into not just every academic building, but even the undergrad dormitories (12). It was surely not long before the first student stumbled into their room after a night of excess, slumped into a chair, and laboriously pecked out an electronic message that they would regret in the morning, whether a confession of love or a vindictive harangue.

In the next wave, the business users arrived, starting around 1990. As of that year, 1,151 .com domains had been registered. The earliest commercial participants came from the research departments of high-tech companies (Bell Labs, Xerox, IBM, and so on). They, in effect, used the network in an academic capacity. Their employers’ business communications went over other networks.
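Both the annual doubling and Metcalfe’s square-law argument are easy to make concrete with the host counts quoted above. The sketch below uses those figures; the count of possible pairings is purely an illustration of Metcalfe’s reasoning, not a measured quantity.

```python
# Host counts quoted above; the pair count n*(n-1)/2 stands in for Metcalfe's
# claim that a network's value grows as the square of its participants.

hosts = {
    "Dec 1987": 28_000,
    "Oct 1988": 56_000,
    "Oct 1989": 159_000,
}

prev = None
for date, n in hosts.items():
    pairs = n * (n - 1) // 2
    growth = f"{n / prev:.1f}x hosts" if prev else "--"
    print(f"{date}: {n:>8,} hosts, {pairs:>14,} possible pairings ({growth})")
    prev = n
# Hosts merely double each year, but the number of possible connections -- and so,
# on Metcalfe's reasoning, the network's pull on new users -- roughly quadruples.
```

Which is one way of seeing why the commercial world, at first content to treat the net as an academic courtesy, did not stay on the sidelines for long.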
By 1994, however, over 60,000 .com domain names existed, and the business of making money on the Internet had begun in earnest (13). As the 1980s waned, computers were becoming a part of everyday life at work and home in the U.S., and the importance of a digital presence to any substantial business became obvious. Email offered easy and extremely fast communication with co-workers, clients, and vendors. Mailing lists and Usenet provided both new ways of keeping up to date with a professional community and new forms of very cheap advertising to a generally affluent set of users. A wide variety of free databases could be accessed via the Internet – legal, medical, financial, and political. New graduates arriving in the workforce from fully-wired campuses also became proselytizers for the Internet at their employers. It offered access to a much larger set of users than any single commercial service (Metcalfe’s Law again), and once you paid a monthly fee for access to the net, almost everything else was free, unlike the marginal hourly and per-message fees charged by CompuServe and its equivalents. Early entrants to the Internet marketplace included mail-order software companies like The Corner Store of Litchfield, Connecticut, which advertised in Usenet discussion groups, and The Online Bookstore, an electronic book seller founded over a decade before the Kindle by a former editor at Little, Brown (14).

Finally came the third wave of growth, the arrival of ordinary consumers, who began to access the Internet in large numbers in the mid-1990s. By this point Metcalfe’s Law was operating in overdrive. Increasingly, to be online meant to be on the Internet. Unable to afford T1 lines to their homes, consumers almost always accessed the Internet over a dial-up modem. We have already seen part of that story, with the gradual transformation of commercial BBSes into commercial Internet Service Providers (ISPs). This change benefited both the users (whose digital swimming pool suddenly grew into an ocean) and the BBS itself, which could run a much simpler business as an intermediary between the phone system and a T1 on-ramp to the Internet, without maintaining its own services. Larger online services followed a similar pattern. By 1993, all of the major national-scale services in the U.S. – Prodigy, CompuServe, GEnie and upstart America Online (AOL) – offered their 3.5 million combined subscribers the ability to send email to Internet addresses. Only laggard Delphi (with fewer than 100,000 subscribers), however, offered full Internet access (15). Over the next few years, though, the value of access to the Internet – which continued to grow exponentially – rapidly outstripped that of accessing the services’ native forums, games, shopping and other content. The tipping point came in 1996 – by October of that year, 73% of those online reported having used the World Wide Web, compared to just 21% a year earlier (16). The new term “portal” was coined to describe the vestigial residue of content provided by AOL, Prodigy, and others, to which people subscribed mainly to get access to the Internet.

The Secret Sauce

We have seen, then, something of how the Internet grew so explosively, but not quite enough to explain why. Why, in particular, did it become so dominant in the face of so much prior art, so many other services that were striving for growth during the era of fragmentation that preceded it? Government subsidy helped, of course.
The funding of the backbone aside, when NSF chose to invest seriously in networking as an independent concern from its supercomputing program, it went all in. The principal leaders of the NSFNET program, Steve Wolff and Jane Caviness, decided that they were building not just a supercomputer network, but a new information infrastructure for American colleges and universities. To this end, they set up the Connections program, which offset part of the cost for universities to get onto the regional nets, on the condition that they provide widespread access to the network on their campuses. This accelerated the spread of the Internet both directly and indirectly – indirectly, since many of those regional nets spun off for-profit enterprises that used the same subsidized infrastructure to sell Internet access to businesses. But Minitel had subsidies, too.

The most distinctive characteristic of the Internet, however, was its layered, decentralized architecture, and the flexibility that came with it. IP allowed networks of a totally different physical character to share the same addressing system, and TCP ensured that packets were delivered to their destination. And that was all. Keeping the core operations of the network simple allowed virtually any application to be built atop it. Most importantly, any user could contribute new functionality, as long as they could get others to run their software. For example, file transfer (FTP) was among the most common uses of the early Internet, but it was very hard to find servers that offered files of interest for download except by word-of-mouth. So enterprising users built tools to catalog and index the net’s resources: Archie, which indexed the contents of FTP servers, and later Gopher, which organized documents into browsable menus, with Veronica to index Gopher in turn. The OSI stack also had this flexibility, in theory, and the official imprimatur of international organizations and telecommunications giants as the anointed internetworking standard. But possession is nine-tenths of the law, and TCP/IP held the field, with the decisive advantage of running code on thousands, and then millions, of machines.

The devolution of control over the application layer to the edges of the network had another important implication. It meant that large organizations, used to controlling their own bailiwick, could be comfortable there. Businesses could set up their own mail servers and send and receive email without all the content of those emails sitting on someone else’s computer. They could establish their own domain names, and set up their own websites, accessible to everyone on the net, but still entirely within their own control.

The World Wide Web – ah – that was the most striking example, of course, of the effects of layering and decentralized control. For decades, systems from the time-sharing services of the 1960s through to the likes of CompuServe and Minitel had revolved around a handful of core communications services – email, forums, and real-time chat. But the Web was something new under the sun. The early years of the web, when it consisted entirely of bespoke, handcrafted pages, were nothing like its current incarnation. Yet bouncing around from link to link was already strangely addictive – and it provided a phenomenally cheap advertising and customer support medium for businesses. None of the architects of the Internet had planned for the Web. It was the brainchild of Tim Berners-Lee, a British engineer at the European Organization for Nuclear Research (CERN), who created it in 1990 to help disseminate information among the researchers at the lab.
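What that layering meant in practice for anyone writing such an application is easy to sketch. To a program, the whole Internet reduces to a reliable byte stream to a named host and port; whether the bytes cross a campus Ethernet, a regional net, or the NSFNET backbone is somebody else’s problem. A minimal illustration using Python’s standard library (the host name is a placeholder, not a real service):

```python
import socket

# A toy application-layer exchange. The program neither knows nor cares whether its
# bytes cross a campus Ethernet, a regional network, or a transcontinental backbone:
# TCP presents it with a reliable byte stream, IP finds the route, and that is all.
# "gopher.example.edu" is a placeholder, not a real host.

def ask(host: str, port: int, request: bytes) -> bytes:
    with socket.create_connection((host, port), timeout=10) as conn:
        conn.sendall(request + b"\r\n")
        return conn.recv(4096)          # first few kilobytes of the reply

if __name__ == "__main__":
    # e.g. a bare Gopher-style request: send an empty selector line, get the root menu back
    print(ask("gopher.example.edu", 70, b""))
```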
The Web could rest easily atop TCP/IP, re-using the domain-name system, created for other purposes, for its now-ubiquitous URLs. Anyone with access to the Internet could put up a site, and by the mid-1990s it seemed everyone had – city governments, local newspapers, small businesses, and hobbyists of every stripe.

Privatization

In this telling of the story of the Internet’s growth, I have elided some important events, and perhaps left you with some pressing questions. Notably, how did businesses and consumers get access to an Internet centered on NSFNET in the first place – to a network funded by the U.S. government, and ostensibly intended to serve the academic research community? To answer this, the next installment will revisit some important events which I have quietly passed over, events which gradually but inexorably transformed a public, academic Internet into a private, commercial one.

Further Reading

Janet Abbate, Inventing the Internet (1999)
Karen D. Fraser, “NSFNET: A Partnership for High-Speed Networking, Final Report” (1996)
John S. Quarterman, The Matrix (1990)
Peter H. Salus, Casting the Net (1995)

Footnotes

Note: The latest version of the WordPress editor appears to have broken markdown-based footnotes, so these are manually added, without links. My apologies for the inconvenience.

(1) Abbate, Inventing the Internet, 143.
(2) The next time DARPA would initiate a pivotal computing project was with the Grand Challenges for autonomous vehicles of 2004-2005. The most famous project in between, the billion-dollar AI-based Strategic Computing Initiative of the 1980s, produced a few useful applications for the military, but no core advances applicable to the civilian world.
(3) “1980 National Science Foundation Authorization, Hearings Before the Subcommittee on Science, Researce [sic] and Technology of the Committee on Science and Technology,” 1979.
(4) Smarr, “The Supercomputer Famine in American Universities” (1982).
(5) A snapshot of what this first iteration of NSFNET was like can be found in David L. Mills, “The NSFNET Backbone Network” (1987).
(6) The T1 connection standard, established by AT&T in the 1960s, was designed to carry twenty-four telephone calls, each digitally encoded at 64 kilobits-per-second.
(7) MERIT originally stood for Michigan Educational Research Information Triad. The state of Michigan pitched in $5 million of its own to help its homegrown T1 network get off the ground.
(8) Of course, the name and concept of the Internet predates the NSFNET. The Internet Protocol dates to 1974, and there were networks connected by IP prior to NSFNET. ARPANET and MILNET we have already mentioned. But I have not been able to find any reference to “the Internet” – a single, all-encompassing, world-spanning network of networks – prior to the advent of the three-tiered NSFNET.
(9) See this data. Given this trend, how could Quarterman fail to see that the Internet was destined to dominate the world? If the recent epidemic has taught us anything, it is that exponential growth is extremely hard for the human mind to grasp, as it accords with nothing in our ordinary experience.
(10) These figures come from Karen D. Fraser, “NSFNET: A Partnership for High-Speed Networking, Final Report” (1996).
(11) See Salus, Casting the Net, 220-221.
(12) Mai-Linh Ton, “Harvard, Connected: The Houses Got Internet,” The Harvard Crimson, May 22, 2017.
(13) IAPS, “The Internet in 1990: Domain Registration, E-mail and Networks;” RFC 1462, “What is the Internet;” Resnick and Taylor, The Internet Business Guide, 220.
(14) Resnick and Taylor, The Internet Business Guide, xxxi-xxxiv. Pages 300-302 lay out the pros and cons of the Internet and commercial online services for small businesses.
(15) Statistics from Rosalind Resnick, Exploring the World of Online Services (1993).
(16) Pew Research Center, “Online Use,” December 16, 1996.
