A Bicycle for the Mind – Prologue

“When man created the bicycle, he created a tool that amplified an inherent ability. That’s why I like to compare the personal computer to a bicycle. …it’s a tool that can amplify a certain part of our inherent intelligence. There’s a special relationship that develops between one person and one computer that ultimately improves productivity on a personal level.”

                — Steve Jobs[1]

In December of 1974, hundreds of thousands of copies of the magazine Popular Electronics rolled off the presses and out to newsstands and mailboxes across the United States. The front cover announced the arrival of the “Altair 8800,” and the editorial just inside explained that this new computer kit could be acquired at a price of less than $400, putting a real computer in reach of ordinary people for the first time. The editor declared that “the home computer age is here—finally.”[2] Promotional hyperbole, perhaps, but many of the magazine’s readers agreed that the Altair marked the arrival of a moment prophesied, anticipated, and long-awaited. They devoured the issue and sent in their orders by the thousands.

But the Altair was more than just a successful hobby product. That issue of Popular Electronics convinced some readers not only to buy a computer, but to form organizations, whether for-profit or non-profit, that would collectively grow and multiply over the coming years into a massive cultural and commercial phenomenon. Some of those readers achieved lasting fame and fortune: in Cambridge, Massachusetts, the Altair cover issue galvanized a pair of ambitious, computer-obsessed friends into starting a business to write programs for the new machine; they called their new venture Micro-Soft. In Palo Alto, California, it stimulated the formation of a new computer club that drew the attention of a local circuit-building whiz named Steve Wozniak. But the announcement of the Altair planted other seeds that are now mostly forgotten. In Peterborough, New Hampshire, it inspired the creation of a new magazine aimed at computer hobbyists, called BYTE. In Denver, it inspired a computer kit maker called the Digital Group to start building a rival machine that would be even better.

The arrival of the Altair catalyzed a reaction that precipitated no fewer than five distinct but intertwined social structures. Three were purely commercial: a hardware industry to make personal computers, a software industry to create applications for them, and retail outlets to sell both. The other two mixed commercial and altruistic motivations: a network of clubs and periodicals to share news and ideas within the hobby community, and a cultural movement to promote the higher meaning of the personal computer as a force for individual empowerment. All of these developments seemed, to a casual observer, to appear ex nihilo. But the reagents that fed into this sudden explosion had been forming for years, waiting only for the right trigger to bring them together.

The first reagent was a pre-existing electronics hobby culture. In the 1970s, hundreds of thousands of people, mostly men, enjoyed dabbling in circuit-building and kit-bashing with electronic components. In the United States, they were served by two flagship publications, the aforementioned Popular Electronics and Radio-Electronics. These magazines provided do-it-yourself instructions (a 1970 issue of Popular Electronics, for example, showed readers how to build a pair of bookcase stereo speakers, a wah-wah pedal, and an aquarium heater), product reviews, classified ads where readers could offer products or services to the community, and more. Retail stores and mail-order services like Radio Shack and Lafayette Radio Electronics provided the hobbyists with the components and tools they needed for their projects, and a fuzzy penumbra of local clubs and newsletters extended out from these larger institutions. This culture provided the medium for the personal computer’s initial, explosive growth.

But why were the hobbyists so excited by the idea of a “home computer” in the first place? That energy came from the second reagent: a new way of communicating with computers that had created a generation of computer enthusiasts. Anyone involved in data processing in the 1950s and 60s would have experienced computers in the form of batch-processing centers. The user presented a stack of paper cards representing data and instructions to the computer operators, who put the user’s job in a queue for execution. Depending on how busy the system was, the user might have to wait hours to collect their results.

But a new mode of interactive computing, created at defense research labs and elite campuses in the late 1950s and early 1960s, had become widely available in colleges, science and engineering firms, and even some high schools by the mid-1970s. When using a computer interactively, a user sitting at a terminal typed inputs on a keyboard and got an immediate response from the computer, either via a kind of automated typewriter called a teletype or, less commonly, on a visual display. Users got access to this experience in one of two forms. Minicomputers were smaller, less expensive machines than the traditional mainframes, cheap enough that they could be dedicated to a small office or department of a larger organization, and sometimes monopolized by one person at a time. Time-sharing systems provided interactivity by splitting a computer’s processing time among multiple simultaneous users, each seated at their own terminal (sometimes connected to a remote computer via the telephone network). The computer could cycle its attention through each terminal quickly enough to give each user the illusion of having the whole computer at their command.[3] The experience of having the machine under your direct command was utterly addictive, at least for a certain type of user, and thousands of hobbyists who had used computers in this way at work or school salivated at the notion of having it on-demand in their own home.
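The scheduling idea at the heart of time-sharing can be sketched in a few lines of modern code. The toy Python below is only an illustration (no historical system worked quite this way): a single processor round-robins through several terminals, giving each pending command a brief slice of attention.

```python
from collections import deque

# Toy round-robin scheduler: each "terminal" holds a queue of pending commands.
# The processor visits each terminal briefly, does a little work, and moves on.

def time_share(terminals: dict[str, deque], slice_ms: int = 50):
    """Cycle through terminals, giving each a short time slice in turn."""
    while any(terminals.values()):            # run until all input is handled
        for name, pending in terminals.items():
            if not pending:
                continue                       # this user is idle right now
            command = pending.popleft()
            # A real system would run the user's program for slice_ms
            # milliseconds here; we just echo a response.
            print(f"[{name}] ran '{command}' in a {slice_ms} ms slice")

terminals = {
    "tty1": deque(["LIST", "RUN"]),
    "tty2": deque(["PRINT 2+2"]),
}
time_share(terminals)
```

Run fast enough, the rotation is invisible, and each user at a teletype feels as though the machine is answering them alone.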

The microprocessor served as the third reagent in the brew from which the personal computer emerged. In the years just prior to the Altair’s debut, the declining price of integrated circuits and a growing demand for cheap computation had led Intel to create a single chip that could perform all the basic arithmetic and logic functions of a computer. Up to that point, if a business wanted to add electronics to their product—a calculator, a piece of automated industrial equipment, a rocket, or what have you—they would design a circuit, assembled from some mix of custom and off-the-shelf chips, that would provide the capabilities needed for that particular application. But by the early 1970s, the cost of adding a transistor to a chip had gotten so low that it made sense in many cases to buy and program a general-purpose computing chip—a microprocessor—that did more than you really needed, but that could be mass-produced to serve the needs of many different customers at low cost. This had the accidental side-effect of bringing the price of a general-purpose computer down to a point affordable to those electronics hobbyists who had been craving the interactive computing experience.

The final reagent was the explosive growth of American middle-class wealth in the decades after the Second World War. The American economy in the 1970s, despite the setbacks of “stagflation,” was an unprecedented engine of wealth and consumption, and Americans acquired new gadgets and gizmos faster than anyone else in the world. Though Americans constituted less than six percent of the world’s population, in 1973 they purchased roughly one-third of all cars produced in the world and one-half of all color televisions (14.6 million and 9.3 million, respectively).[4] At a time when a Big Mac at McDonald’s would run you sixty-five cents and an average new car in the U.S. cost less than $5,000, the first run of Altairs listed at a price of $395, and a machine kitted out with accessories would easily cost $1,000 or more.[5] The United States was by far the most promising place on earth to find thousands of people willing and able to throw that kind of money at an expensive toy.

For, despite a lot of rhetorical claims about their potential to boost productivity, home computers had almost no practical value in the 1970s. Hobbyists bought their computers in order to play with them: tinkering with the hardware itself to see how it could be expanded, writing software to see what they could make the hardware do, or playing in a more literal sense with computer games, shared for free within the hobby community or, later, purchased in dedicated hobby shops. It took years for the personal computer to evolve into a capable business machine, and years more to become an unquestioned part of everyday middle-class life.

I came along at a later stage of that evolution, part of a second generation of hobbyists who grew up already familiar with home computers. I still remember a clear, warm day when my father pulled up alongside me and my friends on a then-quiet stretch of road as we rode our bicycles back from the candy store a few miles from my house. He rolled down the passenger side window of his Chevy Nova compact and showed me the treasure trove he had just plundered from the electronics store, a plastic satchel containing three computer games sleeved in colorful cardboard: MicroProse’s F-19 Stealth Fighter and Sierra On-Line’s King’s Quest III and King’s Quest IV. Given the balmy weather and the release dates of those titles, it must have been the late summer or early fall of 1988. I was nine years old.

That roadside revelation changed my life. My father helped me install the games onto the Compaq Portable 286 computer that he no longer needed at work, and I became a PC gamer, forcing me to come to grips with the specialized technical knowledge that this entailed in those years: autoexec.bat files, extended and expanded memory, EGA and VGA graphics, IRQ settings, MIDI channels, and more. I learned that we didn’t have to accept the hardware of the computer as a given: it could be opened up, fiddled with, and improved, with additional memory chips and new sound and video cards. To be seriously interested in computer games at that time was, ipso facto, to become a computer hobbyist.

The eager boy is grown, the tech-savvy father is bent with age, the quiet road courses with traffic, and MicroProse and Sierra still exist only as hollowed-out brand names, empty signifiers. Likewise, the personal computer as the Altair generation created it and as my generation found it has changed out of all recognition. In the first decade of the twenty-first century, the personal computer mutated into three different kinds of device: into an always-on terminal to the Internet (and especially the World Wide Web), into a pocket communicator and attention-thief, and into a warehouse-scale computer.

But even before that, and indeed, even before I discovered the joys and frustrations of Sierra adventure games, the nature of the personal computer was already in flux. The hobbyists of the 1970s cherished a dream of free computing in two senses. First, computing made easily accessible: they believed anyone should be able to get their hands on computing power, cheaply and easily. Second, computing unshackled from organizational control, with hardware and software alike under the total and individual control of the user, who would also be the owner. Steve Jobs famously compared the personal computer to a “bicycle for our minds,” and a bicycle carried these same senses of freedom.[6] It made personal transportation easy, inexpensive, and fun, and it was also a machine that could be modified to the owner’s needs and desires without anyone else’s say so.

For the computer hobbyists of the 1970s, who loved computers for their own sake as much as for what they could actually do, these two forms of freedom went hand in hand. The personal computer rewarded these dedicated apprentices with a feeling of almost mystical power – the ability to cast electronic spells. But in the 1980s, their dreams clashed with the realities of the computer’s evolution into a machine for serious business and then into a consumer appliance. Big businesses wanted control, reliability, and predictability from their capital investments in fleets of computers, not user independence and liberation. Consumers had no patience for the demands of wizardry; they wanted ease of use and a guided experience. They felt no sense of loss at having computers whose software or hardware was harder to understand and modify, because they had never intended to do either. The assumption of the hobbyists that personal computer owners would have complete mastery over their machines could not survive these changes. Some embraced these changes as a natural side-effect of the expansion of the audience for the personal computer; others felt them as a betrayal of the personal computer’s entire purpose.

In this series, which I’m calling “A Bicycle for the Mind,” my intention is to follow the arc of these transformations; to trace where the personal computer came from and where it went. It is a story of how a hobby machine became a business machine and a consumer device, and how all three then disappeared into our pockets and our data centers. But it is also a story of how, through it all, the personal computer retained traces of its strange beginnings, as an expensive toy for nerds who believed that computer power could set you free.

ARPANET, Part 2: The Packet

By the end of 1966, Robert Taylor had set in motion a project to interlink the many computers funded by ARPA, a project inspired by the “intergalactic network” vision of J.C.R. Licklider. Taylor put the responsibility for executing that project into the capable hands of Larry Roberts. Over the following year, Roberts made several crucial decisions which would reverberate through the technical architecture and culture of ARPANET and its successors, in some cases for decades to come. The first of these in importance, though not in chronology, was to determine the mechanism by which messages would be routed from one computer to another.

The Problem

If computer A wants to send a message to computer B, how does the message find its way from the one to the other? In theory, one could allow any node in a communications network to communicate with any other node by linking every such pair with its own dedicated cable. To communicate with B, A would simply send a message over the outgoing cable that connects to B. Such a network is termed fully-connected. At any significant size, however, this approach quickly becomes impractical, since the number of connections necessary increases with the square of the number of nodes.[1] Instead, some means is needed for routing a message, upon arrival at some intermediate node, on toward its final destination.

As of the early 1960s, two basic approaches to this problem were known. The first was store-and-forward message switching. This was the approach used by the telegraph system. When a message arrived at an intermediate location, it was temporarily stored there (typically in the form of paper tape) until it could be re-transmitted out to its destination, or to another switching center closer to that destination.

Then the telephone appeared, and a new approach was required. A multiple-minute delay for each utterance in a telephone call to be transcribed and routed to its destination would result in an experience rather like trying to converse with someone on Mars. Instead the telephone system used circuit switching. The caller began each telephone call by sending a special message indicating whom they were trying to reach. At first this was done by speaking to a human operator, later by dialing a number which was processed by automatic switching equipment. The operator or equipment established a dedicated electric circuit between caller and callee. In the case of a long-distance call, this might take several hops through intermediate switching centers. Once this circuit was completed, the actual telephone call could begin, and that circuit was held open until one party or the other terminated the call by hanging up.

The data links that would be used in ARPANET to connect time-shared computers partook of qualities of both the telegraph and the telephone. On the one hand, data messages came in discrete bursts, like the telegraph, unlike the continuous conversation of a telephone. But these messages could come in a variety of sizes for a variety of purposes, from console commands only a few characters long to large data files being transferred from one computer to another. If the latter suffered some delays in arriving at their destination, no one would particularly mind. But remote interactivity required very fast response times, rather like a telephone call. One important difference between computer data networks and both the telephone and the telegraph was the error-sensitivity of machine-processed data.
A single character in a telegram changed or lost in transmission, or a fragment of a word dropped in a telephone conversation, was unlikely to seriously impair human-to-human communication. But if noise on the line flipped a single bit from 0 to 1 in a command to a remote computer, that could entirely change the meaning of that command. Therefore every message would have to be checked for errors, and re-transmitted if any were found. Such repetition would be very costly for large messages, which would be all the more likely to be disrupted by errors, since they took longer to transmit. A solution to these problems was arrived at independently on two different occasions in the 1960s, but the later instance was the first to come to the attention of Larry Roberts and ARPA.

The Encounter

In the fall of 1967, Roberts arrived in Gatlinburg, Tennessee, hard by the forested peaks of the Great Smoky Mountains, to deliver a paper on ARPA’s networking plans. Almost a year into his stint at the Information Processing Techniques Office (IPTO), many areas of the network design were still hazy, among them the solution to the routing problem. Other than a vague mention of blocks and block size, the only reference to it in Roberts’ paper is in a brief and rather noncommittal passage at the very end: “It has proven necessary to hold a line which is being used intermittently to obtain the one-tenth to one second response time required for interactive work. This is very wasteful of the line and unless faster dial up times become available, message switching and concentration will be very important to network participants.”[2] Evidently, Roberts had still not entirely decided whether to abandon the approach he had used in 1965 with Tom Marrill, that is to say, connecting computers over the circuit-switched telephone network via an auto-dialer.

Coincidentally, however, someone else was attending the same symposium with a much better thought-out idea of how to solve the problem of routing in data networks. Roger Scantlebury had crossed the Atlantic, from the British National Physical Laboratory (NPL), to present his own paper. Scantlebury took Roberts aside after hearing his talk and told him all about something called packet-switching, a technique his supervisor at the NPL, Donald Davies, had developed. Davies’ story and achievements are not generally well-known in the U.S., although in the fall of 1967 Davies’ group at the NPL was at least a year ahead of ARPA in its thinking.

Davies, like many early pioneers of electronic computing, had trained as a physicist. He graduated from Imperial College, London in 1943, when he was only 19 years old, and was immediately drafted into the “Tube Alloys” program – Britain’s code name for its nuclear weapons project. There he was responsible for supervising a group of human computers, using mechanical and electric calculators to crank out numerical solutions to problems in nuclear fission.[3] After the war, he learned from the mathematician John Womersley about a project Womersley was supervising out at the NPL, to build an electronic computer that would perform the same kinds of calculations at vastly greater speed. The computer, designed by Alan Turing, was called ACE, for “automatic computing engine.” Davies was sold, and got himself hired at NPL as quickly as he could. After contributing to the detailed design and construction of the ACE machine, he remained heavily involved in computing as a research leader at NPL.
He happened to be in the United States in 1965 for a professional meeting in that capacity, and used the occasion to visit several major time-sharing sites to see what all the buzz was about. In the British computing community, time-sharing in the American sense of sharing a computer interactively among multiple users was unknown. Instead, time-sharing meant splitting a computer’s workload across multiple batch-processing programs (to allow, for example, one program to proceed while another was blocked reading from a tape).[4] Davies’ travels took him to Project MAC at MIT, the RAND Corporation’s JOSS Project in California, and the Dartmouth Time-Sharing System in New Hampshire. On the way home, one of his colleagues suggested they hold a seminar on time-sharing to inform the British computing community about the new techniques they had learned about in the U.S. Davies agreed, and played host to a number of major figures in American computing, among them Fernando Corbató (creator of the Compatible Time-Sharing System at MIT) and Larry Roberts himself.

During the seminar (or perhaps immediately after), Davies was struck with the notion that the time-sharing philosophy could be applied to the links between computers, as well as to the computers themselves. Time-sharing computers gave each user a small time slice of the processor before switching to the next, giving each user the illusion of an interactive computer at their fingertips. Likewise, by slicing up each message into standard-sized pieces, which Davies called “packets,” a single communications channel could be shared by multiple computers, or by multiple users of a single computer. Moreover, this would address all the aspects of data communication that were poorly served by telephone- or telegraph-style switching. A user engaged interactively at a terminal, sending short commands and receiving short responses, would not have their single-packet messages blocked behind a large file transfer, since that transfer would be broken into many packets. And any corruption in such a large message would affect only a single packet, which could easily be re-transmitted to complete the message.

Davies wrote up his ideas in an unpublished 1966 paper entitled “Proposal for a Digital Communication Network.” The most advanced telephone networks were then on the verge of computerizing their switching systems, and Davies proposed building packet-switching into that next-generation telephone network, thereby creating a single wide-band communications network that could serve a wide variety of uses, from ordinary telephone calls to remote computer access. By this time Davies had been promoted to Superintendent of NPL, and he formed a data communications group under Scantlebury to flesh out his design and build a working demonstration.

Over the year leading up to the Gatlinburg conference, Scantlebury’s team had thus worked out the details of how to build a packet-switching network. The failure of a switching node could be dealt with by adaptive routing across multiple paths to the destination, and the failure of an individual packet by re-transmission. Simulation and analysis indicated an optimal packet size of around 1000 bytes: much smaller, and the loss of bandwidth to the header metadata required on each packet became too costly; much larger, and the response times for interactive users would be impaired too often by large messages.
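The core of Davies’ scheme can be illustrated with a short sketch. The Python below is purely a modern toy (the header fields, payload size, and checksum choice are mine, not the NPL’s format): it splits a message into numbered packets, each carrying its own checksum, so that a garbled packet can be detected and re-sent without repeating the whole transfer.

```python
import zlib

PAYLOAD_SIZE = 128  # bytes per packet; a stand-in figure, not the NPL's

def packetize(message: bytes, conversation_id: int):
    """Split a message into numbered packets, each with its own checksum."""
    packets = []
    for seq, start in enumerate(range(0, len(message), PAYLOAD_SIZE)):
        payload = message[start:start + PAYLOAD_SIZE]
        header = {"conv": conversation_id, "seq": seq,
                  "last": start + PAYLOAD_SIZE >= len(message)}
        packets.append({"header": header,
                        "checksum": zlib.crc32(payload),
                        "payload": payload})
    return packets

def reassemble(packets):
    """Verify and reorder packets; report any that arrived corrupted."""
    bad = [p["header"]["seq"] for p in packets
           if zlib.crc32(p["payload"]) != p["checksum"]]
    if bad:
        return None, bad        # ask the sender to re-transmit just these
    ordered = sorted(packets, key=lambda p: p["header"]["seq"])
    return b"".join(p["payload"] for p in ordered), []

message = b"A" * 1000
pkts = packetize(message, conversation_id=7)
pkts[3]["payload"] = b"garbled!"          # simulate line noise in one packet
data, to_resend = reassemble(pkts)
print(to_resend)                          # [3]: only packet 3 needs re-sending
```

The per-packet header is also where the size tradeoff shows up: the smaller the payload, the larger the fraction of every packet spent on header bytes rather than data.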
The paper delivered by Scantlebury contained details such as a packet layout format… and an analysis of the effect of packet size on network delay. Meanwhile, Davies’ and Scantlebury’s literature search had turned up a series of detailed research papers by an American who had come up with roughly the same idea several years earlier. Paul Baran, an electrical engineer at the RAND Corporation, had not been thinking at all about the needs of time-sharing computer users, however. RAND was a Defense Department-sponsored think tank in Santa Monica, California, created in the aftermath of World War II to carry out long-range planning and analysis of strategic problems in advance of direct military needs.[^sdc] Baran’s goal was to ward off nuclear war by building a highly robust military communications net, one that could survive even a major nuclear attack. Such a network would make a Soviet preemptive strike less attractive, since it would be very hard to knock out America’s ability to respond by hitting a few key nerve centers. To that end, Baran proposed a system that would break messages into what he called message blocks, which could be independently routed across a highly redundant mesh of communications nodes, to be reassembled only at their final destination.

[^sdc]: System Development Corporation (SDC), the primary software contractor to the SAGE system and the site of one of the first networking experiments, as discussed in the last segment, had been spun off from RAND.

ARPA had access to Baran’s voluminous RAND reports, but disconnected as they were from the context of interactive computing, their relevance to ARPANET was not obvious. Roberts and Taylor seem never to have taken notice of them. Instead, in one chance encounter, Scantlebury had provided everything to Roberts on a platter: a well-considered switching mechanism, its applicability to the problem of interactive computer networks, the RAND reference material, and even the name “packet.” The NPL’s work also convinced Roberts that higher speeds than he had contemplated would be needed to get good throughput, and so he upgraded his plans to 50 kilobit-per-second lines. For ARPANET, the fundamentals of the routing problem had been solved.[5]

The Networks That Weren’t

As we have seen, not one but two parties beat ARPA to the punch in figuring out packet-switching, a technique that has proved so effective that it’s now the basis of effectively all communications. Why, then, was ARPANET the first significant network to actually make use of it? The answer is fundamentally institutional. ARPA had no official mandate to build a communications network, but it did have a large number of pre-existing research sites with computers, a “loose” culture with relatively little oversight of small departments like the IPTO, and piles and piles of money. Taylor’s initial 1966 request for ARPANET came to $1 million, and Roberts continued to spend that much or more every year from 1969 onward to build and operate the network.[6] Yet for ARPA as a whole this amount of money was pocket change, and so none of his superiors worried too much about what Roberts was doing with it, so long as it could be vaguely justified as related to national defense.

By contrast, Baran at RAND had no means or authority to actually build anything. His work was pure research and analysis, which might be applied by the military services, if they desired to do so. In 1965, RAND did recommend his system to the Air Force, which agreed that Baran’s design was viable.
But the implementation fell within the purview of the Defense Communications Agency, which had no real understanding of digital communications. Baran convinced his superiors at RAND that it would be better to withdraw the proposal than to allow a botched implementation to sully the reputation of distributed digital communications.

Davies, as Superintendent of the NPL, had rather more executive authority than Baran, but a more limited budget than ARPA, and no pre-existing social and technical network of research computer sites. He was able to build a prototype local packet-switching “network” (it had only one node, but many terminals) at NPL in the late 1960s, on a modest budget of £120,000 over three years.[7] ARPANET spent roughly half that on annual operational and maintenance costs alone at each of its many network sites, excluding the initial investment in hardware and software.[8] The organization that would have had the power to build a large-scale British packet-switching network was the Post Office, which operated the country’s telecommunications networks in addition to its traditional postal system. Davies managed to interest a few influential Post Office officials in his ideas for a unified, national digital network, but to change the momentum of such a large system was beyond his power. Licklider, through a combination of luck and planning, had found the perfect hothouse in which his intergalactic network could blossom.

That is not to say that everything except the packet-switching concept was a mere matter of money. Execution matters, too. Moreover, several other important design decisions defined the character of ARPANET. The next we will consider is how responsibilities would be divided between the host computers sending and receiving a message and the network over which they sent it.

Further Reading

Janet Abbate, Inventing the Internet (1999)
Katie Hafner and Matthew Lyon, Where Wizards Stay Up Late (1996)
Leonard Kleinrock, “An Early History of the Internet,” IEEE Communications Magazine (August 2010)
Arthur Norberg and Julie O’Neill, Transforming Computer Technology: Information Processing for the Pentagon, 1962-1986 (1996)
M. Mitchell Waldrop, The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal (2001)

Microcomputers – The First Wave: Responding to Altair

[This post is part of “A Bicycle for the Mind.” The complete series can be found here.]

Don Tarbell: A Life in Personal Computing

In August 1968, Stephen Gray, sole proprietor of the Amateur Computer Society (ACS), published a letter in the society newsletter from an enthusiast in Huntsville, Alabama named Don Tarbell. To help other would-be owners of home-built computers, Tarbell offered a mounting board for integrated circuits for sale for $8 from his own hobby-entrepreneur company, Advanced Digital Design. Tarbell worked for Sperry Rand on projects for NASA’s Marshall Space Flight Center, but had gotten hooked on computers through coursework at the University of Alabama at Huntsville, and found the ACS through a contact at IBM.[1]

Over the ensuing years, integrated circuits became far cheaper and easier to come by, and building a real home computer on one’s own became far more feasible (though still a daunting challenge, demanding a wide range of hardware and software skills). In June 1972, Tarbell had mastered enough of those skills to report to the ACS Newsletter that he (at last) had a working computer system, with an 8-bit processor built from integrated circuits, four thousand bytes of memory, a text editor and a calculator program, a Teletype for input and output, and an eight-track-tape interface for long-term storage. Not long after this report to the ACS, Tarbell decamped from Alabama and moved to the Los Angeles area to work for Hughes Aircraft.[2]

Don Tarbell with his home-built computer system [Kilobaud: The Small Computer Magazine (May 1977), 132].

Three years after that, in 1975, the arrival of the Altair 8800 kit announced that anyone with the skills to assemble electronics could have the power of a minicomputer in their own home, and thousands heeded the call. A group of 150 of these personal computer hobbyists met in the commons of the apartment complex where Tarbell lived. They had come on Father’s Day for the inaugural meeting of the Southern California Computer Society (SCCS). Half of the participants already owned Altairs. Tarbell took on the position of secretary for the new society, and served on its board of directors. Within a few months, SCCS began producing its own magazine with a full editorial staff, a far more sophisticated operation than the old hand-typed ACS Newsletter; Tarbell eventually became one of its associate editors.[3]

But an Altair kit by itself was far from a complete computer system like the one Tarbell had built for himself back in 1972. It had a piddling 256 bytes of memory, and no devices for reading or writing data other than lights and switches. Dozens of hobbyists founded their own companies to sell other computer buffs the additional equipment that would answer the deficiencies of their newly-purchased Altairs. Don Tarbell was one of them.

Among the major problems was the inability to permanently store or load programs and data. Once you shut off the computer, everything you had entered into it was lost. A standard Teletype terminal came equipped with a paper tape punch and reader, but even a heavily used Teletype could cost $1,000. In February 1976, Tarbell offered a much simpler and cheaper solution, the Tarbell cassette interface, a board that would slot into the Altair case and connect the computer to an ordinary cassette recorder, writing or reading data to or from the magnetic tape.
Not only was a cassette machine much cheaper than a Teletype; cassettes were also more durable than paper, could store more data (up to 2200 bits per inch with Tarbell’s controller), and could be rewritten many times. Tarbell’s board sold for $150 assembled, or $100 as a kit. He later branched out into floppy disk controllers and an interpreter for the BASIC computer language, and became a minor celebrity of the growing microcomputer scene.[4]

Tarbell’s story offers a microcosm of the transition of personal computers, over the course of the 1970s, from an obscure niche hobby to a national industry. Like Hugo Gernsback in radio half a century before, home-computer tinkerers found new roles for themselves in a growing hobby business as community-builders, publishers, and small-scale manufacturers. Like Tarbell, the first wave of these entrepreneurs responded directly to the Altair, offering supplemental hardware to offset its weaknesses or offering a more reliable or more capable hobby computer.

The First Wave: Responding to Altair

The Micro Instrumentation and Telemetry Systems (MITS) Altair came with a lot of potential, but it lay mostly unrealized in the basic kit MITS shipped out. This was partly intentional: the Altair sold on the basis of its exceptionally low price (less than $500), and it simply couldn’t remain so cheap if it had all the features of a full-fledged minicomputer system. Other deficiencies arose by accident, out of the amateurish nature of MITS. The good timing and negotiating skills of Ed Roberts, the company’s owner, had put him at the spearhead of the hobby computer revolution, but no one at his company had exceptional talent in electronics or product design. The Altair took hours to assemble, and the assembled machines often didn’t work. Follow-up accessories came out slowly as MITS technicians struggled to get them working. Tarbell’s cassette interface succeeded because it performed faster and more reliably than MITS’ equivalent.

The most urgent need of the hobbyist, other than easier input and output, was additional memory beyond the scanty 256 bytes included with the base kit: far from enough to run a meaningful program, like a BASIC interpreter. In the spring of 1975, MITS started shipping a 4096-byte (4K) memory board designed by Roberts, but these boards simply didn’t work.[5]

Unsurprisingly, other hobby-entrepreneurs quickly stepped up to fill the gaps. Several of them came from the most famous of the Altair-inspired hobby communities, the Homebrew Computer Club, which met in Silicon Valley and attracted attendees from around the Bay Area. Processor Technology was founded in Berkeley by Homebrew regular and electronics enthusiast Bob Marsh and his reclusive partner, Gary Ingram. In the spring of 1975, they began offering a 4K memory board for the Altair that actually worked. Later, the company came out with its own tape controller and a display board, the VDM-1, that would turn an Altair into a TV Typewriter.[6]

MITS’ 4K memory board compared to Processor Technology’s. Even without knowing anything about hardware design, it’s easy to see how sloppy the former is compared to the latter. [s100computers.com]

Only one “authorized” Altair board maker existed: Cromemco, also located in the Bay Area. Cromemco founders Harry Garland and Roger Melen had met as Ph.D. students in electrical engineering at Stanford (and named their company after their dormitory, Crothers Memorial).
They contributed articles to Popular Electronics regularly, and found out about the Altair while visiting the magazine’s offices in New York. They originally intended to build an interface board for the Altair that could read data from their “Cyclops” digital camera design. Despite the early partnership, no Cromemco board saw the light of day until 1976. Their slow start notwithstanding, Garland and Melen created two products of significance to MITS’ business and to the future of personal computing: the “Dazzler” graphics board and the “Bytesaver” read-only-memory (ROM) board. Unlike the TV Typewriter or the VDM-1, which could display only text, the Dazzler could paint arbitrary pixels onto the screen from an eight-color palette (though only at a resolution of 64 x 64, or up to 128 x 128 in monochrome mode). Less sexy but equally significant, the Bytesaver stored a program that would be loaded into the Altair’s memory immediately on power-up; prior to that, an Altair could do nothing until basic control instructions were keyed in manually to bootstrap it (instructing it, for example, to load another program from paper tape).[7]

A 1976 ad for the Cromemco Dazzler [Byte (April 1976), 7].

Roberts bristled at the competition from rival card makers. But more aggravating still were the rival computer makers cranking out Altair knock-offs. In 1974, Robert Suding and Deck Bemis had launched the Digital Group out of Denver to support the Micro-8. After the Altair came out, they decided to make their own, superior computer; Suding happily quit his steady but dull job at IBM to serve as the Woz to Bemis’ Jobs, avant la lettre. Digital Group computers came complete with an eight-kilobyte memory board, a cassette tape controller, and a ROM chip that could boot a program directly from tape. They also had a processor board independent of the backplane into which expansion cards slotted, which meant you could upgrade your processor without replacing any of your other boards. In short, they offered a computer hobbyist’s dream. The catch came in the form of poor quality control and very long waits for delivery, after paying cash up front.[8]

Other would-be Altair-killers entered the market from around the country in 1975. Mike Wise, of Bountiful, Utah, created the Sphere, the first hobby computer with an integrated keyboard and display—although production was so limited that, decades later, vintage computer collectors would doubt whether any were actually built. The SWTPC 6800 came out of San Antonio, built by the same Southwest Technical Products Corporation that had sold parts for Don Lancaster’s TV Typewriter. A pair of Purdue graduate students in West Lafayette, Indiana wrote software for the SWTPC under the moniker of Technical Systems Consultants. A few hundred miles to the east, Ohio Scientific of Hudson, Ohio released a Microcomputer Trainer Board that put it, too, on the hobbyist map.[9]

The SWTPC 6800. The bluntly rectangular cabinet design with the computer’s name prominent on the faceplate is typical of this era of microcomputers. [Michael Holley]

But the real onslaught came in 1976. By that time hobbyists with entrepreneurial ambition had had time to fully absorb the lessons of the Altair, to hone their own skills at computer building, and to adopt new chips like the MOS Technology 6502 or Zilog Z80. The most significant releases of the year were the Apple Computer, the MOS Technology KIM-1, the IMSAI 8080, the Processor Technology Sol-20, and, in the unkindest cut for Roberts, the Z-1 from former ally Cromemco.
Most of these computer makers solved the upgrade problem in a blunter fashion than the Digital Group’s sophisticated swappable boards: they simply copied the card interface protocol (known as the “bus”) of the Altair. Already own an Altair? Buy a Z-1 or a Sol-20 and you could put all of the expansion cards from your old computer into the new one. Cromemco founder Roger Melen encouraged the community to disassociate this interface from MITS by calling it the S100 bus, not the Altair bus—another twist of the knife.[10]

Almost all of these businesses (excepting IMSAI, of whom more shortly) continued to exclusively target electronics hobbyists as their customers. The Z-1 looked just like an upmarket Altair, with a front panel now adorned with slightly nicer switches and lights. The Apple Computer and the KIM-1 offered no frills at all, just a bare green printed circuit board festooned with chips and other components. Processor Technology’s Sol-20, inflected with Lee Felsenstein’s vision of a “Tom Swift” terminal for the masses, sported a handsome blue case with an integrated keyboard and walnut side panels. This represented substantial progress in usability compared to the company’s first memory boards (which came only as a kit the buyer had to assemble), but the Sol-20 was still marketed via Popular Electronics as a piece of hobby equipment.[11]

Software Entrepreneurs

In early 1975, a computer hobbyist who wanted a minicomputer-like system of their own had only one low-price option: buy an Altair, then build, or wait for, or scrounge the additional components that would make it into a functional system. Eighteen months later, abundance had replaced scarcity in the computer hobby hardware market, with many makes, models, and accessories to choose from. But what about software? A working computer consisted of metal, semiconductor, and plastic, but also a certain quantity of “thought-stuff”: program text that would tell the computer what, exactly, to compute.

A large proportion of the hobby community had a minicomputer background. They were accustomed to writing some software themselves and getting the rest (compilers, debuggers, math libraries, games, and more) from fellow users, often through organized community exchanges like the DEC user group program library. So they expected to get microcomputer programs in the same way, through free exchange with fellow hobbyists. Even in the mainframe world, software was rarely sold independently of a hardware system prior to the 1970s.[12]

It came as a shock, then, when, immediately on the heels of the Altair, the first software entrepreneurs appeared. Paul Allen and Bill Gates—especially Gates—were roughly a decade younger than most of the early hardware entrepreneurs, at just 22 and 19, respectively. Compare Ed Roberts of MITS at 33; Lee Felsenstein of Processor Technology, 29; Harry Garland of Cromemco, 28; Chuck Peddle of MOS Technology and Robert Suding of the Digital Group, both 37. These two young men from Seattle had caught the computer bug at the keyboard of their private school’s time-sharing terminal; they had finagled some computer time at a Seattle time-sharing company in exchange for finding bugs, but had no serious work experience that would have immersed them in the practices of the minicomputer world. For all their youth, though, Gates and Allen brimmed with ambition, and when they saw the Altair on the cover of Popular Electronics, they saw a business opportunity.
Of course, everyone knew that a computer would need software to be useful, but it was not obvious that anyone would pay for that software. Gates and Allen, having not yet grown accustomed to getting software for free, had an easier time imagining that they would. They also knew that the first program any self-respecting hobbyist would want to get their hands on was a BASIC interpreter, so that they could run the huge existing library of BASIC software (especially games) and begin writing programs of their own.

Gates and Allen in 1981. [MOHAI, King County News Photograph Collection, 2007.45.001.30.02, photo by Chuck Hallas]

Like Cromemco, Gates and Allen started out as partners with MITS—within days of seeing the Altair cover, they contacted Ed Roberts promising a BASIC interpreter. They delivered in March, despite having no Altair, nor even an 8080 processor—they developed the program on a simulator written by Allen for the DEC PDP-10 at Harvard, where Gates was enrolled as a sophomore. In another debt to DEC, Gates based the syntax on Digital’s popular BASIC-PLUS. Allen moved to Albuquerque soon after to head a new software division at MITS. Gates eventually followed to nurture their independent software venture, Micro-Soft, though he did not completely abandon Harvard until 1977.[13]

Many hobbyists balked at the culture shock of paying for software, and freely exchanged paper tapes of Altair BASIC in defiance of Micro-Soft and MITS, prompting Gates’ famous “Open Letter to Hobbyists” in February 1976. There he made the case that software writers deserved compensation for their work just as much as hardware builders did, prompting a flurry of amici curiae from various corners of the hobby (with far more weighing in for the defendants than the plaintiff). But, though this controversy is famous for its retrospective echoes of later debates over free software, Gates and Allen rendered the issue irrelevant almost immediately by switching to a different business model. They began licensing BASIC to computer manufacturers at a flat fee, instead of a royalty on each copy sold. MITS paid $31,200, for example, for the BASIC for a new Altair model using the Motorola 6800 processor. The licensee could then choose to charge for the software or not (Micro-Soft didn’t care), but they typically didn’t. This approach bypassed the cultural conflict altogether; BASIC interpreters and other systems software became a bullet point in a list of advertised features for a given piece of hardware rather than a separate item in the catalog.[14]

Having a BASIC would let you run programs on your computer, but the other crucial linchpin for an easy-to-use microcomputer system was a program to manage your other programs and data. As faster and denser magnetic storage supplanted paper tape, computer users needed a way to quickly and easily move files between memory and their cassettes or floppy disks. By far the most popular tool for this purpose was CP/M, for Control Program for Microcomputers.

CP/M was the creation of Gary Kildall, who got his hands on his first microcomputer directly from the source: Intel. Kildall grew up in Seattle and studied computer science at the University of Washington, where he had a brief run-in with Gates and Allen, who at the time were teenagers working at a company part-owned by one of his professors, the Computer Center Corporation, in exchange for free computer time.
Drafted into the army, Kildall used his connections at the university and his father’s position as a merchant marine instructor to get posted instead to naval officer training, and then to a position as a math and computer science teacher at the Naval Postgraduate School in Monterey. After completing his obligations to the Navy in 1972, he stayed on as a civilian instructor.[15]

Gary Kildall with his wife Dorothy, in 1978. [Computer History Museum]

That same year, Kildall learned about the Intel 4004 and, like so many other computer enthusiasts, became enchanted with the idea of a computer of his own. The most obvious route was to get his hands on Intel’s development kit for the 4004, the SIM4-01, intended to be used by customers to write software for the new chip. So Kildall began talking to people at Intel, and then consulting at Intel, and in exchange for software written for Intel he managed to acquire microprocessor development kits for the 4004, and later for the 8008 and 8080 processors.[16]

The most significant piece of software Kildall provided to Intel was PL/M, the Programming Language for Microprocessors, which allowed developers to express code in a higher-level syntax that would then be compiled down to 4004 (or 8008, or 8080) machine language. But you could not write PL/M programs on a microcomputer; it lacked the necessary mass storage interface and software tools. Clients were expected to write programs on a minicomputer and then flash the final result onto a ROM chip that would power whatever microprocessor application they had in mind (a traffic light controller, for example, or a cash register). What Kildall dreamed of was to “self-host” PL/M: that is, to author PL/M programs on the same computer on which they would run. By 1974 he had assembled everything he needed—an Intellec 8/80 development kit (for the 8080), a used hard drive and teletype, a disk controller board built by a friend—except for a program that could load and store the PL/M compiler, the code to be compiled, and the output of the compilation. It was for this reason, to complete his own personal quest, that he wrote CP/M.[17]

Only after the fact did he think about selling it, just in time to catch the rising wave of hobby computers. Though Kildall later offered direct sales to users, he began with the same flat-fee license model that Micro-Soft had adopted: he sold the software to Omron, a smart terminal maker, and then to IMSAI for their 8080 computer, each at a fee of $25,000. He incorporated his software business as Intergalactic Digital Research (later just Digital Research) in Pacific Grove, just west of Monterey. Gates visited in 1977 to float the idea of a California merger of the two (relative) giants of microcomputer software, but he and Allen decided to relocate to Seattle instead, leaving behind an intriguing what-if.[18]

A CP/M command-line interaction via a Tarbell disk controller, showing all the files on disk “A”. [Computer History Museum]

CP/M soon became the de facto standard operating system for personal computers. Having an operating system made writing application software far easier, because basic routines like reading data from disk could be delegated to system calls instead of being re-written from scratch every time.
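The appeal of that kind of layering is easy to see in a sketch. The Python below is only a loose modern analogy (all names here are invented, not CP/M’s actual entry points): application code asks an operating-system routine for data, the OS routine calls a small machine-specific hardware layer, and moving to a new machine means rewriting only that bottom layer.

```python
# A loose analogy for the OS / hardware-layer split. The application and the
# "operating system" routine never change; only the small machine-specific
# layer does. Names are invented for illustration, not CP/M's real interface.

class HardwareLayer:
    """What each machine must supply: raw sector I/O and console output."""
    def read_sector(self, track: int, sector: int) -> bytes: ...
    def console_out(self, text: str) -> None: ...

class ImaginaryDiskMachine(HardwareLayer):
    def __init__(self):
        # Pretend disk: 26 sectors of 128 bytes per track, filled with dummy data
        self.disk = {(t, s): bytes([t * 26 + s] * 128)
                     for t in range(4) for s in range(26)}
    def read_sector(self, track, sector):
        return self.disk[(track, sector)]
    def console_out(self, text):
        print(text)

def load_file(hw: HardwareLayer, sectors: list[tuple[int, int]]) -> bytes:
    """An 'operating system' routine: applications ask for data by sector list
    and never touch the disk controller themselves."""
    return b"".join(hw.read_sector(t, s) for t, s in sectors)

hw = ImaginaryDiskMachine()
data = load_file(hw, [(0, 0), (0, 1)])
hw.console_out(f"loaded {len(data)} bytes")   # same OS code on any machine
```

Swap in a different HardwareLayer subclass and load_file keeps working unchanged, which is roughly the portability argument made for CP/M’s BIOS in the next paragraph.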
CP/M in particular stood out for its quality in an often-slapdash hobby industry, and could easily be adapted to new platforms because of Kildall’s innovation of a Basic Input/Output System (BIOS), which acted as a translation layer between the operating system and the hardware. But what bootstrapped its initial popularity was the IMSAI deal, which attached Digital Research to the rising star in what up to that point had been Altair’s market to lose.[19]

Getting Serious?

There was one company thinking different about the microcomputer market in 1975: IMSAI, headquartered in San Leandro, California, intended to sell business machines. It had the right name for it, an acronym stuffed wall-to-wall with managerial blather: Information Management Sciences Associates, Inc. William (Bill) Millard had been an IBM sales rep, then worked for the city of San Francisco setting up computer systems, and founded IMS Associates to sell his services to companies that needed similar IT help.

Bill Millard circa 1983. Provenance unknown.

Despite the anodyne name he gave to his company, Millard, too, felt the influence of the ideologies of personal liberation that seemed to rise from San Francisco Bay like a fog. But unlike a Lee Felsenstein or a Bob Albrecht, he thought mainly of liberating himself, not others: he was a devotee of Erhard Seminars Training, or est, a self-help seminar which promised paying customers access to an understanding of the world-changing power of their will in just two weekends; according to Erhard, “If you keep saying it / the way it really is / eventually your word / is law in the universe.”[20]

Neither Millard nor either of his technical employees (part-time programmer Bruce Van Natta and physicist-cum-electrical engineer Joseph Killian) had any prior interest or experience in home computers; they stumbled into the business almost by accident. Their primary contract, to build a computer networking hub for car dealerships based on a DEC computer, had begun spiraling towards failure. Casting about for some solution, they latched onto the news of the Altair’s success: here was an inexpensive alternative to the DEC. When MITS refused to deliver on their timetable, they decided, in the late summer of 1975, to clone the Altair instead. And, to get cash flow going to pay their expenses and loans, they would sell their clone direct to consumers as well, while working to complete the big contract. When orders from hobbyists began to pour in, they abandoned the automotive scheme altogether to go all-in on their Altair clone.[21]

The IMSAI 8080. It closely resembles the Altair, but with a cleaner design and higher-quality front-panel components. [Morn]

The IMSAI 8080 began shipping in December 1975, at a kit price of $439. Millard cultivated an est culture at the company: employees with the “training” were favored, and total commitment to the work was expected. Some employees considered Millard a “genius or a prophet,” and spouses and children of employees showed up after school to help assemble computers. By April, they were doing hundreds of thousands of dollars per month in sales. The IMSAI was board-compatible with the Altair but made improvements that stood out to the connoisseur: a more efficient internal layout, a cleaner and more professional exterior, and a seriously beefed-up power supply that could support a case fully loaded with expansion boards.
These advantages appealed enough to buyers to make it Altair’s top competitor in 1976.[22] But what most set IMSAI apart in 1976 was the fact that it was led not by hobby entrepreneurs but by a businessman who wanted to build business machines. An advertisement in the May 1976 issue of BYTE magazine described the IMSAI as a “rugged, reliable, industrial computer with high commercial-type performance,” as opposed to “Altair’s hobbyist kit” (the IMSAI was, of course, also sold as a kit), along with obscure allusions to expensive IMSAI business products (Hypercube and Intelligent Disk) that never materialized. This was an odd pretense to put on while advertising in BYTE—a publication featuring articles such as “More to Blinking Lights than Meets the Eye” and “Save Money Using Mini Wire Wrap.”

This is not to say that IMSAI (or its contemporaries) had no commercial customers or applications. Alan Cooper, known later for creating Visual Basic, wrote a basic accounting program for the IMSAI in 1976 called General Ledger. But these applications remained a small minority amid the mass of buyers who were simply computer-curious.[23]

In 1977, IMSAI began advertising a “megabyte micro,” another fantasy. Such a powerful and expensive machine could sell at the higher end of the minicomputer market, but not to IMSAI’s actual buyers: hobbyists who were buying kits for less than a thousand dollars out of retail storefronts. IMSAI tried again to attract serious business customers with its second major product, the all-in-one VDP-80, which began shipping in late 1977 with an integrated keyboard, display, and dual disk drives, but it was plagued with quality defects, and lacked any application software for its would-be business customers to use.[24] Those customers did arrive in large numbers in good time, but only after a second wave of all-in-one computers appeared, aimed at the mass market, and after the emergence of useful application software to run on them.

From ACS to Altair: The Rise of the Hobby Computer

[This post is part of “A Bicycle for the Mind.” The complete series can be found here.]

The Early Electronics Hobby

A certain pattern of technological development recurred many times in the decades around the turn of the twentieth century: a scattered hobby community, tinkering with a new idea, develops it to the point where those hobbyists can sell it as a product. This sets off a frenzy of small entrepreneurial firms, competing to sell to other hobbyists and early adopters. Finally, a handful of firms grow to the point where they can drive down costs through economies of scale and put their smaller competitors out of business. Bicycles, automobiles, airplanes, and radio broadcasting all developed more or less in this way. The personal computer followed this same pattern; indeed, it marks the very last time that a “high-tech” piece of hardware emerged from this kind of hobby-led development. Since that time, new hardware technology has typically depended on new microchips, a capital barrier far too high for hobbyists to surmount; but as we have seen, the computer hobbyists lucked into ready-made microchips created for other reasons, yet already suited to their purposes.

The hobby culture that created the personal computer was historically continuous with the American radio hobby culture of the early twentieth century, and, to a surprising degree, the foundations of that culture can be traced back to the efforts of one man: Hugo Gernsback. Gernsback (born Gernsbacher, to well-off German Jewish parents) came to the United States from Luxembourg in 1904, at the age of nineteen, shortly after his father’s death. Already fascinated by electrical equipment, American culture, and the fiction of Jules Verne and H.G. Wells, he started a business in Manhattan, the Electro Importing Company, that offered both retail and mail-order sales of radios and related equipment. His company catalog evolved into a magazine, Modern Electrics, and Gernsback evolved into a publisher and community builder (he founded the Wireless Association of America in 1909 and the Radio League of America in 1915), a role he relished for the rest of his working life.[1]

Gernsback (foreground) giving an over-the-air lecture on the future of radio. From his 1922 book, Radio For All, p. 229.

The culture that Gernsback nurtured valued hands-on tinkering and forward-looking futurism, and in fact viewed them as two sides of the same coin. Science fiction (“scientifiction,” as Gernsback called it) writing and practical invention went hand in hand, for both were processes for pulling the future into the present. In a May 1909 article in Modern Electrics, for example, Gernsback opined on the prospects for radio communication with Mars: “If we base transmission between the earth and Mars at the same figure as transmission over the earth, a simple calculation will reveal that we must have the enormous power of 70,000 K. W. to our disposition in order to reach Mars,” and went on to propose a plan for building such a transmitter within the next fifteen or twenty years. As science fiction emerged as its own genre with its own publications in the 1920s (many of them also edited by Gernsback), this kind of speculative article mostly disappeared from the pages of the electronics hobby magazines. Gernsback himself occasionally dropped in with an editorial, such as a 1962 piece in Radio-Electronics on computer intelligence, but the median electronics magazine article had a much more practical focus.
Readers were typically hobbyists looking for new projects to build, or service technicians wanting to keep up with the latest hardware and industry trends.[2] Nonetheless, the electronic hobbyists were always on the lookout for the new, for the expanding edge of the possible: from vacuum tubes, to televisions, to transistors, and beyond. It’s no surprise that this same group would develop an early interest in building computers. Nearly everyone we find building (or trying to build) a personal or home computer prior to 1977 had close ties to the electronic hobby community.

The Gernsback story also highlights a common feature of hobby communities of all sorts. A subset of radio enthusiasts, seeing the possibility of making money by fulfilling the needs of their fellow hobbyists, started manufacturing businesses to make new equipment for hobby projects, retail businesses to sell that equipment, or publishing businesses to keep the community informed on new equipment and other hobby news. Many of these enterprises made little or no money (at least at first), and were fueled as much by personal passion as by the profit motive; they were the work of hobby-entrepreneurs. It was this kind of hobby-entrepreneur who would first make personal computers available to the public.

The First Personal Computer Hobbyists

The first electronic hobbyist we know of to take an interest in building computers was Stephen Gray. In 1966, he founded the Amateur Computer Society (ACS), an organization that existed mainly to produce a series of quarterly newsletters typed and mimeographed by Gray himself. Gray has little to say about his own biography in the newsletter or in later reflections on the ACS. He reveals that he worked as an editor of the trade magazine Electronics, that he lived in Manhattan and then Darien, Connecticut, that he had been trying to build a computer of his own for several years, and little else. But he clearly knew the radio hobby world. In the fourth number of his newsletter, dated February 1967, he floated the idea of a “Standard Amateur Computer Kit” (SACK) that would provide an economical starting point for new hobbyists, writing that:[3]

Amateur computer builders are now much like the early radio amateurs. There’s a lot of home-brew equipment, much patchwork, and most commercial stuff is just too expensive. The ACS can help advance the state of the amateur computer art by designing a standard amateur computer, or at least setting up the specs for one. Although the mere idea of a standard computer makes the true blue home-brew types shudder, the fact is that amateur radio would not be where it is today without the kits and the off-the-shelf equipment available.[4]

By the spring of 1967, Gray had found seventy like-minded members through advertisements in trade and hobby publications, most of them in the United States, but a handful in Canada, Europe, and Japan. We know little about the backgrounds or motivations of these men (and they were exclusively men), but when their employment is mentioned, they are found at major computer, electronics, or aerospace firms; at national labs; or at large universities. We can surmise that most worked with or on computers as part of their day job. A few letter writers disclose prior involvement in hobby electronics and radio, and from the many references to attempts to imitate the PDP-8 architecture, we can also guess that many members had some association with DEC minicomputer culture.
It is speculative but plausible to guess that the 1965 release of the PDP-8 might have instigated Gray’s own home computer project and the later creation of the ACS. Its relatively low price, compact size, and simple design may have catalyzed the notion that home computers lay just out of reach, at least for Gray and his band of like-minded enthusiasts.

Whatever their backgrounds and motivations, the efforts of these amateurs to actually build a computer proved mostly fruitless in these early years. The January 1968 newsletter reported a grand total of two survey respondents who possessed an actual working computer, though respondents as a whole had sunk an average of two years and $650 into their projects ($6,000 in 2024 dollars). The problem of assembling one’s own computer would daunt even the most skilled electronic hobbyist: no microprocessors existed, nor any integrated circuit memory chips, and indeed virtually no chips of any kind, at least at prices a “homebrewer” could afford. Both of the two complete computers reported in the survey were built from hand-wired transistor logic. One was constructed from the parts of an old nuclear power system control computer, PRODAC IV. Jim Sutherland took the PRODAC’s remains home from his work at Westinghouse after its retirement, and re-dubbed it the ECHO IV (for Electronic Computing Home Operator). Though the ECHO IV was technically a “home” computer, borrowing an existing computer from work was not a path that most would-be home-brewers could follow. This hardly had the makings of a technological revolution. The other complete “computer,” the EL-65 by Hans Ellenberger of Switzerland, was in truth an electronic desktop calculator; it could perform arithmetic ably enough, but could not be programmed.[5]

The Emergence of the Hobby-Entrepreneur

As integrated circuit technology got better and cheaper, the situation for would-be computer builders gradually improved. By 1971, the first, very feeble, home computer kits appeared on the market, the first signs of Gray’s “SACK.” Though neither used a microprocessor, they took advantage of the falling prices of integrated circuits: the CPU of each consisted of dozens of small chips wired together. The first was the National Radio Institute (NRI) 832, the hardware accompaniment to a computer technician course disseminated by the NRI, and priced at about $500. Unsurprisingly, the designer, Lou Frenzel, was a radio hobby enthusiast, and a subscriber to Stephen Gray’s ACS Newsletter. But the NRI 832 is barely recognizable as a functional computer: it had a measly sixteen 8-bit words of read-only memory, configured by mechanical switches (with an additional sixteen bytes of random-access memory available for purchase).[6]

The NRI 832. The switches on the left were used to set the values of the bits in the tiny memory. The banks of lights at the top left and right, showing the binary values of the program counter and accumulator, were the only form of output [vintagecomputer.net].
The $750 Kenbak-1 that appeared the same year was nominally more capable, with 256 bytes of memory, though implemented with shift-register chips (accessible one bit at a time), not random-access memory. Indeed, the entire machine had a serial-processing architecture, pushing only one bit at a time through the CPU, and ran at only about 1,000 instructions per second—very slow for an electronic computer. Like the NRI 832, it offered only switches as input and only a small panel of display lights for showing register contents as output. Its creator, John Blankenbaker, was a radio lover from boyhood before serving in the Navy as an electronics technician. He began working on computers in the 1950s, starting with the Bureau of Standards SEAC. Intrigued by the possibility of bringing a computer home, he tinkered for years with spare parts toward building his own machine, becoming his own private ACS. By 1971 he thought he had a saleable device that could be used for teaching programming, and he formed the eponymous “Kenbak” company to sell it.[7]

Blankenbaker was the first of the amateur computerists to try to bring his passion to market; the first hobby-entrepreneur of the personal computer. He was not the most successful. I found no records of the sales of the NRI 832, but by Blankenbaker’s own testimony, only forty-four Kenbak-1s were sold. Here were home computer kits readily available at a reasonable price, four years before Altair. Why did they fall flat? As we have seen, most members of the Amateur Computer Society had aimed to make a PDP-8 or something like it; this was the most familiar computer of the 1960s and early 1970s, and provided the mental model for what a home computer could and should be. The NRI 832 and Kenbak-1 came nowhere close to the capabilities of a PDP-8, nor were they designed to be extensible or expandable in any way that might allow them to transcend their basic beginnings. These were not machines to stir the imaginative loins of the would-be home computer owner.

Hobby-Entrepreneurship in the Open

These early, halting steps towards a home computer, from Stephen Gray to the Kenbak-1, took place in the shadows, unknown to all but a few, the hidden passion of a handful of enthusiasts exchanging hand-printed newsletters. But several years later, the dream of a home computer burst into the open in a series of stories and advertisements in major hobby magazines. Microprocessors had become widely available. For those hooked on the excitement of interacting one-on-one with a computer, the possibility of owning their own machine felt tantalizingly close. A new group of hobby-entrepreneurs now tried to make their mark by providing computer kits to their fellow enthusiasts, with rather more success than NRI and Kenbak.

The overture came in the fall of 1973, with Don Lancaster’s “TV Typewriter,” featured on the cover of the September issue of Radio-Electronics (a Gernsback publication, though Gernsback himself was, by then, several years dead). Lancaster, like most of the people we have met in this chapter, was an amateur “ham” radio operator and electronics tinkerer. Though he had a day job at Goodyear Aerospace in Phoenix, Arizona, he figured out how to make a few extra bucks from his hobby by publishing projects in magazines and selling pre-built circuit boards for those projects via a Texas hobby firm called Southwest Technical Products (SWTPC).

The 1973 Radio-Electronics TV Typewriter cover.
His TV Typewriter was, of course, not a computer at all, but the excitement it generated certainly derived from its association with computers. One of many obstacles to a useful home computer was the lack of a practical output device: something more useful than the handful of glowing lights that the Kenbak-1 sported, but cheaper and more compact than the then-standard computer input/output device, a bulky teletype terminal. Lancaster’s electronic keyboard, which required about $120 in parts, could hook up to an ordinary television and turn it into a video text terminal, displaying up to sixteen lines of thirty-two characters each. Shift registers continued to be the only cheap form of semiconductor memory, and so that was what Lancaster used for storing the characters to be displayed on screen. Lancaster gave the parts list and schematic for the TV Typewriter away for free, but made money by selling pre-built subassemblies via SWTPC that saved buyers time and effort, and by publishing guidebooks like the TV Typewriter Cookbook.[8]

The next major landmark appeared six months later in a ham radio magazine, QST, named after the three-letter ham code for “calling all stations.” A small ad touted the availability of “THE TOTALLY NEW AND THE VERY FIRST MINI-COMPUTER DESIGNED FOR THE ELECTRONIC/COMPUTER HOBBYIST” with kit prices as low as $440. This was the SCELBI-8H, the first computer kit based around a microprocessor, in this case the Intel 8008. Its creator, Nat Wadsworth, lived in Connecticut, and became enthusiastic about the microprocessor after attending a seminar given by Intel in 1972, as part of his job as an electrical engineer at an electronics firm. Wadsworth was another ham radio enthusiast, and already enough of a personal computing obsessive to have purchased a surplus DEC PDP-8 at a discount for home use (he paid “only” $2,000, about $15,000 in 2024 dollars). Since his employer did not share his belief in the 8008, he looked for another outlet for his enthusiasm, and teamed up with two other engineers to develop what became the SCELBI-8H (for SCientific ELectronic BIological). Their ads drew thousands of responses and hundreds of orders over the following months, though they ended up losing money on every machine sold.[9]

A similar machine appeared several months later, this time as a hobby magazine story, on the cover of the July 1974 issue of Radio-Electronics: “Build the Mark-8 Minicomputer,” ran the headline (notice again the “minicomputer” terminology: a PDP-8 of one’s own remained the dream). The Mark-8 came from Jonathan Titus, a grad student from Virginia, who had built his own 8008-based computer and wanted to share the design with the rest of the hobby. Unlike SCELBI, he did not sell it as a complete machine or even a kit: he expected the Radio-Electronics reader to buy and assemble everything themselves. That is not to say that Titus made no money: he followed a hobby-entrepreneur business model similar to Don Lancaster’s, offering an instructional guidebook for $5, and making some pre-made boards available for sale through a retailer in New Jersey, Techniques, Inc.

The 1974 Mark-8 Radio-Electronics cover.

The SCELBI-8H and Mark-8 looked much more like a “real” minicomputer than the NRI 832 or Kenbak-1. A hobbyist hungry for a PDP-8-like machine of their own could recognize in this generation of machines something edible, at least.
Both used an eight-bit parallel processor, not an antiquated bit-serial architecture, came with one kilobyte of random-access memory, and were designed to support textual input/output devices. Most importantly, both could be extended with additional memory or I/O cards. These were computers you could tinker with, that could become an ongoing hobby project in and of themselves. A ham radio operator and engineering student in Austin, Texas named Terry Ritter spent over a year getting his Mark-8 fully operational with all of the accessories that he wanted, including an oscilloscope display and cassette tape storage.[10]

In the second half of 1974, a community of hundreds of hobbyists like Ritter began to form around 8008-based computers, significantly larger than the tiny cadre of Amateur Computer Society members. In September 1974, Hal Singer began publishing the Mark-8 User Group Newsletter (later renamed the Micro-8 Newsletter) for 8008 enthusiasts out of his office at the Cabrillo High School Computer Center in Lompoc, California. He attracted readers from all across the country: California and New York, yes, but also Iowa, Missouri, and Indiana. Hal Chamberlin started the Computer Hobbyist newsletter two months later. Hobby entrepreneurship expanded around the new machines as well: Robert Suding formed a company in Denver called the Digital Group to sell a packet of upgrade plans for the Mark-8.[11]

The first tender blossoms of a hobby computer community had begun to emerge. Then another computer arrived like a spring thunderstorm, drawing whole gardens of hobbyists up across the country and casting the efforts of the likes of Jonathan Titus and Hal Singer into the shade. It, too, came as a response to the arrival of the Mark-8, by a rival publication in search of a blockbuster cover story of its own.

Altair Arrives

Art Salsberg and Les Solomon, editors at Popular Electronics, were not oblivious to the trends in the hobby, and had been on the lookout for a home computer kit they could put on their cover since the appearance of the TV Typewriter in the fall of 1973. But the July 1974 Mark-8 cover story at rival Radio-Electronics threw a wrench in their plans: they had an 8008-based design of their own lined up, but couldn’t publish something that looked like a copy-cat machine. They needed something better, something to one-up the Mark-8. So, they turned to Ed Roberts. He had nothing concrete, but had pitched Solomon a promise that he could build a computer around the new, more powerful Intel 8080 processor. This pitch became Altair—named, according to legend, by Solomon’s daughter, after the destination of the Enterprise in the Star Trek episode “Amok Time”—and it set the hobby electronics world on fire when it appeared as the January 1975 Popular Electronics cover story.

The famous Popular Electronics Altair cover story.

Altair, it should be clear by now, was continuous with what came before: people had been dreaming of and hacking together home computers for years, and each year the process became easier and more accessible, until by 1974 any electronics hobbyist could order a kit or parts for a basic home computer for around $500. What set the Altair apart, what made it special, was the sheer amount of power it offered for the price, compared to the SCELBI-8H and Mark-8. The Altair’s value proposition poured gasoline onto smoldering embers; it was an accelerant that transformed a slowly expanding hobby community into a rapidly expanding industry.
The Altair’s surprising power derived ultimately from the nerve of MITS founder Ed Roberts. Roberts, like so many of his fellow electronics hobbyists, had developed an early passion for radio technology that was honed into a professional skill by technical training in the U.S. armed forces—the Air Force, in Roberts’ case. He founded Micro Instrumentation and Telemetry Systems (MITS) in Albuquerque with fellow Air Force officer Forrest Mims to sell electronic telemetry modules for model rockets. A crossover hobby-entrepreneur business, MITS straddled two of its founders’ hobby interests, but did not prove very profitable. A pivot in 1971 to selling low-cost kits to satiate the booming demand for pocket calculators, on the other hand, proved very successful—until it wasn’t. By 1974 the big semiconductor firms had vertically integrated and driven most of the small calculator makers out of business.

For Roberts, the growing hobby interest in home computers offered a chance to save a dying MITS, and he was willing to bet the company on that chance. Though already $300,000 in debt, he secured a loan of $65,000 from a trusting local banker in Albuquerque in September 1974. With that money, he negotiated a steep volume discount from Intel by offering to buy a large quantity of “ding-and-dent” 8080 processors with cosmetic damage. Though the 8080 listed for $360, MITS got them for $75 each. So, while Wadsworth at SCELBI (and builders assembling their own Mark-8s) were paying $120 for 8008 processors, MITS was paying roughly 60 percent of that price for a far better processor.[12]

It is hard to overstate what a substantial leap forward in capabilities the 8080 represented: it ran much faster than the 8008, integrated more capabilities into a single chip (for which the 8008 required several auxiliary chips), could support four times as much memory, and had a much more flexible 40-pin interface (versus the 18 pins on the 8008). The 8080 also referenced a program stack in external memory, while the 8008 had a strictly size-limited on-CPU stack, which limited the software that could be written for it. So large was the leap that, until 1981, essentially the entire personal and home computer industry ran on the 8080 and two similar designs: the Zilog Z80 (a processor that was software-compatible with the 8080 but ran at higher speeds), and the MOS Technology 6502 (a budget chip with roughly the same capabilities as the 8080).[13]

The release of the Altair kit at a total price of $395 instantly made the 8008-based computers irrelevant. Nat Wadsworth of SCELBI reported that he was “devastated by appearance of Altair,” and “couldn’t understand how it could sell at that price.” Not only was the price right, the Altair also looked more like a minicomputer than anything before it. To be sure, it came standard with a measly 256 bytes of memory and the same “switches and lights” interface as the ancient kits from 1971. It would take quite a lot of additional money and effort to turn it into a fully functional computer system. But it came full of promise, in a real case with an extensible card slot system for adding additional memory and input/output controllers.
It was by far the closest thing to a PDP-8 that had ever existed at a hobbyist price point—just as the Popular Electronics cover claimed: “World’s First Minicomputer Kit to Rival Commercial Models.” It made the dream of the home computer, long cherished by thousands of computer lovers, seem not merely imminent, but immanent: the digital divine made manifest. And this is why the arrival of the MITS Altair, not of the Kenbak-1 or the SCELBI-8H, is remembered as the founding event of the personal computer industry.[14]

All that said, even a tricked-out Altair was hardly useful, in an economic sense. If pocket calculators began as a tool for business people, and then became so cheap that people bought them as a toy, the personal computer began as something so expensive and incapable that only people who enjoyed them as toys would buy them. Next time, we will look at the first years of the personal computer industry: a time when the hobby computer producers briefly flourished and then wilted, mostly replaced and outcompeted by larger, more “serious” firms. But a time when the culture of the typical computer user remained very much a culture of play.

Appendix: Micral N, The First Useful Microcomputer

There is another machine sometimes cited as the first personal computer: the Micral N. Much like Nat Wadsworth, French engineer François Gernelle was smitten with the possibilities opened up by the Intel 8008 microprocessor, but could not convince his employer, Intertechnique, to use it in their products. So, he joined other Intertechnique defectors to form Réalisation d’Études Électroniques (R2E), and began pursuing some of their erstwhile employer’s clients. In December 1972, R2E signed an agreement with one of those clients, the Institut National de la Recherche Agronomique (INRA, a government agronomical research center), to deliver a process control computer for their labs at a fraction of the price of a PDP-8. Gernelle and his coworkers toiled through the winter in a basement in the Paris suburb of Châtenay-Malabry to deliver a finished system in April 1973, based on the 8008 chip and offered at a base price of 8,500 francs, about $2,000 in 1973 dollars (one fifth the going rate for a PDP-8).[15]

The Micral N was a useful computer, not a toy or a plaything. It was not marketed and sold to hobbyists, but to organizations in need of a real-time controller. That is to say, it served the same role in the lab or on the factory floor that minicomputers had served for the previous decade. It can certainly be called a microcomputer by dint of its hardware. But the Altair lineage stands out because it changed how computers were used and by whom; the microprocessor happened to make that economically possible, but it did not automatically make every machine into which it was placed a personal computer.

The Micral N looks very much like the Altair on the outside, but was marketed entirely differently [Rama / CC BY-SA 2.0 FR].

Useful personal computers would come, in time. But the demand that existed for a computer in one’s own home or office in the mid-1970s came from enthusiasts with a desire to tinker and play on a computer, not to get serious business done on one. No one had yet written and published the productivity software that would even make a serious home or office computer conceivable.
Moreover, it was still far too expensive and difficult to assemble a comprehensive office computer system (with a display, ample memory, and external mass storage for saving files) to attract people who didn’t already love working on computers for their own sake. Until these circumstances changed, which would take several years, play reigned unchallenged among home computer users. The Micral N is an interesting piece of history, but it is an instructive contrast with the story of the personal computer, not a part of it.

Coda: Steam’s Last Stand

In the year 1900, automobile sales in the United States were divided almost evenly among three types of vehicles: automakers sold about 1,000 cars powered by internal combustion engines, but over 1,600 powered by steam engines, and almost as many by batteries and electric motors. Throughout all of living memory (at least until the very recent rise of electric vehicles), the car and the combustion engine have gone hand in hand, inseparable. Yet, in 1900, this type claimed the smallest share.

For historians of technology, this is the most tantalizing fact in the history of the automobile, perhaps the most tantalizing fact in the history of the industrial age. It suggests a multiverse of possibility, a garden of forking, ghostly might-have-beens. It suggests that, perhaps, had this unstable equilibrium tipped in a different direction, many of the negative externalities of the automobile age—smog, the acceleration of global warming, suburban sprawl—might have been averted. It invites the question, why did combustion win? Many books and articles, by both amateur and professional historians, have been written to attempt to answer this question.

However, since the electric car, interesting as its history certainly is, has little to tell us about the age of steam, we will consider here a narrower question—why did steam lose? The steam car was an inflection point where steam power, for so long an engine driving technological progress forward, instead yielded the right-of-way to a brash newcomer. Steam began to look like a relic of the past, reduced to watching from the shoulder as the future rushed by. For two centuries, steam strode confidently into one new domain after another: mines, factories, steamboats, railroads, steamships, electricity. Why did it falter at the steam car, after such a promising start?

The Emergence of the Steam Car

Though Germany had given birth to experimental automobiles in the 1880s, the motor car first took off as a successful industry in France. Even Benz, the one German maker to see any success in the early 1890s, sold the majority of its cars and motor-tricycles to French buyers. This was in large part due to the excellent quality of French cross-country roads – though mostly gravel rather than asphalt, they were financed by taxes and overseen by civil engineers, and well above the typical European or American standard of the time. These roads

…made it easier for businessmen [in France] to envisage a substantial market for cars… They inspired early producers to publicize their cars by intercity demonstrations and races. And they made cars more practical for residents of rural areas and small towns.[1]

The first successful motor car business arose in Paris, in the early 1890s. Émile Levassor and René Panhard (both graduates of the École centrale des arts et manufactures, an engineering institute in Paris) met as managers at a machine shop that made woodworking and metal-working tools. They became the leading partners of the firm and took it into auto making after acquiring the license for the Daimler engine.

The 1894 Panhard & Levassor Phaeton already shows the beginning of the shift from horseless carriages with an engine under the seats to the modern car layout with a forward engine compartment. [Jörgens.mi / CC BY-SA 3.0]

Before making cars themselves, they looked for other buyers for their licensed engines, which led them to a bicycle maker near the Swiss border, Peugeot Frères Aînés, headed by Armand Peugeot.
Though bicycles seem very far removed from cars today, they made many contributions to the early growth of the auto industry. The 1880s bicycle boom (stimulated by the invention of the chain-driven “safety” bicycle) seeded expertise in the construction of high-speed road vehicles with ball bearings and tubular metal frames. Many early cars resembled bicycles with an additional wheel or two, and chain drives for powering the rear wheels remained popular throughout the first few decades of automobile development. Cycling groups also became very effective lobbyists for the construction of smooth cross-country roads on which to ride their machines, literally paving the way for the cars to come.[2]

Armand Peugeot decided to purchase Daimler engines from Panhard et Levassor and make cars himself. So, already by 1890 there were two French firms making cars with combustion engines. But French designers had not altogether neglected the possibility of running steam vehicles on ordinary roads. In fact, before ever ordering a Daimler engine, Peugeot had worked on a steam tricycle with the man who would prove to be the most persistent partisan of steam cars in France, Léon Serpollet.

A steam-powered road vehicle was not, by 1890, a novel idea. It had been proposed countless times, even before the rise of steam locomotives: James Watt himself had first developed an interest in engines, all the way back in the 1750s, after his friend John Robison suggested building a steam carriage. But those who had tried to put the idea into practice had always found the result wanting. Among the problems were the bulk and weight of the engine and all its paraphernalia (boiler, furnace, coal), the difficulty of maintaining a stoked furnace and controlling steam levels (including preventing the risk of boiler explosion), and the complexity of operating the engine. The only kinds of steam road vehicles to find any success were those that inherently required a lot of weight, bulk, and specialized training to operate—fire engines and steamrollers—and even those only appeared in the second half of the nineteenth century.[3]

Consider Serpollet’s immediate predecessor in steam carriage building, the debauched playboy Comte Albert de Dion. He commissioned two toymakers, Georges Bouton and Charles Trépardoux, to make several small steam cars in the 1880s. These coal-fueled machines took thirty minutes or more to build up a head of steam. In 1894 a larger De Dion steam tractor finished first in one of the many cross-country auto races that had begun to spring up to help carmakers promote their vehicles. But the judges disqualified de Dion’s vehicle on account of its impracticality: requiring both a driver and a stoker for its furnace, it was in a very literal sense a road locomotive. A discouraged Comte de Dion gave up the steam business, but De Dion-Bouton went on to be a successful maker of combustion automobiles and automobile engines.[4]

This De Dion-Bouton steam tractor was disqualified from an auto race in 1894 as impractical.

Coincidentally enough, Léon Serpollet and his brother Henri were, like Panhard and Levassor, makers of woodworking machines, and like Peugeot, they came from the Swiss borderlands in east-central France. Also like Panhard and Levassor, Léon studied engineering in Paris, in his case at the Conservatoire national des arts et métiers.
But by the time he reached Paris, he and his brother had already concocted the invention that would lead them to the steam car: a “flash” boiler that instantly turned water to steam by passing it through a hot metal tube. This would allow the vehicle to start more quickly (though it still took time to heat the tube before the boiler could be used) and also alleviate safety concerns about a boiler explosion.

The most important step to the (relative) success of the Serpollets’ vehicles, however, was when they replaced the traditional coal furnace with a burner for liquid, petroleum-based fuel. This went a long way towards removing the most disqualifying objections to the practicality of steam cars. Kerosene or gasoline weighed less and took up less space than an energy-equivalent amount of coal, and an operator could more easily throttle a liquid-fuel burner (by supplying it with more or less fuel) to control the level of steam.

A 1902 Gardner-Serpollet steam car.

With early investments from Peugeot and a later infusion of cash from Frank Gardner, an American with a mining fortune, the Serpollets built a business, first selling steam buses in Paris, then turning to small cars. Their steam powerplants generated more power than the combustion vehicles of the time, and Léon promoted them by setting speed records. In 1902, he surpassed seventy-five miles-per-hour along the promenade in Nice. At that time, a Gardner-Serpollet factory in eastern Paris was turning out about 100 cars per year. Though these were impressive numbers by the standards of the 1890s, they were already becoming small potatoes. In 1901, 7,600 cars were produced in France, and 14,000 in 1903; the growing market left Gardner-Serpollet behind as a niche producer. Léon Serpollet made one last pivot back to buses, then died of cancer in 1907 at age forty-eight. The French steam car did not survive him.[5]

Unlike in the U.S., steam car sales barely took off in France, and never approached parity with the total sales of combustion engine cars from the likes of Panhard et Levassor, Peugeot, and many other makes. There was no moment of balance when it appeared that the future of automotive technology was up for grabs. Why this difference? We’ll have more to say about that later, after we consider the American side of the story.

The Acme of the Steam Car

Automobile production in the United States lagged roughly five years behind France; and so it was in 1896 that the first small manufacturers began to appear. Charles and Frank Duryea (bicycle makers, again) were first off the block. Inspired by an article about Benz’ car, they built their own combustion-engine machine in 1893, and, after winning several races, they began selling vehicles commercially out of Peoria, Illinois in 1896. Several other competitors quickly followed.[6]

Steam car manufacturing came slightly later, with the Whitney Motor Wagon Company and the Stanley brothers, both in the Boston area. The Stanleys, twins named Francis and Freelan (or F.E. and F.O.), were successful manufacturers of photographic dry plates, which used a dry emulsion that could be stored indefinitely before use, unlike earlier “wet” plates. They fell into the automobile business by accident, in a similar way to many others—by successfully demonstrating a car they had constructed as a hobby, drawing attention and orders. At an exhibition at the Charles River Park Velodrome in Cambridge, F.E.
zipped around the field and up an eighty-foot ramp, demonstrating greater speed and power than any other vehicle present, including an imported combustion-engine De Dion tricycle, which could only climb the ramp halfway.[7]

The Stanley brothers mounted in their 1897 steam car.

The rights to the Stanley design, through a complex series of business details, ended up in the possession of Amzi Barber, the “Asphalt King,” who used tar from Trinidad’s Pitch Lake to pave several square miles’ worth of roads across the U.S.[8] It was Barber’s automobiles, sold under the Locomobile brand, that formed the plurality of the 1,600 steam cars sold in the U.S. in 1900: the company sold 5,000 total between 1899 and 1902, at the quite-reasonable price of $600. Locomobiles were quiet and smooth in operation, produced little smoke or odor (though they did breathe great clouds of steam), had the torque required to accelerate rapidly and climb hills, and could smoothly accelerate by simply increasing the speed of the piston, without any shifting of gears. The rattling, smoky, single-cylinder engines of their combustion-powered competitors had none of these qualities.[9]

Why, then, did the steam car market begin to collapse after 1902? Twenty-seven makes of steam car first appeared in the U.S. in 1899 or 1900, mostly concentrated (like the Locomobile) in the Northeast—New York, Pennsylvania, and (especially) Massachusetts. Of those, only twelve continued making steam cars beyond 1902, and only one—the Lane Motor Vehicle Company of Poughkeepsie, New York—lasted beyond 1905. By that year, the Madison Square Garden car show had 219 combustion models on display, as compared to only twenty electric and nine steam.[10]

Barber, the Asphalt King, was interested in cars, regardless of what made them go. As the market shifted to combustion, so did he, abandoning steam at the height of his own sales in 1902. But the Stanleys loved their steamers. Their contractual obligations to Barber being discharged in 1901, they went back into business on their own. One of the longest-lasting holdouts, Stanley sold cars well into the 1920s (even after the death of Francis in a car accident in 1918), and the name became synonymous with steam. For that reason, one might be tempted to ascribe the death of the steam car to some individual failing of the Stanleys: “Yankee Tinkerers,” they remained committed to craft manufacturing and did not adopt the mass-production “Fordist” methods of Detroit. Already wealthy from their dry plate business, they did not commit themselves fully to the automobile, allowing themselves to be distracted by other hobbies, such as building a hotel in Colorado so that people could film scary movies there.[11]

Some of the internal machinery of a late-model Stanley steamer: the boiler at top left, burner at center left, engine at top right, and engine cutaway at bottom right. [Stanley W. Ellis, Smogless Days: Adventures in Ten Stanley Steamers (Berkeley: Howell-North Books, 1971), 22]

But, as we have seen, there were dozens of steam car makers, just as there were dozens of makers of combustion cars; no idiosyncrasies of the Stanley psychology or business model can explain the entire market’s shift from one form of power train to another—if anything it was the peculiar psychology of the Stanleys that kept them making steam cars at all, rather than doing the sensible thing and shifting to combustion.
Nor did the powers that be put their finger on the scale to favor combustion engines.[12] How, then, can we explain both the precipitous rise of steam in the U.S. (as opposed to its poor showing in France) and its sudden fall?

The steam car’s defects were as obvious as its advantages. Most annoying was the requirement to build up a head of steam before you could go anywhere: this took about ten minutes for the Locomobile. Whether starting or going, the controls were complex to manage. Scientific American described the “quite simple” steps required to get a Serpollet car going:

A small quantity of alcohol is used to heat the burner, which takes about five minutes; then by the small pump a pressure is made in the oil tank and the cock opened to the burner, which lights up with a blue flame, and the boiler is heated up in two or three minutes. The conductor places the clutch in the middle position, which disconnects the motor from the vehicle and regulates the motor to the starting position, then puts his foot on the admission pedal, starting the motor with the least pressure and heating the cylinders, the oil and water feed working but slightly. When the cylinders are heated, which takes but a few strokes of the piston, the clutch is thrown on the full or wean speed and the feed-pumps placed at a maximum, continuing to feed by hand until the vehicle reaches a certain speed by the automatic feed, which is then regulated as desired.[13]

Starting a combustion car of that era also required procedures long-since streamlined away—cranking the engine to life, adjusting the carburetor choke and spark plug timing—but even at the time most writers considered steamers more challenging to operate. Part of the problem was that the boilers were intentionally small (to allow them to build steam quickly and reduce the risk of explosion), which meant lots of hands-on management to keep the steam level just right. Nor had the essential thermodynamic facts changed – internal combustion, operating over a larger temperature gradient, was more efficient than steam. The Model T could drive fifteen to twenty miles on a gallon of fuel; the Stanley could go only ten, not to mention its constant thirst for water, which added another “fueling” requirement.[14]

The rather arcane controls of a 1912 Stanley steamer. [Ellis, Smogless Days: Adventures in Ten Stanley Steamers, 26]

The steam car overcame these disadvantages to achieve its early success in the U.S. because of the delayed start of the automobile industry there. American steam car makers, starting later, skipped straight to petroleum-fueled burners, bypassing all the frustrations of dealing with a traditional coal-fueled firebox, and banishing all associations between that cumbersome appliance and the steam car.

At the same time, combustion automobile builders in the U.S. were still early in their learning curve compared to those in France. A combustion engine was a more complex and temperamental machine than a steam engine, and it took time to learn how to build them well, time that gave steam (and electric) cars a chance to find a market. The builders of combustion engines, as they learned from experience, rapidly improved their designs, while steam cars improved relatively little year over year.

Most importantly, steam cars never could get up and running as quickly as a combustion engine. In one of those ironies which history graciously provides to the historian, the very impatience that the steam age had brought forth doomed its final progeny, the steam car.
It wasn’t possible to start up a steam car and immediately drive; you always had to wait for the car to be ready. And so drivers turned to the easier, more convenient alternative, to the frustration of steam enthusiasts, who complained of “[t]his strange impatience which is the peculiar quirk of the motorist, who for some reason always has been in a hurry and always has expected everything to happen immediately.”[15] Later Stanleys offered a pilot light that could be kept burning to maintain steam, but “persuading motorists, already apprehensive about the safety of boilers, to keep a pilot light burning all night in the garage proved a hard sell.”[16] It was too late, anyway. The combustion-driven automotive industry had achieved critical mass.

The Afterlife of the Steam Car

The Ford Model T of 1908 is the most obvious signpost for the mass-market success of the combustion car. But for the moment that steam was left in the dust, we can look much earlier, to the Oldsmobile “Curved Dash,” which first appeared in 1901 and reached its peak in 1903, when 4,000 were produced, three times the total output of all steam car makers in that pivotal year of 1900. Ransom Olds, son of a blacksmith, grew up in Lansing, Michigan, and caught the automobile bug as a young man in 1887. Like many contemporaries, he built steamers at first (the easier option), but after driving a Daimler car at the 1893 Chicago World’s Fair, he got hooked on combustion. His Curved Dash (officially the Model R) still derived from the old-fashioned “horseless carriage” style of design, not yet having adopted the forward engine compartment that was already common in Europe by that time. It had a modest single-cylinder, five-horsepower engine tucked under the seats, and an equally modest top speed of twenty miles-per-hour. But it was convenient and inexpensive enough to outpace all of the steamers in sales.[17]

The Oldsmobile “Curved Dash” was celebrated in song.

The market for steam cars was reduced to driving enthusiasts, who celebrated its near-silent operation (excepting the hiss of the burner), the responsiveness of its low-end torque, and its smooth acceleration without any need for clunky gear-shifting. (There is another irony in the fact that late-twentieth century driving enthusiasts, disgusted by the laziness of automatic transmissions, would celebrate the hands-on responsiveness of manual shifters.) Steam partisans were offended by the unnecessary complexity of the combustion automobile; they liked to point out how few moving parts the steam car had.[18] To imagine the triumph of steam is to imagine a world in which the car remained an expensive hobby for this type of car enthusiast.

Several entrepreneurs tried to revive the steamer over the years, most notably the Doble brothers, who brought their steam car enterprise to Detroit in 1915, intent on competing head-to-head with combustion. They strove to make a car that was as convenient as possible to use, with a condenser to conserve water, key-start ignition, simplified controls, and a very fast-starting boiler.

But, meanwhile, car builders were steadily crossing off all of the advantages of steam within the framework of the combustion car. Steam cars, like electric cars, did not require the strenuous physical effort to get running that early, crank-started combustion engines did.
But by the second decade of the twentieth century, car makers solved this problem by putting a tiny electric car powertrain (battery and motor) inside every combustion vehicle, to bootstrap the starting of the engine. Steam cars offered a smoother, quieter ride than the early combustion rattletraps, but more precisely machined, multi-cylinder engines with anti-knock fuel canceled out this advantage (the severe downsides of lead as an anti-knock agent were not widely recognized until much later). Steam cars could accelerate smoothly without the need to shift gears, but then car makers created automatic transmissions. In the 1970s, several books advocated a return to the lower-emissions burners of steam cars for environmental reasons, but then car makers adopted the catalytic converter.[19]

It’s not that a steam car was impossible, but that it was unnecessary. Every year more and more knowledge and capital flowed into the combustion status quo, the cost of switching increased, and no sufficiently convincing reason to do so ever appeared. The failure of the steam car was not due to accident, not due to conspiracy, and certainly not due to any individual failure of the Stanleys, but due to the expansion of auto sales to people who cared more about getting somewhere than about the machine that got them there. Impatient people, born, ironically, of the steam age.

High-Pressure, Part I: The Western Steamboat

The next act of the steamboat lay in the west, on the waters of the Mississippi basin. The settler population of this vast region—Mark Twain wrote that “the area of its drainage-basin is as great as the combined areas of England, Wales, Scotland, Ireland, France, Spain, Portugal, Germany, Austria, Italy, and Turkey”—was already growing rapidly in the early 1800s, and inexpensive transport to and from its interior represented a tremendous economic opportunity.[1]

Robert Livingston scored another of his political coups in 1811, when he secured monopoly rights for operating steamboats in the New Orleans Territory. (It did not hurt his cause that he himself had negotiated the Louisiana Purchase, nor that his brother Edward was New Orleans’ most prominent lawyer.) The Fulton-Livingston partnership set up a workshop in Pittsburgh to build steamboats for the Mississippi trade. Pittsburgh’s central position at the confluence of the Monongahela and Allegheny made it a key commercial hub in the trans-Appalachian interior and a major boat-building center. Manufactures made there could be distributed up and down the rivers far more easily than those coming over the mountains from the coast, and so factories for making cloth, hats, nails, and other goods began to sprout up there as well.[2] The confluence of river-based commerce, boat-building, and workshop know-how made Pittsburgh the natural wellspring for western steamboating.

Figure 1: The Fulton-Livingston New Orleans. Note the shape of the hull, which resembles that of a typical ocean-going boat.

From Pittsburgh, the Fulton-Livingston boats could ride downstream to New Orleans without touching the ocean. The New Orleans, the first boat launched by the partners, went into regular service from New Orleans to Natchez (about 175 miles to the north) in 1812, but their designs—upscaled versions of their Hudson River boats—fared poorly in the shallow, turbulent waters of the Mississippi. They also suffered sheer bad luck: the New Orleans grounded fatally in 1814, and the aptly-named Vesuvius burnt to the waterline in 1816 and had to be rebuilt. The conquest of the Mississippi by steam power would fall to other men, and to a new technology: high-pressure steam.

Strong Steam

A typical Boulton & Watt condensing engine was designed to operate with steam below the pressure of the atmosphere (about fifteen pounds per square inch, or psi). But the possibility of creating much higher pressures by heating steam well above the boiling point had been known for well over a century. The use of so-called “strong steam” dated back at least to Denis Papin’s steam digester from the 1670s. It had even been used to do work, in pumping engines based on Thomas Savery’s design from the early 1700s, which used steam pressure to push water up a pipe. But engine-builders did not use it widely in piston engines until well into the nineteenth century.

Part of the reason was the suppressive influence of the great James Watt. Watt knew that expanding high-pressure steam could drive a piston, and laid out plans for high-pressure engines as early as 1769, in a letter to a friend:

I intend in many cases to employ the expansive force of steam to press on the piston, or whatever is used instead of one, in the same manner as the weight of the atmosphere is now employed in common fire-engines.
In some cases I intend to use both the condenser and this force of steam, so that the powers of these engines will as much exceed those pressed only by the air, as the expansive power of the steam is greater than the weight of the atmosphere. In other cases, when plenty of cold water cannot be had, I intend to work the engines by the force of steam only, and to discharge it into the air by proper outlets after it has done its office.[3]

But he continued to rely on the vacuum created by his condenser, and never built an engine worked “by the force of steam only.” He went out of his way to ensure that no one else did either, deprecating the use of strong steam at every opportunity. There was one obvious reason why: high-pressure steam was dangerous. The problem was not the working machinery of the engine but the boiler, which was apt to explode, spewing shrapnel and superheated steam that could kill anyone nearby. Papin had added a safety valve to his digester for exactly this reason. Savery steam pumps were also notorious for their explosive tendencies. Some have imputed a baser motive to Watt’s intransigence: a desire to protect his own business from high-pressure competition. In truth, though, high-pressure boilers did remain dangerous, and would kill many people throughout the nineteenth century.

Unfortunately, the best material for building a strong boiler was also the most difficult to actually construct one from. By the beginning of the nineteenth century copper, lead, wrought iron, and cast iron had all been tried as boiler materials, in various shapes and combinations. Copper and lead were soft; cast iron was hard, but brittle. Wrought iron clearly stood out as the toughest and most resilient option, but it could only be made in ingots or bars, which the prospective boilermaker would then have to flatten and form into small plates, many of which would have to be joined to make a complete boiler.

Advances in two fields in the decades around 1800 resolved the difficulties of wrought iron. The first was metallurgical. In the late eighteenth century, Henry Cort invented the “puddling” process of melting and stirring iron to oxidize out the carbon, producing larger quantities of wrought iron that could be rolled out into plates of up to about five feet long and a foot wide.[4] These larger plates still had to be riveted together, a tedious and error-prone process that produced leaky joints. Everything from rope fibers to oatmeal was tried as a caulking material.

To make reliable, steam-tight joints required advances in machine tooling. This was a cutting-edge field at the time (pun intended). For example, for most of history craftsmen cut or filed screws by hand. The resulting lack of consistency meant that many of the uses of screws that we take for granted were unknown: one could not cut 100 nuts and 100 bolts, for example, and then expect to thread any pair of them together. Only in the last quarter of the eighteenth century did inventors craft sufficiently precise screw-cutting lathes to make it possible to repeatedly produce screws with the same length and pitch. Careful use of tooling similarly made it possible to bore holes of consistent sizes in wrought iron plates, and then manufacture consistently-sized rivets to fit into them, without the need to hand-fit rivets to holes.[5] One could name a few outstanding early contributors to the improvement of machine tooling in the first decades of the nineteenth century: Arthur Woolf in Cornwall, or John Hall at the U.S.
Harper’s Ferry Armory. But the steady development of improvements in boilers and other steam engine parts also involved the collective action of thousands of handcraft workers. Accustomed to building liquor stills, clocks, or scientific instruments, they gradually developed the techniques and rules of thumb needed for precision metalworking for large machines.[6]

These changes did not impress Watt, and he stood by his anti-high-pressure position until his death in 1819. Two men would lead the way in rebelling against his strictures. The first appeared in the United States, far from Watt’s zone of influence, and paved the way for the conquest of the Western waters.

Oliver Evans

Oliver Evans was born in Delaware in 1755. He first honed his mechanical skills as an apprentice wheelwright. Around 1783, he began constructing a flour mill with his brothers on Red Clay Creek in northern Delaware. Hezekiah Niles, a boy of six, lived nearby. Niles would become the editor of the most famous magazine in America, from which post he later had occasion to recount that “[m]y earliest recollections pointed him out to me as a person, in the language of the day, that ‘would never be worth any thing, because he was always spending his time on some contrivance or another…’”[7]

Two great “contrivances” dominated Evans’ adult life. The challenges of the mill work at Red Clay Creek led to his first great idea: an automated flour mill. He eliminated most of the human labor from the mill by linking together the grain-processing steps with a series of water-powered machines (the most famous and delightfully named being the “hopper boy”). Though fascinating in its own right, for the purposes of our story the automated mill only matters in so far as it generated the wealth which allowed him to invest in his second great idea: an engine driven by high-pressure steam.

Figure 2: Evans’ automated flour mill.

In 1795, Evans published an account of his automatic mill entitled The Young Mill-Wright and Miller’s Guide. Something of his personality can be gleaned from the title of his 1805 sequel on the steam engine: The Abortion of the Young Steam Engineer’s Guide. A bill to extend the patent on his automatic flour mill failed to pass Congress in 1805, and so he published his Abortion as a dramatic swoon, a loud declaration that, in response to this rebuff, he would be taking his ball and going home:

His [i.e., Evans’] plans have thus proved abortive, all his fair prospects are blasted, and he must suppress a strong propensity for making new and useful inventions and improvements; although, as he believes, they might soon have been worth the labour of one hundred thousand men.[8]

Of course, despite these dour mutterings, he failed entirely to suppress his “strong propensity”; in fact, he was in the very midst of launching new steam engine ventures at this time. Like so many other early steam inventors, Evans first became interested in steam power through the dream of a self-propelled carriage.
The first tangible evidence that we have of his interest in steam power comes from patents he filed in 1787, which included mention of a “steam-carriage, so constructed to move by the power of steam and the pressure of the atmosphere, for the purpose of conveying burdens without the aid of animal force.” The mention of “the pressure of the atmosphere” is interesting—he may have still been thinking of a low-pressure Watt-style engine at this point.[9]

By 1802, however, Evans had a true high-pressure engine of about five horsepower operating at his workshop at Ninth and Market in Philadelphia. He had established himself in that city in 1792, the better to promote his milling inventions and millwright services. He attracted crowds to his shop with his demonstration of the engine at work: driving a screw mill to pulverize plaster, or cutting slabs of marble with a saw. Bands of iron held reinforcing wooden slats against the outside of the boiler, like the rim of a cartwheel or the hoops of a barrel. This curious hallmark testified to Evans’ background as a millwright and wheelwright.[10]

The boiler, of course, had to be as strong as possible to contain the superheated steam, and Evans’ later designs made improvements in this area. Rather than the “wagon” boiler favored by Watt (shaped like a Conestoga wagon or a stereotypical construction worker’s lunchbox), he used a cylinder. A spherical boiler being infeasible to make or use, this shape distributed the force of the steam pressure as evenly as practicable over the surface. In fact, Evans’ boiler consisted of two cylinders in an elongated donut shape, because rather than placing the furnace below the boiler, he placed it inside, to maximize the surface area of water exposed to the hot air. By the time of the Steam Engineer’s Guide, he no longer used copper braced with wood; he now recommended the “best” (i.e. wrought) iron “rolled in large sheets and strongly riveted together. …As cast iron is liable to crack with the heat, it is not to be trusted immediately in contact with the fire.”[11]

Figure 3: Evans’ 1812 design, which he called the Columbian Engine to honor the young United States on the outbreak of the War of 1812. Note the flue carrying heat through the center of the boiler, the riveted wrought iron plates of the boiler, and the dainty proportions of the cylinder, in comparison to that of a Newcomen or Watt engine. Pictured in the corner is the Orukter Amphibolos.

Evans was convinced of the superiority of his high-pressure design because of a rule of thumb that he had gleaned from the article “Steam” in the American edition of the Encyclopedia Britannica: “…whatever the present temperature, an increase of 30 degrees doubles the elasticity and the bulk of water vapor.”[12] From this Evans concluded that heating steam to twice the boiling point (from 210 degrees to 420) would increase its elastic force by 128 times (since a 210-degree increase in temperature would make seven doublings). This massive increase in power would require only twice the fuel (to double the heat of the steam). None of this was correct, but it would not be the first or last time that faulty science would produce useful technology.[13]

Nonetheless, the high-pressure engine did have very real advantages. Because the power generated by an engine was proportional to the area of the piston times the pressure exerted on that piston, for any given horsepower, a high-pressure engine could be made much smaller than its low-pressure equivalent.
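Spelled out as arithmetic (a modern restatement of the two claims above, not anything Evans himself wrote, and treating piston speed as fixed, as the period proportionality implicitly did):

$$\frac{420^{\circ} - 210^{\circ}}{30^{\circ}} = 7 \ \text{doublings} \quad\Longrightarrow\quad 2^{7} = 128 \times \ \text{the elastic force, by Evans' faulty rule}$$

$$\text{power} \;\propto\; \text{pressure} \times \text{piston area} \quad\Longrightarrow\quad \text{piston area} \;\propto\; \frac{\text{power}}{\text{pressure}}, \ \text{so a given horsepower needs a smaller cylinder at higher pressure.}$$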
A high-pressure engine also did not require a condenser: it could vent the spent steam directly into the atmosphere. These factors made Evans’ engines smaller, lighter, simpler, and less expensive to build. A non-condensing high-pressure engine of twenty-four horsepower weighed half a ton and had a cylinder nine inches across. A traditional Boulton & Watt style engine of the same power had a cylinder three times as wide and weighed four times as much overall.[14]

Such advantages in size and weight would count doubly for an engine used in a vehicle, i.e. an engine that had to haul itself around. In 1804 Evans sold an engine that was intended to drive a New Orleans steamboat, but it ended up in a sawmill instead. This event could serve as a metaphor for his relationship to steam transportation. He declared in his Steam Engineer’s Guide that:

The navigation of the river Mississippi, by steam engines, on the principles here laid down, has for many years been a favourite object with the author and among the fondest wishes of his heart. He has used many endeavours to produce a conviction of its practicability, and never had a doubt of the sufficiency of the power.[15]

But steam navigation never got much more than his fondest wishes. Unlike a Fitch or a Rumsey, the desire to make a steamboat did not dominate his dreams and waking hours alike. By 1805, he was a well-established man of middle years. If he had ever possessed the Tookish spirit required for riverboat adventures, he had since lost it. He had already given up on the idea of a steam carriage, after failing to sell the Lancaster Turnpike Company on the idea in 1801. His most grandiosely named project, the Orukter Amphibolos, may briefly have run on wheels en route to serve as a steam dredge in the Philadelphia harbor. If it functioned at all, though, it was by no means a practical vehicle, and it had no sequel. Evans’ attention had shifted to industrial power, where the clearest financial opportunity lay—an opportunity that could be seized without leaving Philadelphia.

Despite Evans’ calculations (erroneous, as we have said), a non-condensing high-pressure engine was somewhat less fuel-efficient than an equivalent Watt engine, not more. But because of its size and simplicity, it could be built at half the cost, and transported more cheaply, too. In time, therefore, the Evans-style engine became very popular as a mill or factory engine in the capital- and transportation-poor (but fuel-rich) trans-Appalachian United States.[16] In 1806, Evans began construction on his “Mars Works” in Philadelphia, to serve the market for engines and other equipment. Evans engines sprouted up at sawmills, flour mills, paper factories, and other industrial enterprises across the West. Then, in 1811, he organized the Pittsburgh Steam Engine Company, operated by his twenty-three-year-old son George, to reduce transportation costs for engines to be erected west of the Alleghenies.[17] It was around that nexus of Pittsburgh that Evans’ inventions would find the people with the passion to put them to work, at last, on the rivers.

The Rise of the Western Steamboat

The mature Mississippi paddle steamer differed from its Eastern antecedents in two main respects. First, in its overall shape and layout: a roughly rectangular hull with a shallow draft, layer-cake decks, and machinery above the water, not under it. This design was better adapted to an environment where snags and shallows presented a much greater hazard than waves and high winds.
Second, in the use of a high-pressure engine, or engines, with a cylinder mounted horizontally along the deck. Many historical accounts attribute both of these essential developments to a keelboatman named Henry Miller Shreve. Economic historian Louis Hunter effectively demolished this legend in the 1940s, but more recent writers (for example Shreve’s 1984 biographer, Edith McCall) have continued to perpetuate it. In fact, no one can say with certainty where most of these features came from, because no one bothered to document their introduction. As Hunter wrote:

From the appearance of the first crude steam vessels on the western waters to the emergence of the fully evolved river steamboat a generation later, we know astonishingly little of the actual course of technological events and we can follow what took place only in its broad outlines. The development of the western steamboat proceeded largely outside the framework of the patent system and in a haze of anonymity.[18]

Some documents came to light in the 1990s, however, that have burned away some of the “haze” with respect to the introduction of high-pressure engines.[19] The papers of Daniel French reveal that the key events happened in a now-obscure place called Brownsville (originally known as Redstone), about forty miles up the Monongahela from that vital center of western commerce, Pittsburgh. Brownsville was the point where anyone heading west on the main trail over the Alleghenies—which later became part of the National Road—would first reach navigable waters in the Mississippi basin.

Henry Shreve grew up not far from this spot. Born in 1785 to a father who had served as a Colonel in the Revolutionary War, he grew up on a farm near Brownsville on land leased from Washington: one of the general’s many western land-development schemes.[20] Henry fell in love with the river life, and by his early twenties had established himself with his own keelboat operating out of Pittsburgh. He made his early fortune off the fur trade boom in St. Louis, which took off after Lewis and Clark returned with reports of widespread beaver activity on the Missouri River.[21]

In the fall of 1812, a newcomer named Daniel French arrived in Shreve’s neighborhood—a newcomer who already had experience building steam watercraft, powered by engines based on the designs of Oliver Evans. French was born in Connecticut in 1770, and started planning to build steamboats in his early 20s, perhaps inspired by the work of Samuel Morey, who operated upstream of him on the Connecticut River. But, discouraged from his plans by the local authorities, French turned his inventive energies elsewhere for a time. He met and worked with Evans in Washington, D.C., to lobby Congress to extend the length of patent grants, but did not return to steamboats until Fulton’s 1807 triumph re-energized him. At this point he adopted Evans’ high-pressure engine idea, but added his own innovation, an oscillating cylinder that pivoted on trunnions as the engine worked. This allowed the piston shaft to be attached to the stern wheel with a simple (and light) crank, without any flywheel or gearing. The small size of the high-pressure cylinder made it feasible to put the cylinder in motion. In 1810, a steam ferry he designed, for a route from Jersey City to Manhattan, successfully crossed and recrossed the North (Hudson) River at about six miles per hour.
Nonetheless, Fulton, who still held a New York state monopoly, got the contract from the ferry operators.[22] French moved to Philadelphia and tried again, constructing the steam ferry Rebecca to carry passengers across the Delaware. She evidently did not produce great profits, because a frustrated French moved west again in the fall of 1812, to establish a steam-engine-building business at Brownsville.[23] His experience with building high-pressure steamboats—simple, relatively low-cost, and powerful—had arrived at the place that would benefit most from those advantages, a place, moreover, where the Fulton-Livingston interests held no legal monopoly.

News about the lucrative profits of the New Orleans on the Natchez run had begun to trickle back up the rivers. This was sufficient to convince the Brownsville notables—Shreve among them—to put up $11,000 to form the Monongahela and Ohio Steam Boat Company in 1813, with French as their engineer. French had their first boat, Enterprise, ready by the spring of 1814. Her exact characteristics are not documented, but based on the fragmentary evidence, she seems in effect to have been a motorized keelboat: 60-80’ long, about 30 tons, and equipped with a twenty-horsepower engine. The power train matched that of French’s 1810 steam ferry, trunnions and all.[24]

The Enterprise spent the summer trading along the Ohio between Pittsburgh and Louisville. Then, in December, she headed south with a load of supplies to aid in the defense of New Orleans. For this important voyage into waters mostly unknown to the Brownsville circle, they called on the experienced keelboatman, Henry Shreve. Andrew Jackson had declared martial law, and kept Shreve and the Enterprise on military duty in New Orleans. With Jackson’s aid, Shreve dodged the legal snares laid for him by the Fulton-Livingston group to protect their New Orleans monopoly. Then in May, after the armistice, he brought the Enterprise on a 2,000-mile ascent back to Brownsville, the first steamboat ever to make such a journey.

Shreve became an instant celebrity. He had contributed to a stunning defeat for the British at New Orleans and carried out an unprecedented voyage. Moreover, he had confounded the monopolists: their attempt to assert exclusive rights over the commons of the river was deeply unpopular west of the Appalachians. Shreve capitalized on his new-found fame to raise money for his own steamboat company in Wheeling, Virginia. The Ohio at Wheeling ran much deeper than the Monongahela at Brownsville, and Shreve would put this depth to use: he had ambitions to put a French engine into a far larger boat than the Enterprise.

Spurring French to scale up his design was probably Shreve’s largest contribution to the evolution of the western steamboat. French dared not try to repeat his oscillating cylinder trick on the larger cylinder that would drive Shreve’s 100-horsepower, 400-ton two-decker. Instead, he fixed the cylinder horizontally to the hull, and then attached the piston rod to a connecting rod, or “pitman,” that drove the crankshaft of the stern paddle wheel. He thus transferred the oscillating motion from the piston to the pitman, while keeping the overall design simple and relatively low cost.[25] Shreve called his steamer Washington, after his father’s (and his own) hero. Her maiden voyage in 1817, however, was far from heroic.
Evans would have assured French that the high-pressure engine carried little risk: as he wrote in the Steam Engineer’s Guide, “we know how to construct [boilers] with a proportionate strength, to enable us to work with perfect safety.”[26] Yet on her first trip down the Ohio, with twenty-one passengers aboard, the Washington’s boiler exploded, killing seven passengers and three crew. The blast threw Shreve himself into the river, but he did not suffer serious harm.[27] Ironically, the only steamboat built by the Evans family, the Constitution (née Oliver Evans), suffered a similar fate in the same year, exploding and killing eleven on board.

Despite Evans’ confidence in their safety, boiler accidents continued to bedevil steamboats for decades. Though the total number killed was not enormous—about 1,500 dead across all Western rivers up to 1848—each event provided an exceptionally grisly spectacle. Consider this lurid account of the explosion of the Constitution:

One man had been completely submerged in the boiling liquid which inundated the cabin, and in his removal to the deck, the skin had separated from the entire surface of his body. The unfortunate wretch was literally boiled alive, yet although his flesh parted from his bones, and his agonies were most intense, he survived and retained all his consciousness for several hours. Another passenger was found lying aft of the wheel with an arm and a leg blown off, and as no surgical aid could be rendered him, death from loss of blood soon ended his sufferings. Miss C. Butler, of Massachusetts, was so badly scalded, that, after lingering in unspeakable agony for three hours, death came to her relief.[28]

In response to continued public outcry for an end to such horrors, Congress eventually stepped in, passing acts to improve steamboat safety in 1838 and 1852.

Meanwhile, Shreve was not deterred by the setback. The Washington itself did not suffer grievous damage, so he corrected a fault in the safety valves and tried again. Passengers were understandably reluctant for an encore performance, but after the Washington made national news in 1817 with a freight passage upriver from New Orleans in just twenty-five days, the public quickly forgot and forgave. A few days later, a judge in New Orleans refused to consider a suit by the Fulton-Livingston interests against Shreve, effectively nullifying their monopoly.[29] Now all comers knew that steamboats could ply the Mississippi successfully, and without risk of any legal action. The age of the western steamboat opened in earnest.

By 1820, sixty-nine steamboats could be found on western rivers, and 187 a decade after that.[30] Builders took a variety of approaches to powering these boats: low-pressure engines, engines with vertical cylinders, engines with rocking beams or fly wheels to drive the paddles.
Not until the 1830s did a dominant pattern take hold, but when it did, it was that of the Evans/French/Shreve lineage, as found on the Washington: a high-pressure engine with a horizontal cylinder driving the wheel through an oscillating connecting rod.[31]

Figure 4: A Tennessee river steamboat from the 1860s. The distinctive features include a flat-bottomed hull with very little freeboard, a superstructure to hold passengers and crew, and twin smokestacks. The western steamboat had achieved this basic form by the 1830s and maintained it into the twentieth century.

The Legacy of the Western Steamboat

The Western steamboat was a product of environmental factors that favored the adoption of a shallow-drafted boat with a relatively inefficient but simple and powerful engine: fast, shallow rivers; abundant wood for fuel along the shores of those rivers; and the geographic configuration of the United States after the Louisiana Purchase, with a high ridge of mountains separating the coast from a massive navigable inland watershed. But, Escher-like, the steamboat then looped back around to reshape the environment from which it had emerged. Just as steam-powered factories had, steam transport flattened out the cycles of nature, bulldozing the hills and valleys of time and space. Before the Washington’s journey, the shallow grade that distinguished upstream from downstream dominated the life of any traveler or trader on the Mississippi. Now goods and people could move easily upriver, in defiance of the dictates of gravity.[32] By the 1840s, steamboats were navigating well inland on other rivers of the West as well: up the Tombigbee, for example, over 200 miles inland to Columbus, Mississippi.[33]

What steamboats alone could not do to turn the western waters into turnpike roads, Shreve and others would impose on them through brute force. Steamboats frequently sank or took major damage from snags or “sawyers”: partially submerged tree limbs or trunks that obstructed the waterways. In some places, vast masses of driftwood choked the entire river. Beyond Natchitoches, the Red River was obstructed for miles by an astonishing tangle of such logs known as the Great Raft.[34]

Figure 5: A portrait of Shreve of unknown date, likely the 1840s. The scene outside the window reveals one of his snagboats, a frequently used device in nineteenth century portraits of inventors.

Not only commerce was at stake in clearing the waterways of such obstructions; steamboats would be vital to any future war in the West. As early as 1814, Andrew Jackson had put Shreve’s Enterprise to good use, ferrying supplies and troops around the Mississippi delta region.[35] With the encouragement of the Monroe administration, therefore, Congress stepped in with a bill in 1824 to fund the Army’s Corps of Engineers to improve the western rivers.
Shreve was named superintendent of this effort, and secured federal funds to build snagboats such as the Heliopolis, twin-hulled behemoths designed to drive a snag between its hulls and then winch it up onto the middle deck and saw it down to size. Heliopolis and its sister ships successfully cleared large stretches of the Ohio and Mississippi.[36] In 1833, Shreve embarked on the last great venture of his life: an assault on the Great Raft itself. It took six years and a flotilla of rafts, keelboats and steamboats to complete the job, including a new snagboat, Eradicator, built specially for the task.[37]

The clearing of waterways, technical advancements in steamboat design, and other improvements (such as the establishment of fuel depots, so that time was not wasted stopping to gather wood) combined to drive travel times along the rivers down rapidly. In 1819, the James Ross completed the New Orleans to Louisville passage in sixteen-and-a-half days. In 1824 the President covered the same distance in ten-and-a-half days, and in 1833 the Tuscorora clocked a run of seven days, six hours. These ever-decreasing record times translated directly into ever-decreasing shipping rates. Early steamboats charged upstream rates equivalent to those levied by their keelboat competitors: about five dollars per hundred pounds carried from New Orleans to Louisville. By the early 1830s this had dropped to an average of about sixty cents per 100 pounds, and by the 1840s as low as fifteen cents.[38]

By decreasing the cost of river trade, the steamboat cemented the economic preeminence of New Orleans. Cotton, sugar, and other agricultural goods (much of it produced by slave labor) flowed downriver to the port, then out to the wider world; manufactured goods and luxuries like coffee arrived from the ocean trade and were carried upriver; and human traffic, bought and sold at the massive New Orleans slave market, flowed in both directions.[39] In 1820 a steamboat arrived in New Orleans about every other day. By 1840 the city averaged over four arrivals a day; by 1850, nearly eight.[40] The population of the city burgeoned to over 100,000 by 1840, making it the third-largest in the country. Chicago, its big-shouldered days still ahead of it, remained a frontier outpost by comparison, with only 5,000 residents.

Figure 6: A Currier & Ives lithograph of the New Orleans levee. This represents a scene from the late nineteenth century, well past the prime of New Orleans’ economic dominance, but still shows a port bustling with steamboats.

But both New Orleans and the steamboat soon lost their dominance over the western economy. As Mark Twain wrote:

Mississippi steamboating was born about 1812; at the end of thirty years, it had grown to mighty proportions; and in less than thirty more, it was dead! A strangely short life for so majestic a creature.[41]

Several forces connived in the murder of the Mississippi steamboat, but a close cousin lurked among the conspirators: another form of transportation enabled by the harnessing of high-pressure steam. The story of the locomotive takes us back to Britain, and the dawn of the nineteenth century.

The Computer as a Communication Device

Over the first half of the 1970s, the ecology of computer networking diversified from its original ARPANET ancestry along several dimensions. ARPANET users discovered a new application, electronic mail, which became the dominant activity on the network. Entrepreneurs spun off their own ARPANET variants to serve commercial customers. And researchers from Hawaii to l’Hexagone developed new types of network to serve needs or rectify problems not addressed by ARPANET. Almost everyone involved in this process abandoned the ARPANET’s original stated goal of allowing computing hardware and software to be shared among a diverse range of research sites, each with its own specialized resources. Computer networks became primarily a means for people to connect to one another, or to remote systems that acted as sources or sinks for human-readable information, i.e. information databases and printers.

This was a possibility foreseen by Licklider and Robert Taylor, though not what they had intended when they launched their first network experiments. Their 1968 article, “The Computer as a Communication Device,” lacks the verve and timeless quality of visionary landmarks in the history of computing such as Vannevar Bush’s “As We May Think” or Turing’s “Computing Machinery and Intelligence.” Nonetheless, it provides a rather prescient glimpse of a social fabric woven together by computer systems. Licklider and Taylor described a not-too-distant future in which:1

You will not send a letter or a telegram; you will simply identify the people whose files should be linked to yours and the parts to which they should be linked – and perhaps specify a coefficient of urgency. You will seldom make a telephone call; you will ask the network to link your consoles together. …Available within the network will be functions and services to which you subscribe on a regular basis and others that you call for when you need them. In the former group will be investment guidance, tax counseling, selective dissemination of information in your field of specialization, announcement of cultural, sport, and entertainment events that fit your interests, etc.

The first and most important component of this computer-mediated future – electronic mail – spread like a virus across ARPANET in the 1970s, on its way to taking over the world.

Email

To understand how electronic mail developed on ARPANET, you need to first understand an important change that overtook the network’s computer systems in the early 1970s. When ARPANET was first conceived in the mid-1960s, there was almost no commonality among the hardware and operating software running at each ARPA site. Many sites centered on custom, one-off research systems, such as Multics at MIT, the TX-2 at Lincoln Labs, and the ILLIAC IV, under construction at the University of Illinois. By 1973, on the other hand, the landscape of computer systems connected to the network had acquired a great deal of uniformity, thanks to the wild success of Digital Equipment Corporation (DEC) in penetrating the academic computing market.2 DEC designed the PDP-10, released in 1968, to provide a rock-solid time-sharing experience for a small organization, with an array of tools and programming languages built in to aid in customization. This was exactly what academic computing centers and research labs were looking for at the time.

Look at all the PDPs!
BBN, the company responsible for overseeing the ARPANET, then made the package even more attractive by creating the Tenex operating system, which added paged virtual memory to the PDP-10. This greatly simplified the management and use of the system, by making it less important to exactly match the set of running programs to the available memory space. BBN supplied the Tenex software free of charge to other ARPA sites, and it soon became the dominant operating system on the network.

But what does all of this have to do with email? Electronic messaging was already familiar to users of time-sharing systems, most of which offered some kind of mailbox program by the late 1960s. They provided a form of digital inter-office mail; their reach extended only to other users of the same computer system. The first person to take advantage of the network to transfer mail from one machine to another was Ray Tomlinson, a BBN engineer and one of the authors of the Tenex software. He had already written a SNDMSG program for sending mail to other users on a single Tenex system, and a CPYNET program for sending files across the network. It required only a leap of imagination for him to see that he could combine the two to create a networked mail program. Previous mail programs had only required a user name to indicate the recipient, so Tomlinson came up with the idea of combining that local user name and the (local or remote) host name with an @ symbol3, to create an email address that was unique across the entire network.

Ray Tomlinson in later years, with his signature “at” sign

Tomlinson began testing his new program locally in 1971, and in 1972 his networked version of SNDMSG was bundled into the Tenex release, allowing Tenex mail to break the bonds of a single site and spread across the network. The plurality of machines running Tenex made Tomlinson’s hybrid program available instantly to a large proportion of ARPANET users, and it became an immediate success. It did not take long for ARPA’s leaders to integrate email into the core of their working life. Stephen Lukasik, director of ARPA, was an early adopter, as was Larry Roberts, still head of the agency’s computer science office. The habit inevitably spread to their subordinates, and soon email became a basic fact of life of the culture of ARPANET.

Tomlinson’s mail software spawned a variety of imitations and elaborations from other users looking to improve on its rudimentary functionality. Most of the early innovation focused on the defects of the mail-reading program. As email spread beyond a single computer, the volume of mail received by heavy users scaled with the size of the network, and the traditional approach of treating the mailbox as a raw text file was no longer effective. Larry Roberts himself, unable to deal effectively with the deluge of incoming messages, wrote his own software, called RD, to manage his inbox. By the mid-1970s, however, the most popular program by far was MSG, written by John Vittal of USC. We take for granted the ability to press a single button to fill out the title and recipient of an outgoing message based on an incoming one. But it was Vittal’s MSG that first provided this killer “answer” feature in 1975; and it, too, was a Tenex program. The diversity of efforts led to a need for standards. This marked the first, but far from the last, time that the computer networking community would have to develop ex post facto standards.
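Tomlinson’s convention is essentially the one that the standards efforts described next had to codify, and the one still in use today. A minimal modern sketch, not Tomlinson’s code, of how such an address decomposes (the address shown is a made-up example):

```python
# Split a network mail address of the form user@host into its two parts,
# following the convention Tomlinson introduced: the text before the "@"
# names the mailbox, the text after it names the machine that holds it.
def split_address(address: str) -> tuple[str, str]:
    user, _, host = address.partition("@")
    return user, host

print(split_address("jsmith@host-a"))   # hypothetical address -> ('jsmith', 'host-a')
```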
Unlike the basic protocols for ARPANET, a variety of email practices already existed in the wild prior to any standard setting. The inevitable result was controversy and political struggle, centering on the main email standard documents, RFC 680 and 720. In particular, non-Tenex users expressed a certain prickly resentment about the Tenex-centric assumptions built into the proposals. The conflict never grew terribly hot – everyone on ARPANET in the 1970s was still part of the same, relatively small, academic community, and the differences to be reconciled were not large. But it provided a taste of larger struggles to come.

The sudden success of email represented the most important development of the 1970s in the application layer of the network, the level most abstracted from the physical details of the network’s layout. At the same time, however, others had set out to redefine the foundational “link” layer, where bits flowed from machine to machine.

ALOHA

In 1968, Norman Abramson arrived at the University of Hawaii from California to serve a combined appointment as electrical engineering and computer science professor. The University he joined consisted of a main campus in Oahu as well as a secondary Hilo campus, and several other community colleges and research sites spread across Oahu, Kauai, Maui, and Hawaii. In between lay hundreds of miles of water and mountainous terrain. A brawny IBM 360/65 powered computer operations at the main campus, but ordering up an AT&T dedicated line to link a terminal to it from one of the community colleges was not so simple a matter as on the mainland.

Abramson was an expert in radar systems and information theory who had done a stint as an engineer for Hughes Aircraft in Los Angeles. This new environment, with all the physical challenges it presented to wireline communications, seems to have inspired Abramson to a new idea – what if radio were actually a better way of connecting computers than the phone system, which after all was designed with the needs of voice, not data, in mind? Abramson secured funding from Bob Taylor at ARPA to test this idea, with a system he called ALOHAnet. In its initial incarnation, it was not a computer network at all, but rather a medium for connecting remote terminals to a single time-sharing system, designed for the IBM machine at the Oahu campus. Like ARPANET, it had a dedicated minicomputer for processing packets sent and received by the 360/65 – Menehune, the Hawaiian equivalent of the IMP. ALOHAnet, however, did away with all the intermediate point-to-point routing used by ARPANET to get packets from one place to another. Instead any terminal wishing to send a message simply broadcast it into the ether on the allotted transmission frequency.

ALOHAnet in its full state of development later in the 1970s, with multiple computers

The traditional way for a radio engineer to handle a shared transmission band like this would have been to carve it up into time- or frequency-based slots, and assign each terminal to its own slot. But to handle hundreds of terminals in such a scheme would mean limiting each to a small fraction of the available bandwidth, even though only a few might be in active use at any given moment. Instead, Abramson decided to do nothing to prevent more than one terminal from sending at the same time. If two or more messages overlapped they would become garbled, but the central computer would detect this via error-correcting codes, and would not acknowledge those packets.
Failing to receive their acknowledgement, the sender(s) would try again after some random interval. Abramson calculated that this simple protocol could sustain up to a few hundred simultaneously active terminals, whose numerous collisions would still leave about 15% of the usable bandwidth. Beyond that, though, his calculations showed that the whole thing would collapse into a chaos of noise.

The Office Of The Future

Abramson’s “packet broadcasting” concept did not make a huge splash, at first. But it found new life a few years later, back on the mainland. The context was Xerox’s new Palo Alto Research Center (PARC), opened in 1970 just across from Stanford University, in a region recently dubbed “Silicon Valley.” Some of Xerox’s core xerography patents stood on the verge of expiration, and the company risked being trapped by its own success, unable or unwilling to adapt to the rise of computing and integrated circuits. Jack Goldman, head of research for Xerox, had convinced the bigwigs back East that a new lab – distanced from the influence of HQ, nestled in an attractive climate, and with premium salaries on offer – would attract the talent needed to keep Xerox’s edge, by designing the information architecture of the future.

PARC certainly succeeded in attracting top computer science talent, due not only to the environment and the generous pay, but also the presence of Robert Taylor, who had set the ARPANET into motion as head of ARPA’s Information Processing Techniques Office in 1966. Robert Metcalfe, a prickly and ambitious young engineer and computer scientist from Brooklyn, was one of many wooed to PARC via an ARPA connection. He joined the lab in June 1972 after working part-time for ARPA as a Harvard graduate student, building the interface to connect MIT to the network. Even after joining PARC, he continued to work as an ARPANET ‘facilitator’, traveling around the country to help new sites get started on the network, and on the preparations for ARPA’s coming-out party at the 1972 International Conference on Computer Communications.

Among the projects percolating at PARC when Metcalfe arrived was a plan by Taylor to link dozens, or even hundreds, of small computers via a local network. Year after year, computers continued to decrease in price and size, as if bending to the indomitable will of Gordon Moore. The forward-looking engineers at PARC foresaw a not-far-distant future when every office worker would have his own computer. To that end, they designed and built a personal computer called Alto, a copy of which would be supplied to every researcher in the lab. Taylor, who had only become more convinced of the value of networking over the previous half-decade, also wanted these computers to be interconnected.

The Alto. The computer per se was housed in the cabinet at bottom, about the size of a mini-fridge.

On arriving at PARC, Metcalfe took over the task of connecting up the lab’s PDP-10 clone to ARPANET, and quickly acquired a reputation as the “networking guy”. Therefore when Taylor asked for an Alto network, his peers turned to Metcalfe. Much like the computers on ARPANET, the Altos at PARC didn’t have much to say to one another. The compelling application for the network, once again, was in enabling human communication – in this case in the form of words and images printed by laser. The core idea behind the laser printer did not originate at PARC, but back East, at the original Xerox research lab in Webster, New York.
There a physicist named Gary Starkweather proved that the coherent beam of a laser could be used to deactivate the electrical charge of a xerographic drum, just like the diffuse light used in photocopying up to that point. Properly modulated, the beam could paint an image of arbitrary detail onto the drum, and thus onto paper (since only the uncharged areas of the drum picked up toner). Controlled by a computer, such a machine could produce any combination of images and text that a person might conceive, rather than merely reproducing existing documents like the photocopier. Starkweather received no support for these wild ideas from his colleagues or management in Webster, however, so he got himself transferred to PARC in 1971, where he found a far more receptive audience.

The laser printer’s ability to render arbitrary images dot-by-dot provided the perfect mate for the Alto workstation, with its bit-mapped monochrome graphics. With a laser printer, the half-million pixels on a user’s display could be directly rendered onto paper with perfect fidelity.

The bit-mapped graphics experience on the Alto. Nothing like this had been seen on a computer display before.

Within about a year Starkweather, with the help of several other PARC engineers, had overcome the main technical challenges and built a working prototype of a laser printer, based on the chassis of the workhorse Xerox 7000 printer. It produced pages at the same rate – one per second – at 500 dots per linear inch. A character generator attached to the printer crafted text from pre-defined fonts. Free-form imagery (other than what could be generated with custom fonts) was not yet supported, so the network did not need to carry the full 25 million bits-per-second or so required to feed the laser; nonetheless, a tremendous amount of bandwidth would be needed to keep the printer busy at a time when the 50,000 bits-per-second ARPANET represented the state-of-the-art.

PARC’s second generation “Dover” laser printer, from 1976

The Alto Aloha Network

How would Metcalfe bridge this huge gap in speed? Finally, we come back to ALOHAnet, for it turns out that Metcalfe knew packet broadcasting better than anyone. The previous summer, while staying in Washington with Steve Crocker on ARPA business, Metcalfe had pulled down a volume of the proceedings of the Fall Joint Computer Conference, and came across Abramson’s ALOHAnet paper. He immediately realized that the basic idea was brilliant, but the implementation under-baked. With a few tweaks in the algorithm and assumptions – notably having senders listen for a clear channel before trying to broadcast, and exponentially increasing the re-transmission interval in response to congestion – he could achieve a bandwidth utilization of 90%, rather than the 15% calculated by Abramson. Metcalfe took a short leave from PARC to visit Hawaii, where he integrated his ideas about ALOHAnet into a revised version of his PhD thesis, after Harvard had rejected the original due to a lack of theoretical grounding.

Metcalfe originally called his plan to bring packet broadcasting to PARC the “ALTO ALOHA network”. Then, in a memo in May 1973, he rechristened it as Ether Net, invoking the luminiferous ether which nineteenth-century physicists had supposed to carry all electromagnetic radiation.
“This will keep things general,” he wrote, “and who knows what other media will prove better than cable for a broadcast network; maybe radio or telephone circuits, or power wiring or frequency-multiplexed CATV, or microwave environments, or even combinations thereof.”

A sketch from Metcalfe’s 1973 Ether Net memo.

Starting in June 1973, Metcalfe worked with another PARC engineer, David Boggs, to turn his theoretical concepts for a new high-speed network into a working system. Rather than sending signals over the air like ALOHA, they would bind the radio spectrum within the confines of a coaxial cable, greatly increasing the available bandwidth relative to the limited radio band allocated to the Menehune. The transmission medium itself was entirely passive, requiring no switching equipment at all for routing messages. It was cheap and easy to connect hundreds of workstations to it – PARC engineers just ran coax cable through the building and added taps as needed – and it could handle three million bits per second.

Robert Metcalfe and David Boggs in the 1980s, several years after Metcalfe founded 3Com to sell Ethernet technology

By the fall of 1974, the complete prototype of the office of the future was up and running in Palo Alto, California – the initial batch of thirty Altos with drawing, email, and word processing software, Starkweather’s prototype printer, and Ethernet to connect it all together. A central file server for storing data too large for the Alto’s local disk provided the only other shared resource. PARC originally offered the Ethernet controller as an optional accessory on the Alto, but once the system went live it became clear that it was essential, as the coax coursed with a steady flow of messages, many of them emerging from the printer as technical reports, memos, or academic papers.

Simultaneously with the development of the Alto, another PARC project attempted to carry the resource-sharing vision forward in a new direction. The PARC On Line Office System (POLOS), designed and implemented by Bill English and other refugees from Doug Engelbart’s oN-Line System (NLS) project at Stanford Research Institute, consisted of a network of Data General Nova minicomputers. Rather than dedicating each machine to a particular user’s needs, however, POLOS would shuttle work around among them, in order to serve the needs of the system as a whole as efficiently as possible. One machine might be rendering displays for several users, while another handled ARPANET traffic, and yet another ran word processing software. The complexity and coordination overhead of this approach proved unmanageable, and the scheme collapsed under its own weight.

Meanwhile, nothing more clearly showed Taylor’s emphatic rejection of the resource-sharing approach to networking than his embrace of the Alto. Alan Kay, Butler Lampson, and the other minds behind the Alto had brought all the computational power a user might need onto an independent computer at their desk, intended to be shared with no one. The function of the network was not to provide access to a heterogeneous set of computer resources, but to carry messages among these islands, each entire of itself, or perhaps to deposit them on some distant shore – for printing or long-term storage.
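The shared-channel contention idea at the heart of both ALOHAnet and Ethernet is easy to explore numerically. The rough Monte Carlo sketch below, a modern illustration rather than period code, models the pure, unslotted ALOHA case described earlier, in which every terminal transmits whenever it likes and a frame survives only if no other frame overlaps it. The classical analysis gives a throughput of G·e^(-2G) at offered load G, which peaks a little below 20% of the channel, the same order as the ceiling Abramson calculated and the figure that Metcalfe’s listen-before-sending and backoff refinements so dramatically improved.

```python
# Rough Monte Carlo estimate of pure-ALOHA channel utilization.
# Frames have unit length; a frame succeeds only if no other frame
# starts within one frame-time on either side of it.
import math
import random

def aloha_utilization(offered_load: float, duration: float = 200_000) -> float:
    n_frames = int(offered_load * duration)            # expected traffic
    starts = sorted(random.uniform(0, duration) for _ in range(n_frames))
    successes = 0
    for i, t in enumerate(starts):
        clear_before = i == 0 or t - starts[i - 1] >= 1.0
        clear_after = i == n_frames - 1 or starts[i + 1] - t >= 1.0
        successes += clear_before and clear_after
    return successes / duration    # fraction of channel time carrying good frames

for g in (0.25, 0.5, 1.0):
    print(f"offered load {g:4}: simulated {aloha_utilization(g):.3f}, "
          f"theory {g * math.exp(-2 * g):.3f}")
```

Pushing the offered load past the peak only reduces useful throughput further, which is the “collapse into a chaos of noise” that Abramson’s calculations predicted.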
While both email and ALOHA developed under the umbrella of ARPA, the emergence of Ethernet was one of several signs in the first half of the 1970s that computer networking had become something too large and diverse for a single organization to dominate, a trend that we’ll continue to follow next time.

Further Reading

Michael Hiltzik, Dealers of Lightning (1999)

James Pelkey, The History of Computer Communications, 1968-1988 (2007) [http://www.historyofcomputercommunications.info/]

M. Mitchell Waldrop, The Dream Machine (2001)

Internet Ascendant, Part 2: Going Private and Going Public

In the summer of 1986, Senator Al Gore, Jr., of Tennessee introduced an amendment to the Congressional Act that authorized the budget of the National Science Foundation (NSF). He called for the federal government to study the possibilities for “communications networks for supercomputers at universities and Federal research facilities.” To explain the purpose of this legislation, Gore called on a striking analogy:

One promising technology is the development of fiber optic systems for voice and data transmission. Eventually we will see a system of fiber optic systems being installed nationwide. America’s highways transport people and materials across the country. Federal freeways connect with state highways which connect in turn with county roads and city streets. To transport data and ideas, we will need a telecommunications highway connecting users coast to coast, state to state, city to city. The study required in this amendment will identify the problems and opportunities the nation will face in establishing that highway.1

In the following years, Gore and his allies would call for the creation of an “information superhighway”, or, more formally, a national information infrastructure (NII). As he intended, Gore’s analogy to the federal highway system summons to mind a central exchange that would bind together various local and regional networks, letting all American citizens communicate with one another. However, the analogy also misleads – Gore did not propose the creation of a federally-funded and maintained data network. He envisioned that the information superhighway, unlike its concrete and asphalt namesake, would come into being through the action of market forces, within a regulatory framework that would ensure competition, guarantee open, equal access to any service provider (what would later be known as “net neutrality”), and provide subsidies or other mechanisms to ensure universal service to the least fortunate members of society, preventing the emergence of a gap between the information rich and information poor.2

Over the following decade, Congress slowly developed a policy response to the growing importance of computer networks to the American research community, to education, and eventually to society as a whole. Congress’ slow march towards an NII policy, however, could not keep up with the rapidly growing NSFNET, overseen by the neighboring bureaucracy of the executive branch. Despite its reputation for sclerosis, bureaucracy was created exactly because of its capacity, unlike a legislature, to respond to events immediately, without deliberation. And so it happened that, between 1988 and 1993, the NSF crafted the policies that would determine how the Internet became private, and thus went public. It had to deal every year with novel demands and expectations from NSFNET’s users and peer networks. In response, it made decisions on the fly, decisions which rapidly outpaced Congressional plans for guiding the development of an information superhighway. These decisions rested largely in the hands of a single man – Stephen Wolff.

Acceptable Use

Wolff earned a Ph.D. in electrical engineering at Princeton in 1961 (where he would have been a rough contemporary of Bob Kahn), and began what might have been a comfortable academic career, with a post-doctoral stint at Imperial College, followed by several years teaching at Johns Hopkins. But then he shifted gears, and took a position at the Ballistic Research Laboratory in Aberdeen, Maryland.
He stayed there for most of the 1970s and early 1980s, researching communications and computing systems for the U.S. Army. He introduced Unix into the lab’s offices, and managed Aberdeen’s connection to the ARPANET.3 In 1986, the NSF recruited him to manage the NSF’s supercomputing backbone – he was a natural fit, given his experience connecting Army supercomputers to ARPANET. He became the principal architect of NSFNET’s evolution from that point until his departure in 1994, when he entered the private sector as a manager for Cisco Systems.

The original intended function of the net that Wolff was hired to manage had been to connect researchers across the U.S. to NSF-funded supercomputing centers. As we saw last time, however, once Wolff and the other network managers saw how much demand the initial backbone had engendered, they quickly developed a new vision of NSFNET, as a communications grid for the entire American research and post-secondary education community. However, Wolff did not want the government to be in the business of supplying network services on a permanent basis. In his view, the NSF’s role was to prime the pump, creating the initial demand needed to get a commercial networking services sector off the ground. Once that happened, Wolff felt it would be improper for a government entity to be in competition with viable for-profit businesses. So he intended to get NSF out of the way by privatizing the network, handing over control of the backbone to unsubsidized private entities and letting the market take over.

This was very much in the spirit of the times. Across the Western world, and across most of the political spectrum, government leaders of the 1980s touted privatization and deregulation as the best means to unleash economic growth and innovation after the relative stagnation of the 1970s. As one example among many, around the same time that NSFNET was getting off the ground, the FCC knocked down several decades-old constraints on corporations involved in broadcasting. In 1985, it removed the restriction on owning print and broadcast media in the same locality, and two years later it nullified the fairness doctrine, which had required broadcasters to present multiple views on public-policy debates.

From his post at NSF, Wolff had several levers at hand for accomplishing his goals. The first lay in the interpretation and enforcement of the network’s acceptable use policy (AUP). In accordance with NSF’s mission, the initial policy for the NSFNET backbone, in effect until June 1990, required all uses of the network to be in support of “scientific research and other scholarly activities.” This is quite restrictive indeed, and would seem to eliminate any possibility of commercial use of the network. But Wolff chose to interpret the policy liberally. Regular mailing list postings about new product releases from a corporation that sold data processing software – was that not in support of scientific research? What about the decision to allow MCI’s email system to connect to the backbone, at the urging of Vint Cerf, who had left government employ to oversee the development of MCI Mail? Wolff rationalized this – and other later interconnections to commercial email systems such as CompuServe’s – as in support of research, by making it possible for researchers to communicate digitally with a wider range of people that they might need to contact in the pursuit of their work. A stretch, perhaps.
But Wolff saw that allowing some commercial traffic on the same infrastructure that was used for public NSF traffic would encourage the private investment needed to support academic and educational use on a permanent basis.

Wolff’s strategy of opening the door of NSFNET as far as possible to commercial entities got an assist from Congress in 1992, when Congressman Rick Boucher, who helped oversee NSF as chair of the Science Subcommittee, sponsored an amendment to the NSF charter which authorized any additional uses of NSFNET that would “tend to increase the overall capabilities of the networks to support such research and education activities.” This was an ex post facto validation of Wolff’s approach to commercial traffic, allowing virtually any activity as long as it produced profits that encouraged more private investment into NSFNET and its peer networks.

Dual-Use Networks

Wolff also fostered the commercial development of networking by supporting the regional networks’ reuse of their networking hardware for commercial traffic. As you may recall, the NSF backbone linked together a variety of not-for-profit regional nets, from NYSERNet in New York to Sesquinet in Texas to BARRNet in northern California. NSF did not directly fund the regional networks, but it did subsidize them indirectly, via the money it provided to labs and universities to offset the costs of their connection to their neighborhood regional net. Several of the regional nets then used this same subsidized infrastructure to spin off a for-profit commercial enterprise, selling network access to the public over the very same wires used for the research and education purposes sponsored by NSF. Wolff encouraged them to do so, seeing this as yet another way to accelerate the transition of the nation’s research and education infrastructure to private control.

This, too, accorded neatly with the political spirit of the 1980s, which encouraged private enterprise to profit from public largesse, in the expectation that the public would benefit indirectly through economic growth. One can see parallels with the dual-use regional networks in the 1980 Bayh-Dole Act, which defaulted ownership of patents derived from government-funded research to the organization performing the work, not to the government that paid for it.

The most prominent example of dual-use in action was PSINet, a for-profit company initially founded as Performance Systems International in 1988. The company was created by William Schrader and Martin Schoffstall, respectively the co-founder of NYSERNet and one of its vice presidents. Schoffstall, a former BBN engineer and co-author of the Simple Network Management Protocol (SNMP) for managing the devices on an IP network, was the key technical leader. Schrader, an ambitious Cornell biology major and MBA who had helped his alma mater set up its supercomputing center and get it connected to NSFNET, provided the business drive. He firmly believed that NYSERNet should be selling service to businesses, not just educational institutions. When the rest of the board disagreed, he quit to found his own company, first contracting with NYSERNet for service, and later raising enough money to acquire its assets.
PSINet thus became one of the earliest commercial internet service providers, while continuing to provide non-profit service to colleges and universities seeking access to the NSFNET backbone.4

Wolff’s final source of leverage for encouraging a commercial Internet lay in his role as manager of the contracts with the Merit-IBM-MCI consortium that operated the backbone. The initial impetus for change in this dimension came not from Wolff, however, but from the backbone operators themselves.

A For-Profit Backbone

MCI and its peers in the telecommunications industry had a strong incentive to find or create more demand for computer data communications. They had spent the 1980s upgrading their long-line networks from coaxial cable and microwave – already much higher capacity than the old copper lines – to fiber optic cables. These cables, which transmitted laser light through glass, had tremendous capacity, limited mainly by the technology in the transmitters and receivers on either end, rather than the cable itself. And that capacity was far from saturated. By the early 1990s, many companies had deployed OC-48 transmission equipment with 2.5 Gbps of capacity, an almost unimaginable figure a decade earlier. An explosion in data traffic would therefore bring in new revenue at very little marginal cost – almost pure profit.5

The desire to gain expertise in the coming market in data communications helps explain why MCI was willing to sign on to the NSFNET bid proposed by Merit, which massively undercut the competing bids (at $14 million for five years, versus the $40 million and $25 million proposed by their competitors6), and surely implied a short-term financial loss for MCI and IBM. But by 1989, they hoped to start turning a profit from their investment. The existing backbone was approaching the saturation point, with 500 million packets a month, a 500% year-over-year increase.7 So, when NSF asked Merit to upgrade the backbone from 1.5 Mbps T1 lines to 45 Mbps T3, they took the opportunity to propose to Wolff a new contractual arrangement. T3 was a new frontier in networking – no prior experience or equipment existed for digital networks of this bandwidth, and so the companies argued that more private investment would be needed, requiring a restructuring that would allow IBM and Merit to share the new infrastructure with for-profit commercial traffic – a dual-use backbone. To achieve this, the consortium would form a new non-profit corporation, Advanced Network & Services, Inc. (ANS), which would supply T3 networking services to NSF. A subsidiary called ANS CO+RE Systems would sell the same services at a profit to any clients willing to pay.

Wolff agreed to this, seeing it as just another step in the transition of the network towards commercial control. Moreover, he feared that continuing to block commercial exploitation of the backbone would lead to a bifurcation of the network, with suppliers like ANS doing an end-run around NSFNET to create their own, separate, commercial Internet.

Up to that point, Wolff’s plan for gradually getting NSF out of the way had no specific target date or planned milestones. A workshop on the topic held at Harvard in March 1990, in which Wolff and many other early Internet leaders participated, considered a variety of options without laying out any concrete plans.8 It was ANS’ stratagem that triggered the cascade of events that led directly to the full privatization and commercialization of NSFNET. It began with a backlash.
Despite Wolff’s good intentions, IBM and MCI’s ANS maneuver created a great deal of disgruntlement in the networking community. It became a problem exactly because of the for-profit networks attached to the backbone that Wolff had promoted. So far they had gotten along reasonably well with one another, because they all operated as peers on the same terms. But with ANS, a for-profit company held a de facto monopoly on the backbone at the center of the Internet.9 Moreover, despite Wolff’s efforts to interpret the AUP loosely, ANS chose to interpret it strictly, and refused to interconnect the non-profit portion of the backbone (for NSF traffic) with any of the for-profit networks like PSI, since that would require a direct mixing of commercial and non-commercial traffic. When this created an uproar, they backpedaled, and came up with a new policy, allowing interconnection for a fee based on traffic volume.

PSINet would have none of this. In the summer of 1991, they banded together with two other for-profit Internet service providers – UUNET, which had begun by selling commercial access to Usenet before adding Internet service; and the California Education and Research Federation Network, or CERFNet, operated by General Atomics – to form their own exchange, bypassing the ANS backbone. The Commercial Internet Exchange (CIX) consisted at first of just a single routing center in Washington, D.C., which could transfer traffic among the three networks. They agreed to peer at no charge, regardless of the relative traffic volume, with each network paying the same fee to CIX to operate the router. New routers in Chicago and Silicon Valley soon followed, and other networks looking to avoid ANS’ fees also signed on.

Divestiture

Rick Boucher, the Congressman whom we met above as a supporter of NSF commercialization, nonetheless requested an investigation of the propriety of Wolff’s actions in the ANS affair by the Office of the Inspector General. It found NSF’s actions precipitous, but not malicious or corrupt. Nevertheless, Wolff saw that the time had come to divest control of the backbone. With ANS CO+RE and CIX, privatization and commercialization had begun in earnest, but in a way that risked splitting the unitary Internet into multiple disconnected fragments, as CIX and ANS refused to connect with one another. NSF therefore drafted a plan for a new, privatized network architecture in the summer of 1992, released it for public comment, and finalized it in May of 1993. NSFNET would shut down in the spring of 1995, and its assets would revert to IBM and MCI. The regional networks could continue to operate, with financial support from the NSF gradually phasing out over a four-year period, but would have to contract with a private ISP for internet access.

But in a world of many competitive internet access providers, what would replace the backbone? What mechanism would link these opposed private interests into a cohesive whole? Wolff’s answer was inspired by the exchanges already built by cooperatives like CIX – NSF would contract out the creation of four Network Access Points (NAPs), routing sites where various vendors could exchange traffic. Having four separate contracts would avoid repeating the ANS controversy, by preventing a monopoly on the points of exchange. One NAP would reside at the pre-existing, and cheekily named, Metropolitan Area Ethernet East (MAE-East) in Vienna, Virginia, operated by Metropolitan Fiber Systems (MFS).
MAE-West, operated by Pacific Bell, was established in San Jose, California; Sprint operated another NAP in Pennsauken, New Jersey, and Ameritech one in Chicago. The transition went smoothly10, and NSF decommissioned the backbone right on schedule, on April 30, 1995.11

The Break-up

Though Gore and others often invoked the “information superhighway” as a metaphor for digital networks, there was never serious consideration in Congress of using the federal highway system as a direct policy model. The federal government paid for the building and maintenance of interstate highways in order to provide a robust transportation network for the entire country. But in an era when both major parties took deregulation and privatization for granted as good policy, a state-backed system of networks and information services on the French model of Transpac and Minitel was not up for consideration.12 Instead, the most attractive policy model for Congress as it planned for the future of telecommunications was the long-distance market created by the break-up of the Bell System between 1982 and 1984.

In 1974, the Justice Department filed suit against AT&T, its first major suit against the organization since the 1950s, alleging that it had engaged in anti-competitive behavior in violation of the Sherman Antitrust Act. Specifically, it accused the company of using its market power to exclude various innovative new businesses from the market – mobile radio operators, data networks, satellite carriers, makers of specialized terminal equipment, and more. The suit thus clearly drew much of its impetus from the ongoing disputes since the early 1960s (described in an earlier installment) between AT&T and the likes of MCI and Carterfone.

When it became clear that the Justice Department meant business, and intended to break the power of AT&T, the company at first sought redress from Congress. John de Butts, chairman and CEO since 1972, attempted to push a “Bell bill” – formally the Consumer Communications Reform Act – through Congress. It would have enshrined into law AT&T’s argument that the benefits of a single, universal telephone network far outweighed any risk of abusive monopoly, risks which in any case the FCC could already effectively check. But the proposal received stiff opposition in the House Subcommittee on Communications, and never reached a vote on the floor of either Congressional chamber.

In a change of tactics, in 1979 the board replaced the combative de Butts – who had once openly declared to an audience of state telecommunications regulators the heresy that he opposed competition and espoused monopoly – with the more conciliatory Charles Brown. But it was too late by then to stop the momentum of the antitrust case, and it became increasingly clear to the company’s leadership that they would not prevail. In January 1982, therefore, Brown agreed to a consent decree that would have the presiding judge in the case, Harold Greene, oversee the break-up of the Bell System into its constituent parts. The various Bell companies that brought copper to the customer’s premises, which generally operated by state (New Jersey Bell, Indiana Bell, and so forth), were carved up into seven blocks called Regional Bell Operating Companies (RBOCs). Working clockwise around the country, they were NYNEX in the northeast, Bell Atlantic, BellSouth, Southwestern Bell, Pacific Telesis, US West, and Ameritech.
All of them remained regulated entities with an effective monopoly over local traffic in their region, but were forbidden from entering other telecom markets. AT&T itself retained the “long lines” division for long-distance traffic. Unlike local phone service, however, the settlement opened this market to free competition from any entrant willing and able to pay the interconnection fees to transfer calls in and out of the RBOCs. A residential customer in Indiana would always have Ameritech as their local telephone company, but could sign up for long-distance service with anyone. However, splitting apart the local and long-distance markets meant forgoing the subsidies that AT&T had long routed to rural telephone subscribers, under-charging them by over-charging wealthy long-distance users. A sudden spike in rural telephone prices across the nation was not politically tenable, so the deal preserved these transfers via a new organization, the non-profit National Exchange Carrier Association, which collected fees from the long-distance companies and distributed them to the RBOCs.

The new structure worked. Two major competitors entered the market in the 1980s, MCI and Sprint, and cut deeply into AT&T’s market share. Long-distance prices fell rapidly. Though it is arguable how much of this was due to competition per se, as opposed to the advent of ultra-high-bandwidth fiber optic networks, the arrangement was generally seen as a great success for deregulation and a clear argument for the power of market forces to modernize formerly hidebound industries. This market structure, created ad hoc by court fiat but evidently highly successful, provided the template from which Congress drew in the mid-1990s to finally resolve the question of what telecom policy for the Internet era would look like.

Second Time Isn’t The Charm

Prior to the main event, there was one brief preliminary. The High Performance Computing Act of 1991 was important tactically, but not strategically. It advanced no major new policy initiatives. Its primary significance lay in providing additional funding and Congressional backing for what Wolff and the NSF were already doing and intended to keep doing – providing networking services for the research community, subsidizing academic institutions’ connections to NSFNET, and continuing to upgrade the backbone infrastructure.

Then came the accession of the 104th Congress in January 1995. Republicans took control of both the Senate and the House for the first time in forty years, and they came with an agenda to fight crime, cut taxes, shrink and reform government, and uphold moral righteousness. Gore and his allies had long touted universal access as a key component of the National Information Infrastructure, but with this shift in power the prospects for a strong universal service component to telecommunications reform diminished from minimal to none. Instead, the main legislative course would consist of regulatory changes to foster competition in telecommunications and Internet access, with a serving of bowdlerization on the side.

The market conditions looked promising. Circa 1992, the major players in the telecommunications industry were numerous. In the traditional telephone industry there were the seven RBOCs, GTE, and three large long-distance companies – AT&T, MCI, and Sprint – along with many smaller ones.
The new up-and-comers included Internet service providers such as UUNET and PSINet, as well as the IBM/MCI backbone spin-off, ANS, and other companies trying to build out their local fiber networks, such as Metropolitan Fiber Systems (MFS). BBN, the contractor behind ARPANET, had begun to build its own small Internet empire, snapping up some of the regional networks that orbited around NSFNET – NEARNET in New England, BARRNet in the Bay Area, and SURAnet in the southeast of the U.S.

To preserve and expand this competitive landscape would be the primary goal of the 1996 Telecommunications Act, the only major rewrite of communications policy since the Communications Act of 1934. It intended to reshape telecommunications law for the digital age. The regulatory regime established by the original act siloed industries by their physical transmission medium – telephony, broadcast radio and television, cable TV – each in its own box, with its own rules, and generally forbidden to meddle in the others’ business. As we have seen, sometimes regulators even created silos within silos, segregating the long-distance and local telephone markets. This made less and less sense as media of all types were reduced to fungible digital bits, which could be commingled on the same optical fiber, satellite transmission, or ethernet cable.

The intent of the 1996 Act, shared by Democrats and Republicans alike, was to tear down these barriers, these “Berlin Walls of regulation”, as Gore’s own summary of the act put it.13 A complete itemization of the regulatory changes in this doorstopper of a bill is not possible here, but a few examples provide a taste of its character. Among other things, it allowed the RBOCs to compete in long-distance telephone markets, lifted restrictions forbidding the same entity from owning both broadcasting and cable services, and axed the rules that prevented concentration of radio station ownership.

The risk, though, of simply removing all regulation, opening the floodgates and letting any entity participate in any market, was to recreate AT&T on an even larger scale, a monopolistic megacorp that would dominate all forms of communication and stifle all competitors. Most worrisome of all was control over the so-called last mile – from the local switching office to the customer’s home or office. Building an inter-urban network connecting the major cities of the U.S. was expensive but not prohibitive; several companies had done so in recent decades, from Sprint to UUNET. To replicate all the copper or cable to every home in even one urban area was another matter. Local competition in landline communications had scarcely existed since the early wildcat days of the telephone, when tangled skeins of iron wire criss-crossed urban streets. In the case of the Internet, the concern centered especially on high-speed, direct-to-the-premises data services, later known as broadband. For years, competition had flourished among dial-up Internet access providers, because all the end user required to reach the provider’s computer was access to a dial tone. But this would not be the case by default for newer services that did not use the dial telephone network. The legislative solution to this conundrum was to create the concept of the “CLEC” – competitive local exchange carrier.
The RBOCs, now referred to as “ILECs” (incumbent local exchange carriers), would be allowed full, unrestricted access to the long-distance market only once they had unbundled their networks by allowing the CLECs, which would provide their own telecommunications services to homes and businesses, to interconnect with and lease the incumbents’ infrastructure. This would enable competitive ISPs and other new service providers to continue to get access to the local loop even when dial-up service became obsolete – creating, in effect, a dial tone for broadband. The CLECs, in this model, filled the same role as the long-distance providers in the post-break-up telephone market. Able to freely interconnect at reasonable fees to the existing local phone networks, they would inject competition into a market previously dominated by the problem of natural monopoly.

Besides the creation of the CLECs, the other major part of the bill that affected the Internet addressed the Republicans’ moral agenda rather than their economic one. Title V, known as the Communications Decency Act, forbade the transmission of indecent or offensive material – depicting or describing “sexual or excretory activities or organs” – on any part of the Internet accessible to minors. This, in effect, was an extension of the obscenity and indecency rules that governed broadcasting into the world of interactive computing services.

How, then, did this sweeping act fare in achieving its goals? In most dimensions it proved a failure. Easiest to dispose of is the Communications Decency Act, which the Supreme Court struck down quickly (in 1997) as a violation of the First Amendment. Several parts of Title V did survive review, however, including Section 230, the most important piece of the entire bill for the Internet’s future. It allows websites that host user-created content to exist without the fear of constant lawsuits, and protects the continued existence of everything from giants like Facebook and Twitter to tiny hobby bulletin boards.

The fate of the efforts to promote competition within the local loop took longer to play out, but proved no more successful than the controls on obscenity. What about the CLECs, given access to the incumbent cable and telephone infrastructure so that they could compete on price and service offerings? The law required FCC rulemaking to hash out the details of exactly what kind of unbundling had to be offered. The incumbents fought hard in the courts against any such ruling that would open up their lines to competition, repeatedly winning injunctions against the FCC, while threatening that introducing competitors would halt their imminent plans for bringing fiber to the home.

Then, with the arrival of the Bush Administration and new chairman Michael Powell in 2001, the FCC became actively hostile to the original goals of the Telecommunications Act. Powell believed that the need for alternative broadband access would be satisfied by intermodal competition among cable, telephone, power-line, cellular, and other wireless networks. No more FCC rules in favor of CLECs would be forthcoming. For a brief time around the year 2000, it was possible to subscribe to third-party high-speed internet access using the infrastructure of your local telephone or cable provider. After that, the most central of the Telecom Act’s pro-competitive measures became, in effect, a dead letter.
The much-ballyhooed fiber-to-the-home only began to reach a significant number of homes after 2010, and then only with reluctance on the part of the incumbents.14 As author Fred Goldstein put it, the incumbents had “gained a fig leaf of competition without accepting serious market share losses.”15

During most of the twentieth century, networked industries in the U.S. had sprouted in a burst of entrepreneurial energy and then been fitted into the matrix of a regulatory framework as they grew large and important enough to affect the public interest. Broadcasting and cable television had followed this pattern. So had trucking and the airlines. But with the CLECs all but dead by the early 2000s, the Communications Decency Act struck down, and other attempts to control the Internet such as the Clipper chip16 stymied, the Internet would follow an opposite course. Having come to life under the guiding hand of the state, it would now be allowed to develop in an almost entirely laissez-faire fashion. The NAP framework established by the NSF at the hand-off of the backbone would be the last major government intervention in the structure of the Internet. This was true at both the transport layer – the networks such as Verizon and AT&T that transported raw data – and the applications layer – software services from portals like Yahoo! to search engines like Google to online stores like Amazon.

In our last chapter, we will look at the consequences of this fact, briefly sketching the evolution of the Internet in the U.S. from the mid-1990s onward.

[Previous] [Next]

1. Quoted in Richard Wiggins, “Al Gore and the Creation of the Internet” (2000).
2. “Remarks by Vice President Al Gore at National Press Club,” December 21, 1993.
3. Biographical details on Wolff’s life prior to NSF are scarce – I have recorded all of them that I could find here. Notably, I have not been able to find even his date and place of birth.
4. Schrader and PSINet rode high on the Internet bubble in the late 1990s, acquiring other businesses aggressively, and, most extravagantly, purchasing the naming rights to the football stadium of the NFL’s newest expansion team, the Baltimore Ravens. Schrader tempted fate with a 1997 article entitled “Why the Internet Crash Will Never Happen.” Unfortunately for him, it did happen, bringing about his ouster from the company in 2001 and PSINet’s bankruptcy the following year.
5. To get a sense of how fast the cost of bandwidth was declining – in the mid-1980s, leasing a T1 line from New York to L.A. would cost $60,000 per month. Twenty years later, an OC-3 circuit with 100 times the capacity cost only $5,000, more than a thousand-fold reduction in price per unit of capacity. See Fred R. Goldstein, The Great Telecom Meltdown, 95-96. Goldstein states that the 1.544 Mbps T1/DS1 line has 1/84th the capacity of OC-3, rather than 1/100th, a discrepancy I can’t account for. But this has little effect on the overall math.
6. Office of Inspector General, “Review of NSFNET,” March 23, 1993.
7. Fraser, “NSFNET: A Partnership for High-Speed Networking, Final Report,” 27.
8. Brian Kahin, “RFC 1192: Commercialization of the Internet Summary Report,” November 1990.
9. John Markoff, “Data Network Raises Monopoly Fear,” New York Times, December 19, 1991.
10. Though many other technical details had to be sorted out; see Susan R. Harris and Elise Gerich, “Retiring the NSFNET Backbone Service: Chronicling the End of an Era,” ConneXions, April 1996.
11. The most problematic part of privatization proved to have nothing to do with the hardware infrastructure of the network, but instead with handing over control of the domain name system (DNS). For most of its history, its management had depended on the judgment of a single man – Jon Postel. But businesses investing millions in a commercial internet would not stand for such an ad hoc system. So the government handed control of the domain name system to a contractor, Network Solutions. The NSF had no real mechanism for regulatory oversight of DNS (though they might have done better by splitting the control of different top-level domains (TLDs) among different contractors), and Congress failed to step in to create any kind of regulatory regime. Control changed once again in 1998 to the non-profit ICANN (Internet Corporation for Assigned Names and Numbers), but the management of DNS still remains a thorny problem.
12. The only quasi-exception to this focus on fostering competition was a proposal by Senator Daniel Inouye to reserve 20% of Internet traffic for public use: Steve Behrens, “Inouye Bill Would Reserve Capacity on Infohighway,” Current, June 20, 1994. Unsurprisingly, it went nowhere.
13. Al Gore, “A Short Summary of the Telecommunications Reform Act of 1996.”
14. Jon Brodkin, “AT&T kills DSL, leaves tens of millions of homes without fiber Internet,” Ars Technica, October 5, 2020.
15. Goldstein, The Great Telecom Meltdown, 145.
16. The Clipper chip was a proposed hardware backdoor that would give the government the ability to bypass any U.S.-created encryption software.

Further Reading

Janet Abbate, Inventing the Internet (1999)

Karen D. Fraser, “NSFNET: A Partnership for High-Speed Networking, Final Report” (1996)

Shane Greenstein, How the Internet Became Commercial (2015)

Yasha Levine, Surveillance Valley: The Secret Military History of the Internet (2018)

Rajiv Shah and Jay P. Kesan, “The Privatization of the Internet’s Backbone Network,” Journal of Broadcasting & Electronic Media (2007)

Steamships, Part 2: The Further Adventures of Isambard Kingdom Brunel

Iron Empire

As far back as 1832, Macgregor Laird had taken the iron ship Alburkah to Africa and up the Niger, making it among the first ships of such construction to take to the open sea. But the use of iron hulls in British inland navigation can be traced decades earlier, beginning with river barges in the 1780s. An iron plate had far more tensile strength than even an oaken board of the same thickness. This made an iron-hulled ship stronger, lighter, and more spacious inside than an equivalent wooden vessel: a two-inch thickness of iron might replace two feet of timber.[1] The downsides included susceptibility to corrosion and barnacles, interference with compasses, and, at least at first, the expense of the material.

As we have already seen, the larger the ship, the smaller the proportion of its cargo space that it would need for fuel; but the Great Western and British Queen pushed the limits of the practical size of a wooden ship (in fact, Brunel had bound Great Western’s hull with iron straps to bolster its longitudinal strength and prevent it from breaking in heavy seas).[2] The price of wood in Britain grew ever more dear as her ancient forests disappeared, but to build more massive ships economically also required iron prices to fall: and they did just that, starting in the 1830s, because of a surprisingly simple change in technique.

Ironmasters had noticed long ago that their furnaces produced more metal from the same amount of fuel in the winter months. They assumed that the cooler air produced this result, and so by the nineteenth century it had become a basic tenet of the iron-making business that one should blast cool air into the furnace with the bellows to maximize its efficiency.[3] This common wisdom was mistaken; entirely backwards, in fact. In 1825, a Glasgow colliery engineer named James Neilson found that a hotter blast made the furnaces more efficient (it was the dryness, not the coolness, of the winter air that had made the difference). Neilson was asked to consult at an ironworks in the village of Muirkirk which was having difficulty with its furnace. He realized that heating the blast air would expand it, and thus increase the pressure of the air flowing into the furnace, strengthening the blast. In 1828 he patented the method of using a stove to heat the blast air. He convinced the Clyde Ironworks to adopt it, and together they perfected the method over the following few years.

The results were astounding. A 600° F blast reduced the coal consumption of the furnace by two-thirds and increased output from about five-and-a-half tons of pig iron per day to over eight.[4] On top of all that, this simple innovation allowed the use of plain coal as fuel in lieu of (more expensive) refined coke. Ironmakers had adopted coke in the 1750s because when iron was smelted with raw coal the impurities (especially sulfur) in the fuel made the resulting metal too brittle. But the hot blast sent the temperature inside the furnace so high that it drove the sulfur out in the slag waste rather than baking it into the iron. During the 1830s and 40s, Neilson’s hot blast technique spread from Scotland across all of Great Britain, and drove a rapid increase in iron production, from 0.7 million tons in 1830 to over two million in 1850.
This cut the market price per ton of pig iron in half.[5] With its vast reserves of coal and iron, made accessible with the power of steam pumps (themselves made in Britain of British iron and fueled by British coal), Britain was perfectly placed to supply the demand induced by this decline in price. Much of the growth in iron output went to exports, strengthening the commercial sinews of the British empire while providing the raw material of industrialization to the rest of the world. The frenzies of railroad building in the United States and continental Europe in the middle of the nineteenth century relied heavily on British rails made from British iron: in 1849, for example, the Baltimore and Ohio railroad secured 22,000 tons of rails from a Welsh trading concern.[6] The hunger of the rapidly growing United States for iron proved insatiable; circa 1850 the young nation imported about 450,000 tons of British iron per year.[7]

Good Engineering Makes Bad Business

The virtues of iron were also soon on the brain of Isambard Kingdom Brunel. The Great Western Steam Ship Company’s plan for a successor to Great Western began sensibly enough; they would build a slightly improved sister ship of similar design. But Brunel and his partners were seduced, in the fall of 1838, by the appearance in Bristol harbor of an all-iron channel steamer called Rainbow, the largest such ship yet built. Brunel’s associates Claxton and Patterson took a reconnaissance voyage on her to Antwerp, and upon their return all three men became convinced that they should build in iron.[8]

As if that were not enough novelty to take on in one design, in May 1840 another innovative ship steamed into Bristol harbor, leaving Brunel and his associates swooning once more. The aptly named Archimedes, designed by Francis Petit Smith, swam through the water with unprecedented smoothness and efficiency, powered by a screw propeller rather than paddle wheels.[9] Any well-educated nineteenth-century engineer knew that paddles wasted a huge amount of energy pushing water down at the front of the wheel and lifting it up at the back. Nor was screw propulsion a surprising new idea in 1840. As we have seen, early steamboat inventors tried out just about every imaginable means of pushing or pulling a ship. In his very thorough Treatise on the Screw Propeller, the engineer John Bourne cites some fifty-odd proposals, patents, or practical attempts at screw propulsion prior to Smith’s.[10] After so many failures, most practical engineers assumed (reasonably enough) that the screw could never replace the proven (albeit wasteful) paddlewheel. The difficulties were numerous, including reducing vibration, transmitting power effectively to the screw, and choosing its shape, size, and angle among many potential alternatives. Most fundamental, though, was producing sufficient thrust: early steam engines operated at modest speed, cycling every three seconds or so. At twenty revolutions per minute, a screw would have to be of an impractical diameter to actually push a ship forward rapidly. Smith overcame this last problem with a gearing system that allowed the propeller shaft to turn 140 times per minute. His propeller design at first consisted of a true helical screw of two turns (which created excessive friction), then later a single turn.
Then, in 1840 he refitted Archimedes with a more recognizably modern propeller with two blades (each of half a turn).[11] Even with these design improvements, Brunel found that noise and vibration made the Archimedes of 1840 “uninhabitable” for passengers.[12] But he had unshakeable faith in its potential. No doubt, advocates of the screw could tout many potential advantages over the paddlewheel: a lower center of gravity, a more spacious interior, more maneuverability in narrow channels, and more efficient use of fuel (especially in headwinds, which caught the paddles full on, and rolling sidelong waves, which would lift one paddlewheel or the other out of the water).[13]

So, the weary investors of the Great Western Steam Ship Company saw the timetable of the Great Britain’s construction set back once more, in order to incorporate a screw. As steamship historian Stephen Fox put it, “[i]n commercial terms, what the Great Western company needed in that fall of 1840 was a second ship, as soon as possible, to compete with the newly established Cunard line,” but that is not what they would get.[14] The completed ship finally launched in 1843, but did not take to sea for a transatlantic voyage until July 1845, having already cost the company some £200,000 in total. With 322 feet of black iron hull driven by a 1000-horsepower Maudslay engine and a massive 36-ton propeller shaft, she dwarfed Great Western. Her all-iron construction gave an impression of gossamer lightness that fascinated a public used to burly wood.[15]

The Launching of the Great Britain.

But if her appearance impressed, her performance at sea did not. Her propeller fell apart, her engine failed to achieve the expected speed, and she rolled badly in a swell. After major, expensive renovations in the winter of 1845, she ran aground at the end of the 1846 sailing season at Dundrum Bay off Ireland. Her iron hull proved sturdier than the organization that had constructed it: by the time she was at last floated free in August 1847, the Great Western Steam Ship Company had already sunk. Another concern bought Great Britain for £25,000, and she ended up plying the route to Australia, operating mostly by sail.[16]

In the long run, Brunel and his partners were right that iron hulls and screw propulsion would surpass wood and paddles, but Great Britain failed to prove it. The upstart Inman steamer line launched the iron-hulled, screw-powered City of Glasgow in 1850, which did prove that the ideas behind Great Britain could be turned to commercial success. But the more conservative Cunard line did not dispatch its first iron-hulled ship on its maiden voyage until 1856. Though even larger than Great Britain, at 376 feet and 3600 tons, the Persia still sported paddlewheels. This did not prevent her from booking more passengers than any other steamship to date, nor from setting a transatlantic speed record.[17] Not until the end of the 1860s did oceanic paddle steamers become obsolete.

The Archimedes. Without any visible wheels, she looked deceptively like a typical sailing schooner, but for the telltale smokestack.

A Glorious Folly

For a time, Brunel walked away from shipbuilding. Then, late in 1851, he began crafting plans for a new liner to far surpass even Great Britain, one large enough to ply the routes to India and Australia without coaling stops on the African coast.
Stopping to refuel wasted time but also quite a lot of money: coal in Africa cost far more than in Europe, because another ship had to bring it there in the first place.[18] Because it would sail around Africa, not towards America, the new ship was christened Great Eastern. Monstrous in all its dimensions, the Great Eastern can only be regarded as a monster in truth, in the archaic sense of “a prodigy birthed outside the natural order of things”; it was without precedent and without issue.[19]

Given the total failure of Brunel’s last steam liner company, not to mention other examples of excessive exuberance in his past, such as an atmospheric railway project that shut down within a year, it is hard to conceive of how he was able to convince new backers to finance this wild new idea. He did have the help of one new ally, an ambitious Scottish shipbuilder named John Scott Russell, who was also wracked by career disappointment and eager for a comeback. Together they built an astonishing vessel: at 690 feet long and over 22,000 tons, it exceeded in size every other ship built up to its time, and also every other ship built in the balance of the nineteenth century. It would carry (in theory) 4,000 passengers and 18,000 tons of coal or cargo, and mount both paddlewheels and a propeller, the latter powered by the largest steam engine ever built, of 1600 horsepower.

Brunel died of a stroke in 1859, and never saw the ship take to sea. That is just as well, for it failed even more brutally than the Great Britain. It was slow, rolled badly, maneuvered poorly, and demanded prodigious quantities of labor and fuel.[20] Like Great Britain, after a brief service its owners auctioned it off to new buyers at a crushing loss. Great Eastern did, however, have still in its future a key role to play in the extension of British imperial and commercial power, as we shall see.

The Great Eastern in harbor in Wales in 1860. Note the ‘normal-size’ three-masted ship in the foreground for scale.

I have lingered on Brunel’s career for so long not because he was of unparalleled import to the history of the age of steam (he was not), but because his character and his ambition fascinate me. He innovated boldly, but rarely as effectively as his more circumspect peers, such as Samuel Cunard. Much—though certainly not all—of his career consists of glorious failure. Whether you, dear reader, emphasize the glory or the failure may depend on the width of the romantic streak that runs through your soul.

Internet Ascendant, Part 1: Exponential Growth

In 1990, John Quarterman, a networking consultant and UNIX expert, published a comprehensive survey of the state of computer networks. In a brief section on the potential future for computing, he predicted the appearance of a single global network for “electronic mail, conferencing, file transfer, and remote login, just as there is now one worldwide telephone network and one worldwide postal system.” But he did not assign any special significance to the Internet in this process. Instead, he assumed that the worldwide net would “almost certainly be run by government PTTs”, except in the United States, “where it will be run by the regional Bell Operating Companies and the long-distance carriers.” It will be the purpose of this post to explain how, in a sudden eruption of exponential growth, the Internet so rudely upset these perfectly natural assumptions.

Passing the Torch

The first crucial event in the creation of the modern Internet came in the early 1980s, when the Defense Communications Agency (DCA) decided to split ARPANET in two. The DCA had taken control of the network in 1975. By that time, it was clear that it made little sense for the ARPA Information Processing Techniques Office (IPTO), a blue-sky research organization, to be involved in running a network that was being used for participants’ daily communications, not for research about communication. ARPA tried and failed to hand off the network to private control by AT&T. The DCA, responsible for the military’s communication systems, seemed the next best choice.

For the first several years of this new arrangement, ARPANET prospered under a regime of benign neglect. However, by the early 1980s, the Department of Defense’s aging data communications infrastructure desperately needed an upgrade. The intended replacement, AUTODIN II, which DCA had contracted with Western Union to construct, was foundering. So DCA’s leaders appointed Colonel Heidi Heiden to come up with an alternative. He proposed to use the packet-switching technology that DCA already had in hand, in the form of ARPANET, as the basis for the new defense data network.

But there was an obvious problem with sending military data over ARPANET – it was rife with long-haired academics, including some who were actively hostile to any kind of computer security or secrecy, such as Richard Stallman and his fellow hackers at the MIT Artificial Intelligence Lab. Heiden’s solution was to bifurcate the network. He would leave the academic researchers funded by ARPA on ARPANET, while splitting the computers used at national defense sites off onto a newly formed network called MILNET. This act of mitosis had two important consequences. First, by decoupling the militarized and non-militarized parts of the network, it was the initial step toward transferring the Internet to civilian, and eventually private, control. Second, it provided the proving ground for the seminal technology of the Internet, the TCP/IP protocol, which had first been conceived half a decade before. DCA required all the ARPANET nodes to switch over to TCP/IP from the legacy protocol by the start of 1983. Few networks used TCP/IP at that point in time, but now it would link the two networks of the proto-Internet, allowing message traffic to flow between research sites and defense sites when necessary. To further ensure the long-term viability of TCP/IP for military data networks, Heiden also established a $20 million fund to pay computer manufacturers to write TCP/IP software for their systems (1).
This first step in the gradual transfer of the Internet from the military to private control provides as good an opportunity as any to bid farewell to ARPA and the IPTO. Its funding and influence, under the leadership of J.C.R. Licklider, Ivan Sutherland, and Robert Taylor, had produced, directly or indirectly, almost all of the early developments in interactive computing and networking. The establishment of the TCP/IP standard in the mid-1970s, however, proved to be the last time it played a central role in the history of computing (2).

The Vietnam War provided the decisive catalyst for this loss of influence. Most research scientists had embraced the Cold War defense-sponsored research regime as part of a righteous cause to defend democracy. But many who came of age in the 1950s and 1960s lost faith in the military and its aims due to the quagmire in Vietnam. That included Taylor himself, who quit IPTO in 1969, taking his ideas and his connections to Xerox PARC. Likewise, the Democrat-controlled Congress, concerned about the corrupting influence of military money on basic scientific research, passed amendments requiring defense money to be directed to military applications. ARPA reflected this change in funding culture in 1972 by renaming itself DARPA, the Defense Advanced Research Projects Agency.

And so the torch passed to the civilian National Science Foundation (NSF). By 1980, the NSF accounted for about half of federal computer science research spending in the U.S., about $20 million (3). Much of that funding would soon be directed toward a new national computing network, the NSFNET.

NSFNET

In the early 1980s, Larry Smarr, a physicist at the University of Illinois, visited the Max Planck Institute in Munich, which hosted a Cray supercomputer that it made readily available to European researchers. Frustrated at the lack of equivalent resources for scientists in the U.S., he proposed that the NSF fund a series of supercomputing centers across the country (4). The organization responded to Smarr and other researchers with similar complaints by creating the Office of Advanced Scientific Computing in 1984, which went on to fund a total of five such centers, with a total five-year budget of $42 million. They stretched from Cornell in the northeast of the country to San Diego in the southwest. In between, Smarr’s own university (Illinois) received its own center, the National Center for Supercomputing Applications (NCSA).

But these centers alone would only do so much to improve access to computer power in the U.S. Using the computers would still be difficult for users not local to any of the five sites, likely requiring a semester or summer fellowship to fund a long-term visit. And so NSF decided to also build a computer network. History was repeating itself – making it possible to share powerful computing resources with the research community was exactly what Taylor had in mind when he pushed for the creation of ARPANET back in the late 1960s. The NSF would provide a backbone that would span the continent by linking the core supercomputer sites, then regional nets would connect to those sites to bring access to other universities and academic labs. Here NSF could take advantage of the support for the Internet protocols that Heiden had seeded, by delegating the responsibility of creating those regional networks to local academic communities.
Initially, the NSF delegated the setup and operation of the network to the NCSA at the University of Illinois, the source of the original proposal for a national supercomputer program. The NCSA, in turn, leased the same type of 56 kilobit-per-second lines that ARPANET had used since 1969, and began operating the network in 1986. But traffic quickly flooded those connections (5). Again mirroring the history of ARPANET, it quickly became obvious that the primary function of the net would be communications among those with network access, not the sharing of computer hardware among scientists. One can certainly excuse the founders of ARPANET for not knowing that this would happen, but how could the same pattern repeat itself almost two decades later? One possibility is that it’s much easier to justify a seven-figure grant to support the use of eight figures’ worth of computing power than to justify dedicating the same sums to the apparently frivolous purpose of letting people send email to one another. This is not to say that there was willful deception on the part of the NSF, but that just as the anthropic principle posits that the physical constants of the universe are what they are because otherwise we couldn’t exist to observe them, so no publicly-funded computer network could have existed for me to write about without a somewhat spurious justification.

Now convinced that the network itself was at least as valuable as the supercomputers that had justified its existence, NSF called on outside help to upgrade the backbone with 1.5 megabit-per-second T1 lines (6). Merit Network, Inc. won the contract, in conjunction with MCI and IBM, securing $58 million in NSF funding over an initial five-year grant to build and operate the network. MCI provided the communications infrastructure, IBM the computing hardware and software for the routers. Merit, a non-profit that ran a computer network linking the University of Michigan campuses (7), brought experience operating an academic computer network, and gave the whole partnership a collegiate veneer that made it more palatable to NSF and the academics who used NSFNET. Nonetheless, the transfer of operations from NCSA to Merit was a clear first step towards privatization.

Traffic flowed through Merit’s backbone from almost a dozen regional networks, from the New York State Education and Research Network (NYSERNet), interconnected at Cornell in Ithaca, to the California Education and Research Federation Network (CERFNet – no relation to Vint Cerf), which interconnected at San Diego. Each of these regional networks also internetted with countless local campus networks, as Unix machines appeared by the hundreds in college labs and faculty offices. This federated network of networks became the seed crystal of the modern Internet. ARPANET had connected only well-funded computer researchers at elite academic sites, but by 1990 almost anyone in post-secondary education in the U.S. – faculty or student – could get online. There, via packets bouncing from node to node – across their local Ethernet, up into the regional net, then leaping vast distances at light speed via the NSFNET backbone – they could exchange email or pontificate on Usenet with their counterparts across the country. With far more academic sites now reachable via NSFNET than ARPANET, the DCA decommissioned that now-outmoded network in 1990, fully removing the Department of Defense from involvement in civilian networking.
Takeoff

Throughout this entire period, the number of computers on NSFNET and its affiliated networks – which we may now call the Internet (8) – was roughly doubling each year: 28,000 in December 1987, 56,000 in October 1988, 159,000 in October 1989, and so on. It would continue to do so well into the mid-1990s, at which point the rate slowed only slightly (9). The number of networks on the Internet grew at a similar rate – from 170 in July of 1988 to 3500 in the fall of 1991. The academic community being an international one, many of those networks were overseas, starting with connections to France and Canada in 1988. By 1995, the Internet was accessible from nearly 100 countries, from Algeria to Vietnam (10). Though it’s much easier to count the number of machines and networks than the number of actual users, reasonable estimates put that latter figure at 10-20 million by the end of 1994 (11).

Any historical explanation for this tremendous growth is challenging to defend in the absence of detailed data about who was using the Internet, for what, and at what time. A handful of anecdotes can hardly suffice to account for the 350,000 computers added to the Internet between January 1991 and January 1992, or the 600,000 in the year after that, or the 1.1 million in the year after that. Yet I will dare to venture onto this epistemically shaky ground, and assert that three overlapping waves of users account for the explosion of the Internet, each with their own reasons for joining, but all drawn by the inexorable logic of Metcalfe’s Law, which holds that the value (and thus the attractive force) of a network increases with the square of its number of participants.

First came the academic users. The NSF had intentionally spread computing to as many universities as possible. Now every academic wanted to be on board, because that’s where the other academics were. To be unreachable by Internet email, to be unable to see and participate in the latest discussions on Usenet, was to risk missing an important conference announcement, a chance to find a mentor, cutting-edge pre-publication research, and more. Under this pressure to be part of the online academic conversation, universities quickly joined the regional networks that could connect them to the NSFNET backbone. NEARNET, for example, which covered the six states of the New England region, grew to over 200 members by the early 1990s. At the same time, access began to trickle down from faculty and graduate students to the much larger undergraduate population. By 1993, roughly 70% of the freshman class at Harvard had edu email accounts. By that time the Internet had also become physically ubiquitous at Harvard and its peer institutions, which went to considerable expense to wire Ethernet into not just every academic building, but even the undergrad dormitories (12). It was surely not long before the first student stumbled into his or her room after a night of excess, slumped into their chair, and laboriously pecked out an electronic message that they would regret in the morning, whether a confession of love or a vindictive harangue.

In the next wave, the business users arrived, starting around 1990. As of that year, 1151 .com domains had been registered. The earliest commercial participants came from the research departments of high-tech companies (Bell Labs, Xerox, IBM, and so on). They, in effect, used the network in an academic capacity. Their employers’ business communications went over other networks.
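To get a feel for the Metcalfe’s Law dynamic invoked above, it helps to make it concrete. The snippet below is only an illustrative sketch of my own, not anything from the sources: it counts the possible pairwise connections among n hosts – n(n-1)/2, which grows roughly with the square of n – using the host counts quoted earlier in this post. Hosts are a crude proxy for participants, but the quadratic growth in potential connections is the point.

```python
# Illustrative only: Metcalfe's Law says a network's value grows roughly
# with the square of its participants. Counting the possible pairwise links
# among n hosts (n * (n - 1) / 2) shows why each doubling of the Internet
# more than quadrupled the number of parties any one user could reach.

def possible_links(n: int) -> int:
    """Number of distinct host-to-host pairings in a network of n hosts."""
    return n * (n - 1) // 2

# Approximate Internet host counts cited above.
host_counts = [(1987, 28_000), (1988, 56_000), (1989, 159_000)]

for year, hosts in host_counts:
    print(f"{year}: {hosts:>8,} hosts -> {possible_links(hosts):>16,} possible pairings")
```

Doubling the host count roughly quadruples the number of possible pairings – which is why each wave of newcomers made the network that much harder for the next group to ignore.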
By 1994, however, over 60,000 .com domain names existed, and the business of making money on the Internet had begun in earnest (13). As the 1980s waned, computers were becoming a part of everyday life at work and home in the U.S., and the importance of a digital presence to any substantial business became obvious. Email offered easy and extremely fast communication with co-workers, clients, and vendors. Mailing lists and Usenet provided both new ways of keeping up to date with a professional community, and new forms of very cheap advertising to a generally affluent set of users. A wide variety of free databases could be accessed via the Internet – legal, medical, financial, and political. New graduates arriving in the workforce from fully-wired campuses also became proselytes for the Internet at their employers. It offered access to a much larger set of users than any single commercial service (Metcalfe’s Law again), and once you paid a monthly fee for access to the net, almost everything else was free, unlike the marginal hourly and per-message fees charged by CompuServe and its equivalents. Early entrants to the Internet marketplace included mail-order software companies like The Corner Store of Litchfield, Connecticut, which advertised in Usenet discussion groups, and The Online Bookstore, an electronic book seller founded over a decade before the Kindle by a former editor at Little, Brown (14).

Finally came the third wave of growth, the arrival of ordinary consumers, who began to access the Internet in large numbers in the mid-1990s. By this point Metcalfe’s Law was operating in overdrive. Increasingly, to be online meant to be on the Internet. Unable to afford T1 lines to their homes, consumers almost always accessed the Internet over a dial-up modem. We have already seen part of that story, with the gradual transformation of commercial BBSes into commercial Internet service providers (ISPs). This change benefited both the users (whose digital swimming pool suddenly grew into an ocean) and the BBS itself, which could run a much simpler business as an intermediary between the phone system and a T1 on-ramp to the Internet, without maintaining its own services. Larger online services followed a similar pattern. By 1993, all of the major national-scale services in the U.S. – Prodigy, CompuServe, GEnie, and upstart America Online (AOL) – offered their 3.5 million combined subscribers the ability to send email to Internet addresses. Only laggard Delphi (with fewer than 100,000 subscribers), however, offered full Internet access (15). Over the next few years, though, the value of access to the Internet – which continued to grow exponentially – rapidly outstripped that of accessing the services’ native forums, games, shopping, and other content. 1996 was the tipping point – by October of that year, 73% of those online reported having used the World Wide Web, compared to just 21% a year earlier (16). The new term “portal” was coined to describe the vestigial residue of content provided by AOL, Prodigy, and others, to which people subscribed mainly to get access to the Internet.

The Secret Sauce

We have seen, then, something of how the Internet grew so explosively, but not quite enough to explain why. Why, in particular, did it become so dominant in the face of so much prior art, so many other services that were striving for growth during the era of fragmentation that preceded it? Government subsidy helped, of course.
The funding of the backbone aside, when NSF chose to invest seriously in networking as an independent concern from its supercomputing program, it went all in. The principal leaders of the NSFNET program, Steve Wolff and Jane Caviness, decided that they were building not just a supercomputer network, but a new information infrastructure for American colleges and universities. To this end, they set up the Connections program, which offset part of the cost for universities to get onto the regional nets, on the condition that they provide widespread access to the network on their campus. This accelerated the spread of the Internet both directly and indirectly. Indirectly, since many of those regional nets then spun off for-profit enterprises using this same subsidized infrastructure to sell Internet access to businesses. But Minitel had subsidies, too.

The most distinctive characteristic of the Internet, however, was its layered, decentralized architecture, and its attendant flexibility. IP allowed networks of a totally different physical character to share the same addressing system, and TCP ensured that packets were delivered to their destination. And that was all. Keeping the core operations of the network simple allowed virtually any application to be built atop it. Most importantly, any user could contribute new functionality, as long as they could get others to run their software. For example, file transfer (FTP) was among the most common uses of the early Internet, but it was very hard to find servers that offered files of interest for download except by word of mouth. So enterprising users built a variety of tools to catalog and index the net’s resources, such as Archie (for FTP servers), Gopher, and Veronica. The OSI stack also had this flexibility, in theory, and the official imprimatur of international organizations and telecommunications giants as the anointed internetworking standard. But possession is nine-tenths of the law, and TCP/IP held the field, with the decisive advantage of running code on thousands, and then millions, of machines.

The devolution of control over the application layer to the edges of the network had another important implication. It meant that large organizations, used to controlling their own bailiwick, could be comfortable there. Businesses could set up their own mail servers and send and receive email without all the content of those emails sitting on someone else’s computer. They could establish their own domain names, and set up their own websites, accessible to everyone on the net, but still entirely within their own control.

The World Wide Web – ah – that was the most striking example, of course, of the effects of layering and decentralized control. For two decades, systems from the time-sharing services of the 1960s through to the likes of CompuServe and Minitel had revolved around a handful of core communications services – email, forums, and real-time chat. But the Web was something new under the sun. The early years of the Web, when it consisted entirely of bespoke, handcrafted pages, were nothing like its current incarnation. Yet bouncing around from link to link was already strangely addictive – and it provided a phenomenally cheap advertising and customer support medium for businesses. None of the architects of the Internet had planned for the Web. It was the brainchild of Tim Berners-Lee, a British engineer at the European Organization for Nuclear Research (CERN), who created it in 1990 to help disseminate information among the researchers at the lab.
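What “building atop” the network meant in practice is easy to show. The sketch below is a minimal illustration of my own, not period code: it uses Python’s standard socket module and a placeholder hostname to make the kind of plain-text, application-layer request an early browser sent. All the application needs is a name and a TCP port; the routers, fibers, and exchange points underneath are invisible to it.

```python
# Illustrative sketch only (standard-library Python, placeholder hostname):
# an application-layer exchange riding on TCP/IP. The program asks DNS for
# an address, opens a TCP connection, and speaks a simple text protocol --
# here, a bare-bones HTTP request. Nothing in it knows or cares what
# physical networks carry the packets.
import socket

HOST, PORT = "example.com", 80  # placeholder web server, standard HTTP port

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    # TCP guarantees the bytes arrive intact and in order; IP gets them
    # across whatever networks happen to lie in between.
    sock.sendall(f"GET / HTTP/1.0\r\nHost: {HOST}\r\n\r\n".encode("ascii"))

    response = b""
    while chunk := sock.recv(4096):  # an HTTP/1.0 server closes when done
        response += chunk

print(response.split(b"\r\n")[0].decode())  # status line, e.g. "HTTP/1.1 200 OK"
```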
Yet the Web could easily rest atop TCP/IP, and re-use the domain name system, created for other purposes, for its now-ubiquitous URLs. Anyone with access to the Internet could put up a site, and by the mid-1990s it seemed everyone had – city governments, local newspapers, small businesses, and hobbyists of every stripe.

Privatization

In this telling of the story of the Internet’s growth, I have elided some important events, and perhaps left you with some pressing questions. Notably, how did businesses and consumers get access to an Internet centered on NSFNET in the first place – to a network funded by the U.S. government, and ostensibly intended to serve the academic research community? To answer this, the next installment will revisit some important events which I have quietly passed over, events which gradually but inexorably transformed a public, academic Internet into a private, commercial one.

[Previous] [Next]

Further Reading

Janet Abbate, Inventing the Internet (1999)

Karen D. Fraser, “NSFNET: A Partnership for High-Speed Networking, Final Report” (1996)

John S. Quarterman, The Matrix (1990)

Peter H. Salus, Casting the Net (1995)

Footnotes

Note: The latest version of the WordPress editor appears to have broken markdown-based footnotes, so these are manually added, without links. My apologies for the inconvenience.

1. Abbate, Inventing the Internet, 143.
2. The next time DARPA would initiate a pivotal computing project was with the Grand Challenges for autonomous vehicles of 2004-2005. The most famous project in between, the billion-dollar AI-based Strategic Computing Initiative of the 1980s, produced a few useful applications for the military, but no core advances applicable to the civilian world.
3. “1980 National Science Foundation Authorization, Hearings Before the Subcommittee on Science, Researce [sic] and Technology of the Committee on Science and Technology,” 1979.
4. Smarr, “The Supercomputer Famine in American Universities” (1982).
5. A snapshot of what this first iteration of NSFNET was like can be found in David L. Mills, “The NSFNET Backbone Network” (1987).
6. The T1 connection standard, established by AT&T in the 1960s, was designed to carry twenty-four telephone calls, each digitally encoded at 64 kilobits per second.
7. MERIT originally stood for Michigan Educational Research Information Triad. The state of Michigan pitched in $5 million of its own to help its homegrown T1 network get off the ground.
8. Of course, the name and concept of the Internet predate the NSFNET. The Internet Protocol dates to 1974, and there were networks connected by IP prior to NSFNET. ARPANET and MILNET we have already mentioned. But I have not been able to find any reference to “the Internet” – a single, all-encompassing, world-spanning network of networks – prior to the advent of the three-tiered NSFNET.
9. See this data. Given this trend, how could Quarterman fail to see that the Internet was destined to dominate the world? If the recent epidemic has taught us anything, it is that exponential growth is extremely hard for the human mind to grasp, as it accords with nothing in our ordinary experience.
10. These figures come from Karen D. Fraser, “NSFNET: A Partnership for High-Speed Networking, Final Report” (1996).
11. See Salus, Casting the Net, 220-221.
12. Mai-Linh Ton, “Harvard, Connected: The Houses Got Internet,” The Harvard Crimson, May 22, 2017.
13. IAPS, “The Internet in 1990: Domain Registration, E-mail and Networks;” RFC 1462, “What is the Internet;” Resnick and Taylor, The Internet Business Guide, 220.
14. Resnick and Taylor, The Internet Business Guide, xxxi-xxxiv. Pages 300-302 lay out the pros and cons of the Internet and commercial online services for small businesses.
15. Statistics from Rosalind Resnick, Exploring the World of Online Services (1993).
16. Pew Research Center, “Online Use,” December 16, 1996.

Interactive Computing: A Counterculture

In 1974, Ted Nelson self-published a very unusual book. Nelson lectured on sociology at the University of Illinois at Chicago to pay the bills, but his true calling was as a technological revolutionary. In the 1960s, he had dreamed up a computer-based writing system which would preserve links among different documents. He called the concept “hypertext” and the system to realize it (always half-completed and just over the horizon) “Project Xanadu.” He had become convinced in the process that his fellow radicals had computers all wrong, and he wrote his book to explain why. Among the activist youth of the 1960s counterculture, the computer had a wholly negative image as a bureaucratic monster, the most advanced technology yet for allowing the strong to dominate the weak. Nelson agreed that computers were mostly used in a brutal way, but offered an alternative vision for what the computer could be: an instrument of liberation. His book was really two books bound together, each with its own front cover—Computer Lib and Dream Machines—allowing the book to be read from either side until the two texts met in the middle. Computer Lib explained what computers are and why it is important for everyone to understand them, and Dream Machines explained what they could be, when fully liberated from the tyranny of the “priesthood” that currently controlled not only the machines themselves, but all knowledge about them. “I have an axe to grind,” Nelson wrote:

I want to see computers useful to individuals, and the sooner the better, without necessary complication or human servility being required. …THIS BOOK IS FOR PERSONAL FREEDOM AND AGAINST RESTRICTION AND COERCION. … A chant you can take to the streets: COMPUTER POWER TO THE PEOPLE! DOWN WITH CYBERCRUD![1]

If the debt Nelson’s cri de coeur owed to the 1960s counterculture wasn’t clear enough, Nelson made it explicit by listing his “Counterculture Credentials” as a writer, showman, “Onetime seventh-grade dropout,” “Attendee of the Great Woodstock Festival,” and more, including his astrological sign.[2]

The front covers of Ted Nelson’s “intertwingled” book, Computer Lib / Dream Machines.

Nelson’s manifesto is the most powerful piece of evidence for one popular way to tell the story of the rise of the personal computer: as an outgrowth of the 1960s counterculture. Surely more than geographical coincidence accounts for the fact that Apple Computer was born on the shores of the same bay where, not long before, Berkeley radicals had protested and Haight-Ashbury deadheads had partied? The common through line of personal liberation is clear, and Nelson was not the only countercultural figure who wanted to bring computer power to the people. Lee Felsenstein, a Berkeley engineering drop-out (and eventual graduate) with much stronger credentials in radical politics than Nelson, invested much of his time in the 1970s in projects to make computers more accessible, such as Community Memory, which offered a digital bulletin board via public computer terminals set up at several locations in the Bay Area. In Menlo Park, likewise, anyone off the street could come in and use a computer at Bob Albrecht’s People’s Computer Company. Both Felsenstein and Albrecht had clear and direct ties to the early personal computer industry, Felsenstein as a hardware designer and Albrecht as a publisher.
The two most seminal early accounts of the personal computer’s history, Steven Levy’s Hackers: Heroes of the Computer Revolution and Paul Freiberger and Michael Swaine’s Fire in the Valley: The Making of the Personal Computer, both argued that the personal computer came into existence because of people like Felsenstein and Albrecht (whom Levy called long-haired, West Coast “hardware hackers”) and their emphasis on personal liberation through technology. John Markoff extended this argument to book length with What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer. Stewart Brand put it succinctly in a 1995 article in Time magazine: “We Owe It All to the Hippies.”[3]

This story is appealing, but not quite right. The influence of countercultural figures in promoting personal computing was neither necessary nor sufficient to explain the sudden explosion of interest in the personal computer caused by the Altair. Not necessary, because the Altair existed primarily because of two people who had nothing to do with the radical left or hippie idealism: the Albuquerque Air Force veteran and electronics lover Ed Roberts, and the New York hobby magazine editor Les Solomon. Not sufficient, because it addresses only supply, not demand: why, when personal computers did become available, were there many thousands of takers out there looking to buy the personal liberation that men like Nelson and Albrecht were selling? These people were not, for the most part, hippies or radicals either. The countercultural narrative seems plausible when one zooms in on the activities happening around the San Francisco Bay, but the personal computer was a national phenomenon; orders for Altairs poured in to Albuquerque from across the country. Where did all of these computer lovers come from?

Getting Hooked

In the 1950s, researchers at a laboratory affiliated with MIT synthesized an electronic concoction that, in the decades to come, transformed the world. The surprising byproduct of work on an air defense system, it proved to be highly addictive, at least to those of a certain personality type: inquisitive and creative, but also fascinated by logic and mathematics.

The electronic computer, as originally conceived in the 1940s, emulated a room full of human computers. You provided it with a set of instructions for performing a complex series of calculations (a simulation of an atomic explosion, say, or the proper angle and explosive charge required to get an artillery piece to hit a target at a given distance) and then came back later to pick up the result. A “batch-processing” culture of computing developed around this model, in which computer users brought a program and data to the computer’s operators in the form of punched cards. The operators collected these cards into batches, fed them to the computer for processing, and later extracted the results on a new set of punched cards. The user then picked up the results and either walked away happy or (more often) noticed an error, scrutinized their program for bugs, made adjustments, and tried again. By the early 1960s, this batch-processing culture had become strongly associated with IBM, which had parlayed its position as the leader in mechanical data-processing equipment into dominance of electronic computing as well.
However, the military faced many problems that could not be pre-calculated and required an instantaneous decision, calling for a “real-time” computer that could provide an answer to one question after another, with seconds or less between each response. The first fusion of real-time problem solving with the electronic computer came in the form of a flight simulator project at MIT under the leadership of electrical engineer Jay Forrester, which, through a series of twists and turns and the stimulus of the Cold War, evolved into an air defense project with the backronym of Semi-Automatic Ground Environment (SAGE). Housed at Lincoln Laboratory, a government facility about thirty miles to the northwest of MIT, SAGE became a mammoth project that spawned an entirely new form of computing as an accidental side effect.

An operator interacting with a SAGE terminal with a light gun.

The SAGE system demanded a series of powerful computers (to be constructed by IBM), two for each of the air defense centers to be built across North America (one acted as a back-up in case the other failed). Each would serve multiple cathode-ray screen terminals showing an image of incoming radar blips, which the operator could select to learn more information and possibly marshal air defense assets against them. At first, the project leads assumed these computer centers would use vacuum tubes, the standard logic component for almost all computers throughout the 1950s. But the invention of the transistor offered the opportunity to make a smaller and more reliable solid-state computer. So, in 1955-56, Wesley Clark and Ken Olsen oversaw the design and construction of a small, experimental transistor-based computer, TX-0, as a proof-of-concept for a future SAGE computer. Another, larger test machine called TX-2 followed in 1957-58.[4]

The most historically significant feature of these computers, however, was the fact that, after being completed, they had no purpose. Having proved that they could be built, their continued existence was superfluous to the SAGE project, so these very expensive prototypes became Clark’s private domain, to be used more or less as he saw fit. Most computers operated in batch-processing mode because it was the most efficient way to use a very expensive piece of capital equipment, keeping it constantly fed with work to do. But Clark didn’t particularly care about that. Lincoln Lab computers had a tradition of hands-on use, going all the way back to the original flight simulator design, which was intended for real-time interaction with a pilot, and Clark believed that real-time access to a computer assistant could be a powerful means for advancing scientific research.[5]

The TX-0 at MIT, likely taken in the late 1950s.

And so, a number of people at MIT and Lincoln Lab got to have the experience of simply sitting down and conversing directly with the TX-0 or TX-2 computer. Many of them got hooked on this interactive mode of computing. The instant feedback from the computer when trying out a program, which could then be immediately adjusted and tried again, felt very much like playing a game or solving a puzzle. Unlike the batch-processing mode of computing that was standard by the late 1950s, in interactive computing the speed at which you got a response from the computer was limited primarily by the speed at which you could think and type. When a user got into the flow, hours could disappear like minutes.
J.C.R. Licklider was a psychologist employed to help with SAGE’s interface with its human operators. The experience of interacting with the TX-0 at Lincoln Lab struck him with the force of revelation. He thereafter became an evangelist for the power of interactive computers to multiply human intellectual power via what he called “man-computer symbiosis”:

    Men will set the goals and supply the motivations, of course, at least in the early years. They will formulate hypotheses. They will ask questions. They will think of mechanisms, procedures, and models. … The equipment will answer questions. It will simulate the mechanisms and models, carry out the procedures, and display the results to the operator. It will transform data, plot graphs … In general, it will carry out the routinizable, clerical operations that fill the intervals between decisions.[6]

Ivan Sutherland was another convert: he developed a drafting program called Sketchpad on the TX-2 at Lincoln Lab for his MIT doctoral thesis and later moved to the University of Utah, where he became the founding father of the field of computer graphics. Lincoln also shipped the TX-0, entirely surplus to its needs after the arrival of TX-2, to the MIT Research Laboratory of Electronics (RLE), where it became the foundation—the temple, the dispensary—for a new “hacker” subculture of computer addicts, who would finagle every spare minute they could on the machine, roaming the halls of the RLE well past midnight. The hackers compared the experience of being in total control of a computer to “getting in behind the throttle of a plane,” “playing a musical instrument,” or even “having sex for the first time”: hyperbole, perhaps, similar to Arnold Schwarzenegger’s famous claim about the pleasures of pumping iron.[7]

It is worth pausing to note here the extreme maleness of this group: not a single woman is mentioned among the MIT hackers in Steven Levy’s eponymous book on the topic. This is unsurprising, since very few women attended MIT; until 1960 they were technically allowed, but not encouraged, to enroll. But this severe imbalance of the sexes did not change much with time. Almost all the people who got hooked on computers as interactive computing spread beyond MIT were also men. It was certainly not the case that the computing profession as a whole was overwhelmingly male circa 1960: at that time women probably occupied a third or more of all programming jobs. But at the time, almost all of those jobs involved neatly coiffed business people running data-processing workloads in large corporate or government offices, not disheveled hackers clacking away at a console into the wee hours. For whatever reason, men showed a much greater predilection than women to get lost in the rational yet malleable corridors of the digital world, to enjoy using computers for the sake of using computers. This fact likely produced the eventual transformation of computer science into an overwhelmingly male field, a development we may revisit later in this story. But for now, back to the topic at hand.[8]

Minicomputers: The DIY Computer

While Clark was exploring the potential of computers as a scientific instrument, his engineering partner, Ken Olsen, saw the market potential for selling small computers like the TX-0. Having worked closely with IBM on the SAGE contract, he came away unimpressed with their bureaucratic inefficiency.
He thought he could do better, and, with help from one of the first venture capital firms and Harlan Anderson, another Lincoln alum, he went into business. Warned by the head of the firm to avoid the term “computer,” which would frighten investors with the prospect of an expensive uphill struggle against established players like IBM, Olsen called his company Digital Equipment Corporation, or DEC.[9] In 1957, Olsen set up shop in an old textile mill on the Assabet River about a half-hour west of Lincoln Lab. There the company remained until the early 1990s, at the end of Olsen’s tenure and the beginning of the company’s terminal decline. Olsen, an abstemious, church-going Scandinavian, stayed in suburban Massachusetts for nearly all of his adult life; he and his wife lived out their last years with a daughter in Indiana. It is hard to imagine someone who less embodies the free-wheeling sixties counterculture than Ken Olsen. But his business became the vanguard for and symbol of a computer counterculture, one that would raise a black flag of rebellion against the oppressive regime of IBM-ism and spread the joy of interactive computing far beyond MIT, sprinkling computer addicts across the country.

DEC began selling its first computer, the PDP-1 (for Programmed Data Processor), in 1959. Its design bore a fair resemblance to that of the TX-0, and it proved similarly addictive to young hackers when one was donated to MIT in 1961. A whole series of other models followed, but the most ground-breaking was the PDP-8, released in 1965: a computer about the size of a blue USPS collection box, for just $18,000. Not long after, someone (certainly not the straightlaced Olsen) began calling this kind of small computer a minicomputer, by analogy to the newly popular miniskirt. A DEC ad campaign described PDP-8 computers as “approachable, variable, easy to talk to, personal machines.”

A 1966 advertisement depicting various PDP-8 models juxtaposed with cuddly teddy bears. [Datamation, October 1966]

Up to that point, the small, relatively inexpensive computers that did exist typically stored their short-term memory on the magnetized surface of a spinning mechanical drum. This put a hard ceiling on how fast they could calculate. But the PDP-8 used fast magnetic core memory, bringing high-speed electronic computing within reach of even quite small science and engineering firms, departments, and labs. PDP-8s were also deployed as control systems on factory floors, and one was even placed on a tractor. They sold in large numbers for a computer—50,000, all told, over a fifteen-year lifespan—and became hugely influential, spawning a whole industry of competing minicomputer makers and later inspiring the design for Intel’s 4004 microprocessor.[10]

In the early 1960s, IBM, under Thomas Watson, Jr., established itself as the dominant manufacturer of mainframe computers in the United States (and therefore, in effect, the world). Its commissioned sales force cultivated deep relationships with customers, which lasted well beyond the closing of the deal. IBM users leased their machines on a monthly basis, and in return they got access to an extensive support and service network, a wide array of peripheral devices (many of which derived from IBM’s pre-existing business as a maker of punched-card processing machinery), system software, and even application software for common business needs like payroll and inventory tracking.
IBM expected its mainframe customers to have a dedicated data-processing staff, independent of the actual end users of the computer: people responsible for managing the computer’s hardware and software and their firm’s ongoing relationship with IBM.[11]

DEC culture dispensed with all of that; it became a counter-culture, representing everything that IBM was not. Olsen expected end users to take full ownership of their machine in every sense. The typical buyer was expected to be an engineer or scientist: an expert on their own needs, who could customize the system for their application, write their own software, and administer the machine themselves.

IBM had technical staff with the interest and skills needed to build interactive systems. Andy Kinslow, for example, led a time-sharing project (more on time-sharing shortly) at IBM in the mid-1960s; he wanted to give engineers like himself the hands-on-the-console experience that the MIT hackers had fallen in love with. But the eventual product, TSS/360, had serious technical limitations at launch in 1967 and was basically ignored by IBM afterwards.[12] This came down to culture: IBM’s product development and marketing were driven by the needs of their core data-processing customers, who wanted more powerful batch-processing systems with better software and peripheral support, not by the interests of techies and academics who wanted hands-on computer systems and didn’t mind getting their hands dirty. And so the latter bought from DEC and other smaller outfits. As an employee of Scientific Data Systems (another successful computer startup of the 1960s) put it:

    There was, of course, heavy spending on scientific research throughout the sixties, and researchers weren’t like the businessmen getting out the payroll. They wanted a computer, they were enchanted with what we had, they loved it like a Ferrari or a woman. They were very forgiving. If the computer was temperamental you’d forgive it, the way you forgive a beautiful woman.[13]

DEC customers included federally-funded laboratories, engineering firms, technical divisions of major corporate conglomerates, and, of course, universities. They worked predominantly on real-time projects in which a computer interacted directly with human users or with some kind of industrial or scientific equipment: doing on-demand engineering calculations for a chemical manufacturer, controlling tracing machinery for physics data analysis, administering experiments for psychological research, and more.[14] They shared knowledge and software through a community organization called DECUS, the Digital Equipment Computer Users’ Society.

IBM users had founded a similar organization, SHARE, in 1955, but it had a different culture from the start, one that derived from the data-processing orientation of IBM. SHARE’s structure assumed that each participating organization had a computing center, distinct from its other operational functions, and it was the head of that computing center who would participate in SHARE and collaborate with other sites on building systems software (operating systems, assemblers, and the like). The end users of computers, who worked outside the computing center, could not participate in SHARE at all, in the beginning.
At most DEC sites, no such distinction between users and operators existed.[15] My father, a researcher specializing in computerized medical records, was part of the DEC culture, and co-authored at least one paper for DECUS: C.J. McDonald and B. Bhargava, “Ambulatory Care Information Systems Written in BASIC-Plus,” DECUS Proceedings (Fall 1973).

Here he is pictured at top left, in 1973, in the terminal room for his research institute’s PDP-11. [Regenstrief Institute]

DECUS, like SHARE, maintained an extensive program library: for reading and writing to peripheral devices, assembling and compiling human-readable code into machine language, debugging running programs, calculating math functions not supported by hardware (e.g., trigonometric functions, logarithms, and exponents), and more. Maintaining the library required procedures for reviewing and distributing software: in 1963, for example, users contributed fifty programs, most of which were reviewed by at least two other users, and seventeen of which were certified by the DECUS Programming Committee.[16] Aflame with the possibilities of interactive computing to revolutionize their fields of expertise, from education to clinical medicine, DEC devotees sometimes found that their reach exceeded their grasp: at one DECUS meeting, Air Force doctor Joseph Mundie reminded “the computer enthusiasts,” with gentle understatement, “that even the PDP computer had a few shortcomings when making medical diagnoses.”[17]

Though none achieved the market share of DEC, a number of competing minicomputer makers also flourished in the late 1960s in the wake of the PDP-8. They included start-ups like Data General (founded by defectors from DEC, just up the Assabet River in Hudson, Massachusetts), but also established electronics firms like Honeywell, Hewlett-Packard, and Texas Instruments. Many thousands of units were sold, exposing many more thousands of scientists and engineers to the thrill of getting their hands dirty on a computer in their own lab or office. Even among the technical elite at MIT, administrators had considered the hackers’ playful antics with the TX-0 and PDP-1 in the late 1950s and early 1960s a grotesque “misappropriation of valuable machine time.” But department heads acquiring a small ten- or twenty-thousand-dollar computer had much less reason to worry about wastage of spare cycles, and even if they did, most lacked a dedicated operational staff to oversee the machine and ensure its efficient use. Users were left to decide for themselves how to use the computer, and they generally favored their own convenience: hands on, interactive, at the terminal. But even while minis were allowing thousands of ordinary scientists and engineers to enjoy the thrill of having an entire computer at their disposal, another technological development began spreading a simulacrum of that experience among an even wider audience.[18]

Time-Sharing: Spreading the Love

As we have already seen, a number of people got hooked on interactive computing in and around MIT by 1960, well before the PDP-8 and other cheaper computers became available. Electronic computers could perform millions of operations per second, but in interactive mode, all of that capacity sat unused while the human at the console was thinking and typing.
Most administrators—those with the responsibility for allocating limited organizational budgets—recoiled at the idea of allowing a six- or seven-figure machine to sit around idle, wasting that potential processing power, just to make the work of engineers and scientists a bit more convenient. But what if it wasn’t wasted? If you attached four, or forty, or four hundred terminals to the same computer, it could process the input from one user while waiting for the input from the others, or even process offline batch jobs in the intervals between interactive requests. From the point of view of a given terminal user, as long as the computer was not overloaded with work, it would still feel as if they had interactive access to their own private machine.

The strongest early proponent of this idea of time-sharing a computer was John McCarthy, a mathematician and a pioneer in artificial intelligence who came from Dartmouth College to MIT primarily to get closer access to a computer (Dartmouth had no computer of its own at the time). Unsatisfied with the long turnaround that batch processing imposed on his exploratory programming, he proposed time-sharing as a way of squaring interactive computing with the other demands on MIT’s Computation Center.[19] McCarthy’s campaigning eventually led an MIT group under Fernando “Corby” Corbató to develop the Compatible Time-Sharing System (CTSS)—so called because it could operate concurrently with the existing batch-processing operations on the Computation Center’s IBM computer. McCarthy also directed the construction of a rudimentary time-sharing system on a PDP-1 at Bolt, Beranek, and Newman, a consulting firm with close ties to MIT. This proved that a less powerful computer than an IBM mainframe could also support time-sharing (albeit on a smaller scale), and indeed even PDP-8s would later host their own time-sharing systems: a PDP-8 could support up to twenty-four separate terminals, if configured with sufficient memory.[20]

The most important next steps taken to extend the reach of time-sharing specifically, and interactive computing generally, occurred at McCarthy’s former employer, Dartmouth. John Kemeny, head of the Dartmouth math department, enlisted Thomas Kurtz, a fellow mathematician and liaison to MIT’s Computation Center, to build a computing center of their own at Dartmouth. But they would do it in a very different style.

Kemeny was one of several brilliant Hungarian Jews who fled to the U.S. to avoid Nazi persecution. Though of a younger generation than his more famous counterparts such as John von Neumann, Eugene Wigner, and Edward Teller, he stood out enough as a mathematician to be hired onto the Manhattan Project as a mere Princeton undergraduate in 1943. His partner, Kurtz, came from the Chicago suburbs, but also passed through Princeton’s elite math department, as a graduate student. He began doing numerical analysis on computers right out of college in the early 1950s, and his loyalties lay more with the nascent field of computer science than with traditional mathematics.

Kurtz (left) and Kemeny (right), inspecting a GE flyer for a promotional shot.

The pair started in the early 1960s with a small drum-based Librascope LGP-30 computer, operated in a hands-on, interactive mode. By this time both men were convinced that computers had acquired a civilizational import that would only grow.
Having seen undergraduates write successful programs in LGP-30 assembly, they also became convinced that understanding and programming computers should be a required component of a liberal education. This kind of expansive thinking about the future of computing was not unusual at the time: other academics were writing about the impact of computers on libraries, education, commerce, privacy, politics, and law. As early as 1961, John McCarthy was giving speeches about how time-sharing would lead to an all-encompassing computer utility that would offer a wide variety of electronic services, served up from computers to home and office terminals via the medium of the telephone network.[21]

Kurtz proposed that a new, more powerful computer be brought to Dartmouth and time-shared (at the suggestion of McCarthy), with terminals directly accessible to all undergraduates: the computer equivalent of an open-stack library. Kemeny applied his political skills (which would eventually bring him the presidency of the university) to sway Dartmouth’s leaders, while Kurtz secured grants from the NSF to cover the costs of a new machine. General Electric, which was trying to elbow its way into IBM’s market, agreed to a 60% discount on the two computers Kemeny and Kurtz wanted: a GE-225 mainframe for executing user programs and a Datanet-30 (designed as a message-switching computer for communication networks) for exchanging data between the GE-225 and the user terminals. They called the combined system the Dartmouth Time-Sharing System (DTSS). It did not only benefit Dartmouth students: the university became a regional time-sharing hub through which students at other New England colleges and even high schools got access to computing via remote terminals connected to DTSS by telephone: by 1971 this included fifty schools in all, encompassing a total user population of 13,000.[22]

Kemeny teaching Dartmouth students about the DTSS system in a terminal room.

Beyond this regional influence, DTSS made two major contributions of wider significance to the later development of the personal computer. The first was a new programming language called BASIC. Though some students had proved adept with machine-level assembly language, it was certainly too recondite for most. Both Kemeny and Kurtz agreed that to serve all undergraduates, DTSS would need a more abstract, higher-level language that students could compile into executable code. But even FORTRAN, the most popular language of the time in science and engineering fields, lacked the degree of accessibility they strove for. As Kurtz later recounted, by way of example, it had an “almost impossible-to-memorize convention for specifying a loop: ‘DO 100, I = 1, 10, 2’. Is it ‘1, 10, 2’ or ‘1, 2, 10’, and is the comma after the line number required or not?” They devised a more approachable language, implemented with the help of some exceptional undergraduates. The equivalent BASIC loop syntax, FOR I = 1 TO 10 STEP 2, demonstrates the signature feature of the language: the use of common English words to create a syntax that reads somewhat like natural language (a small illustrative sketch follows below).[23]

The second contribution was DTSS’s architecture itself, which General Electric borrowed to set up its own time-sharing services, not once but twice: the GE-235 and Datanet-30 architecture became GE’s Mark I time-sharing system, and a later DTSS design based on the GE-635 became GE’s Mark II time-sharing system.
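To make Kurtz’s point about approachability concrete, here is a minimal, hypothetical sketch in the style of early Dartmouth BASIC; it is not drawn from any historical listing, but simply uses the loop syntax quoted above to print the odd numbers from 1 to 9 alongside their squares:

    10 REM PRINT THE ODD NUMBERS FROM 1 TO 9 AND THEIR SQUARES
    20 PRINT "N", "N SQUARED"
    30 FOR I = 1 TO 10 STEP 2
    40 PRINT I, I*I
    50 NEXT I
    60 END

Even a reader who has never programmed can guess roughly what each numbered line does, which was exactly the point; the line numbers themselves also hint at how later hobbyists could type, read, and tinker with such listings by hand, a practice we will meet again shortly.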
By 1968, many firms had set up time-sharing computer centers to which customers could connect computer terminals over the telephone network, paying for time by the hour. Over 40% of this $70 million market (comprising tens of thousands of users) belonged to GE and its Dartmouth-derived systems. The paying customers included the Lakeside School in Seattle, whose Mothers’ Club raised the funds in 1968 to purchase a terminal with which to access a GE time-sharing center. Among the students exposed to programming in BASIC at Lakeside were eighth-grader Bill Gates and tenth-grader Paul Allen.[24]

Architecture of the second-generation DTSS system at Dartmouth, circa 1971.

GE’s marketing of BASIC through its time-sharing network accelerated the language’s popularity, and BASIC implementations followed for other manufacturers’ hardware, including DEC’s and even IBM’s. By the 1970s, helped along by GE, BASIC had established itself as the lingua franca of the interactive computing world. And what BASIC users craved, above all, were games.[25]

A Culture of Play

Everywhere that the culture of interactive computing went, play followed. This came in the obvious form of computer games, but also in a generally playful attitude towards the computer, with users treating the machine as a kind of toy and the act of programming and using it as an end in itself, rather than a means towards accomplishing serious business.

The most famous instance of this culture of play in the early years of MIT hacking came in the form of the contest of reflexes and wills known as Spacewar!. The PDP-1 was unusual for its time in having a two-dimensional graphical display in the form of a circular cathode-ray-tube (CRT) screen. Until the mid-1970s, most people who interacted with computers did so via a teletype. Originally invented for two-way telegraphic messaging, these machines could take in user input like a normal typewriter, send that input over the wire to a remote recipient (the computer in this case), and then automatically type out the characters received over the wire in response. Because of its origins in the SAGE air defense program, however, the MIT PDP-1 also came equipped with a screen designed for radar displays.

The MIT hackers had already exercised their playfulness in the form of several earlier games and graphical demos on the TX-0, but it was a hanger-on with no official university affiliation named Stephen “Slug” Russell who created the initial version of Spacewar!, inspired by the space romances of E.E. “Doc” Smith. The game reached a usable form by about February 1962, allowing two players controlling rocket ships to battle across the screen, hurling torpedoes at one another’s spaceships. Other hackers quickly added enhancements: a star background that matched Earth’s actual night sky, a sun with gravity, hyperspace warps to escape danger, a score counter, and more. The resulting game was visually exciting, tense, and skill-testing, encouraging the MIT hackers to spend many late nights blasting each other out of the cosmos.[26]

Spacewar!’s dependence on a graphical display limited its audience, but Stanford became a hotbed of Spacewar! after John McCarthy moved there in 1962, and its use is also well attested at the University of Minnesota. In 1970, Nolan Bushnell started his video game business (originally called Syzygy, later Atari) to create an arcade console version of the game, which he called Computer Space.
The game’s influence lasted into the 1990s, with the release of the game Star Control and its epic sequel (The Ur-Quan Masters), which introduced the classic duel around a star to my generation of hobbyists.[27]

The large majority of minicomputer users who lacked a screen did not, however, lack for games. Teletype games relied on text input and output, but they could be just as compelling, ranging from simple guessing games up to rich strategy games like chess. Enthusiasts exchanged paper tapes among themselves, but DECUS also helped to spread information about games and game programs among the DEC user base. The very first volume of the DECUS newsletter, DECUSCOPE, from 1962, contains an homage to Spacewar!, and a simple dice game appeared in the program library available to all members in 1964. By November 1969, the DECUS software catalog listed thirty-seven games and demos, including simple games like hangman and blackjack, but also more sophisticated offerings like Spacewar! and The Sumer Game, a Bronze Age resource-management simulation. The catalog of scientific and engineering applications, the primary reason for most owners to have a minicomputer in the first place, numbered fifty-eight.[28]

Playfulness could also be expressed in forms other than actual games. The MIT hackers, for example, wrote a variety of programs simply for the fun of it: a tinny music generator, an Arabic-to-Roman numeral converter, an “Expensive Desk Calculator” for doing simple arithmetic on the $120,000 PDP-1, an “Expensive Typewriter” for composing essays. Using the computer to efficiently achieve some real-world outcome did not necessarily enter their minds: many worked on tools for writing and debugging programs without much thought to using the tools for anything other than more play; often “the process of debugging was more fun than using a program you’d debugged.”

As the interactive computing culture expanded from minicomputers to time-sharing systems, fewer and fewer of its acolytes had the heightened taste and technical skill required to extract joy from the creation of compilers and debuggers; but many of these new users could create computer games in BASIC, and all could play them. By about 1970, BASIC gaming had become by far the most widespread culture of computer-based play (though not the only one; the University of Illinois / Control Data Corporation PLATO system, for example, constituted its own distinct sub-culture). As with the earlier minicomputer teletype games, almost all of these BASIC games had textual interfaces, because hardly anyone yet had access to a graphical display.

Dave Ahl, who worked at DEC as an educational marketing manager, began including code listings for BASIC games in his promotional newsletter, EDU. Some were of his own creation (like a conversion of The Sumer Game called Hammurabi); others were contributed by high school and college students using DEC systems at school. They proved so popular that DEC published a compilation in 1973, 101 BASIC Computer Games, which went through three printings. After leaving the company, Ahl wisely retained the rights, and went on to sell over a million copies to computer buyers in the 1980s.[29]

While many of these games were derivative of existing board or card games, others, like Spacewar!, created whole new forms of play, unique to the computer. Unlike Spacewar!, most of these were single-player experiences that relied on the computer to hide information, gradually revealing a novel world to the user as they explored.
Hide and Seek, for example, a simple game written by high school students about searching a grid for a group of hiders, evolved into a more complex searching game called Hunt the Wumpus, with many later variants. Computer addicts overlapped substantially with Star Trek fans, and so a genre of Star Trek strategy games also emerged. The most popular version, in which the player hunts Klingons across the randomly-populated quadrants of the galaxy, originated with Mike Mayfield, an engineer who originally wrote it for a Hewlett-Packard (HP) minicomputer (presumably the one he used at work). DECUS was not the only organization sharing program libraries, and Mayfield’s Star Trek became part of the HP library, from whence it found its way to Ahl, who converted it to BASIC. Other versions followed, such as Bob Leedom’s 1974 Super Star Trek.[30]

The practices of the BASIC gaming community made it very easy for gaming lineages to evolve in this way, because every game was distributed textually, as BASIC code. If you were lucky, you got a paper or magnetic tape from which you could automatically read the code into your computer’s memory. If not (if you wanted to try out a game from Ahl’s book, for example), you were in for hours of tedious and error-prone typing. But in either case, you had total access to the raw source code. You could read it, understand it, and modify it. If you wanted to make Ahl’s Star Trek slightly easier, you could modify the phaser subroutine on line 3790 to do more damage. If you were more ambitious, you could go to line 1270 and add a new command to the main menu—make an inspiring speech to the crew, perhaps?

A selection of the code listing for Civil War, a simulation game created by high school students in Lexington, Massachusetts in 1968, and included in Ahl’s 101 BASIC Computer Games book. Typing something like this into your own computer required a great deal of patience. [Ahl, 101 BASIC Computer Games, 81]

Perhaps the most prolific game author of the era, Don Daglow, got hooked on a DEC PDP-10 in 1971 through a time-sharing terminal installed in his dorm at Pomona College, east of Los Angeles. Over the ensuing years he authored his own version of Star Trek, a baseball game, a dungeon-exploration game based on Dungeons & Dragons, and more. His extended career owed to his extended time at Pomona, where he had consistent access to the computer: nine years in total as an undergraduate, graduate student, and then instructor.[31]

By the early 1970s, many thousands of people like Daglow had discovered the malleable digital world that lived inside of computers. If you could master its rules, it became an infinite erector set, out of which you could reconstruct an ancient, long-dead civilization, or fashion a whole galaxy full of hostile Klingons. But unlike Daglow, most of these computer lovers were kept at arm’s length from the object of their desire. Perhaps they could use the university computer at night while they were an undergraduate, but lost that privilege upon graduation a few years later. Perhaps they could afford to rent a few hours of access to a time-sharing service each week; perhaps they could visit a community computing center (like Bob Albrecht’s in Menlo Park); perhaps, like Mike Mayfield, they could cadge a few hours on the office computer for play after hours. But best of all would be a computer at home, to call their own, to use whenever the impulse struck. Out of such longings came the demand for the personal computer.
Next time we will look in detail at the story of how that demand was satisfied, and by whom.
